The program’s acquisition approach involves the conversion of Sikorsky S-92A helicopters into VH-92A presidential helicopters by incorporating a unique mission interior that accommodates government-provided equipment, such as communications and mission systems. The program is limiting modifications to the aircraft to avoid a costly airworthiness recertification and to reduce investment costs, delivery timelines, and execution risks. As we reported in March 2015, the Navy’s approach is to use mature technology; however, a fully configured mission communication system has yet to be tested in an aircraft. We reported last year that the VH-92A program continued to make progress by establishing a knowledge-based business case for entry into system development that included an approved cost, schedule, and performance baseline based on actions substantively in line with acquisition best practices. Demonstrating technology maturity, making trade-offs, having reasonable cost and schedule estimates, and holding a system-level preliminary design review (PDR) by the start of system development are all best practices. While the Navy’s deferral of the system-level PDR until after the start of system development deviated from acquisition best practices, we reported last year that a number of factors, such as the program’s reliance on mature technologies, selection of an in-production aircraft, and award of a fixed-price incentive contract, reflected reduced risk in the deferral. A significant risk mitigation factor in the Navy’s favor is its contract with Sikorsky, which includes a ceiling price that limits how much the Navy would have to pay under the contract. To maintain this advantage, the Navy will have to ensure that no requirements changes are made that would require it to negotiate a supplemental agreement for equitable adjustment to the contract. In the past, DOD has typically used cost-reimbursement contracts, in which the government generally pays all allowable costs incurred by the contractor. Recent legislation and defense policy now emphasize the use of fixed-price development contracts, where warranted, to limit the government’s exposure to cost increases. Since the start of development in 2014, the VH-92A program has generally progressed as planned. Through November 2015, Sikorsky had accomplished approximately $239.0 million (22 percent) in development work, leaving about $863.9 million (78 percent) in estimated work over the next 5 years. As of December 2015, Sikorsky indicated that nearly all of the developmental tasks expected to be accomplished by that point had been accomplished, at only slightly greater cost than anticipated. The program’s current estimate of total program cost shows no overall cost growth. Table 1 compares the program’s current estimated quantities and total costs (in fiscal year 2016 dollars) to the program’s estimates at the start of development. The contractor’s November 2015 estimate of the most likely cost at completion for its development effort, which represents a portion of the program’s total research and development cost, suggests a final contract price slightly over the contract’s target price (by less than 2 percent) but below its ceiling price. We evaluated the contractor’s data through October 1, 2015, and found that the contractor’s most likely estimate at completion based on the data at that time was not overly optimistic; in fact, it was slightly higher than our highest estimate.
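To make the development-cost arithmetic above concrete, the short sketch below recomputes the percent-complete figures and illustrates how a ceiling price caps the government’s exposure under a fixed-price incentive contract. The work figures come from this report; the target price, ceiling price, and estimate at completion are hypothetical placeholders, since the report states only that the contractor’s estimate was slightly over target (by less than 2 percent) and below the ceiling.

```python
# Sketch of the report's development-cost arithmetic. Work figures are from
# the report; the contract prices below are hypothetical placeholders.
work_done = 239.0    # $ millions of development work accomplished (Nov 2015)
work_to_go = 863.9   # $ millions of estimated remaining work

total = work_done + work_to_go
print(f"Total estimated development work: ${total:,.1f}M")
print(f"Percent complete: {work_done / total:.0%}")  # about 22 percent

# Fixed-price incentive logic: whatever the contractor's estimate at
# completion turns out to be, the government pays no more than the ceiling.
target_price = 1000.0    # hypothetical, $ millions
ceiling_price = 1150.0   # hypothetical, $ millions
contractor_eac = 1018.0  # hypothetical: under 2 percent over target

final_price = min(contractor_eac, ceiling_price)
over_target = (contractor_eac - target_price) / target_price
print(f"EAC is {over_target:.1%} over target; government pays at most ${final_price:,.1f}M")
```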
In addition, the program is currently on schedule. In the past year, the program successfully conducted its PDR and carried out a number of other significant development activities, including continued development of the mission communications system, the prime contractor’s taking custody of two S-92A aircraft, initial testing of one engineering development model (EDM) aircraft, and initiation of S-92A to VH-92A developmental model helicopter conversions. Though the program is early in development, with significant system integration and testing ahead, it is currently on track to accomplish key milestones as planned, including completing a critical design review in July 2016, making an initial production decision, and establishing an initial operational capability (see figure 1). The program passed a significant schedule milestone in August 2015 when it conducted its system-level PDR. The purpose of the PDR was to evaluate the VH-92A preliminary design, assess the likelihood that the design would meet requirements, and gauge readiness to move forward into detailed design. The PDR occurred 16 months after development start, 1 month ahead of the contractual date for the event. Among other issues, 12 requests for action were identified during the review, for example, the need to achieve complete alignment in weight management processes between the Naval Air Systems Command and the contractor. All of those requests were subsequently deemed successfully closed after review by Naval Air Systems Command personnel and concurrence of the submitters of those requests. On January 26, 2016, the PDR chairman closed the event, stating that the PDR was successful in presenting the program status and identifying areas of concern. During the past year, the program continued development of the VH-92A Mission Communications System (MCS), an executive communications suite utilizing existing off-the-shelf components that is to provide passengers and crew with access to on-board and off-board communications services. The government is developing the MCS design and providing it, along with some government-furnished equipment, to the contractor for integration into the presidential helicopters. Hardware components and architecture were previously defined, and the ongoing MCS efforts principally relate to developing communications and monitoring software and integrating the system into the aircraft. Last year, version 0.6 of the MCS software was provided to the Navy’s systems integration lab and to Lockheed Martin for use in setting up MCS wiring. The program subsequently released version 0.8 of the MCS software, which includes nearly full functionality (except for that relating to an inter-communications subsystem), and in December 2015, contractor engineers started loading the software at the contractor’s system integration lab for testing. Two more MCS software releases are currently anticipated: version 1.0, which is to provide full functionality including the inter-communications subsystem, is expected in April 2016, and at least one follow-on release is to incorporate subsequently identified corrections. In addition, the first of two EDM aircraft, which arrived at subcontractor Lockheed Martin’s Owego, New York, facility in December 2014, underwent subcontractor-led risk-reduction efforts, including installation of antennas, interference testing, and planning for the placement of the wiring needed for the government-furnished mission communications system.
The subcontractor used radios and antennas for the mission communications system, power supplies, and instrumentation in support of contractor testing. According to Sikorsky, the testing validated capability predictions. This testing consisted of 89.6 flight test hours and 44.6 ground test hours. Table 2 provides a profile of the total anticipated test effort to date, which is to utilize the two EDM aircraft and four subsequently developed system demonstration test article aircraft. In September 2015, the program’s first S-92A aircraft, which had been utilized by Lockheed Martin in its risk-reduction efforts, and a second S-92A aircraft were transferred to Sikorsky’s Stratford, Connecticut, facility for modification into the two planned EDM aircraft. The modification process was started ahead of schedule, reducing schedule risk. As is to be expected with a major system development effort, the program has faced a number of design, integration, and technical challenges as it has progressed, some preexisting and others realized during the course of development. Examples of the challenges the program is currently managing include design of the passenger doors; incorporation of titanium framing in the two initial aircraft; and meeting requirements relating to electromagnetic environmental effects (E3), electromagnetic pulse (EMP), and cybersecurity. Aircraft Door Design: Design of the VH-92A forward and rear doors has proven more challenging and taken longer than the contractor anticipated. For the VH-92A, the forward passenger door in Sikorsky’s S-92A helicopter configuration is being modified to include dual hand rails and timed entry lights. In addition, the VH-92A aircraft requires a second entrance and exit, necessitating the design of a new passenger door and stairs to replace the current S-92A rear ramp. Meeting the head clearance requirement for that door necessitated a larger door, increasing its weight. In addition, the weight of both doors went up in the process of redesigning the aircraft to meet other requirements. The increase in the doors’ weight, in combination with a requirement for single-person manual open and close capability, necessitated an unanticipated redesign of the doors’ counterbalance systems and also complicated latch design. Extensive design and structural analysis were needed to resolve those design issues and ensure the new design would not affect the overall airworthiness certification for the aircraft. According to Sikorsky, as of February 2016, 90 of 105 design drawings for the doors were complete, and schedule performance has begun to improve and should continue to improve as the remaining drawings are released. Titanium Framing: The two EDM aircraft are being retrofitted with titanium frames, and the remainder of the VH-92A fleet will come with the titanium frames incorporated as part of the Sikorsky S-92A production process. This will improve aircraft performance and fatigue life. As of January 2016, the machining of titanium frames for the first EDM aircraft had been completed and installation had begun. The frames for the second EDM aircraft are in the machining process and are expected to be completed by the end of the second quarter of 2016. The machining process, which involves drilling critical alignment holes into the titanium, has taken longer than anticipated.
The contractor realized that this effort would cause schedule delays and worked to mitigate this schedule risk by approving additional engineering and shop hours to ensure that the frames were properly machined and finished and would fit the aircraft upon installation. E3 and EMP: VH-92A aircraft must comply with both commercial and military standards pertaining to electromagnetic environmental effects. Achieving those standards involves consideration of the electromagnetic compatibility of equipment used on the aircraft and mitigation of electromagnetic interference caused by that equipment. In addition, developers of military systems, such as the presidential helicopters, may face additional requirements relating to the ability to survive the effects of an electromagnetic pulse. A number of techniques exist to harden aircraft against the effects of an EMP, for example, increasing shielding on equipment and wiring; such techniques are being considered and utilized by the VH-92A program. As the program has progressed, a greater understanding of the effort required to meet the specified level of EMP survivability has resulted in increased EMP-related work. According to an official from the office of the Director of Operational Testing and Evaluation, the program has been working to help identify what additional measures are needed for EMP survivability. EMP-related testing is underway to determine the exact additional measures needed, such as increased shielding or use of EMP limiters that protect electronics from EMP-induced power surges. As of December 2015, one area of concern was that some of the initially identified EMP limiters may not have provided the needed level of protection. However, the program has continued to work on this issue and believes it has identified workable solutions. Program officials believe these efforts have now resulted in a compliant design for protecting critical systems. Cybersecurity: VH-92A aircraft and systems must meet cybersecurity requirements. In 2014, after the program’s initial (June 2013) Test and Evaluation Master Plan was approved, a revised DOD cybersecurity policy and risk management framework were released. The program has subsequently been working to address the changes necessitated by the revised policy and framework, including actively pursuing a contract change to migrate from the certification required under the contract to the current certification standard. In addition, changes have been made to the program’s Test and Evaluation Master Plan to reflect the changed policy and framework. In his January 2016 PDR closeout assessment, the PDR chairman stated that a cost and scope analysis of the needed migration had occurred and that the change would be incorporated into the program baseline with minor impact. He further noted, however, that future evolution in the definition of cyber threats will remain a risk to the program as additional mandates are defined. The program’s efforts relating to a subcomponent of the government-developed VH-92A MCS, the Inter-Communication System (ICS), reflect the challenges associated with meeting updated requirements in support of airworthiness certifications. The ICS supplier is in the process of addressing a change to a Federal Aviation Administration (FAA) standard on software considerations in airborne systems and equipment certification. The standard is the primary means for meeting airworthiness requirements and obtaining approval of software used in civil aviation products.
In November 2015, the Navy, the contractor, and subject matter experts focused on possible approaches for the ICS supplier to successfully meet the updated standard. In this case, it was determined that the supplier’s previous efforts demonstrated the ability to provide the needed capability and that additional issue papers covering the differences between the old and revised standards would be sufficient. The ICS supplier revised its development schedule, and delivery of the ICS baseline software to Sikorsky and the Navy’s MCS system integration laboratory is now set for March 2016. Cost and Performance Trades: The program has also been helped by identifying opportunities to save cost and schedule that offset increased efforts such as those discussed above. For example, program officials explained that they identified an opportunity to remove a contractually required capability after the Marine Corps decided it provided no appreciable advantage. That capability is not inherent in the S-92A aircraft and would have had to be designed and integrated into the aircraft. It was a requirement that existed prior to the selection of the replacement helicopter. Subsequent consideration of the requirement, based on the operators’ concept of operations and the capabilities of the S-92A aircraft, led to a determination that the requirement did not provide an appreciable benefit. Officials explained that, given the desire to maximize the overall performance of the aircraft (range, power, etc.) and decrease the overall risk associated with integrating the associated capability, the requirement was removed from the contract. The contractor estimated that dropping this requirement eliminated about 20 percent of the total testing for the affected subsystem. Additionally, the Navy and contractor are currently in discussions on a downward adjustment to the contract price to reflect elimination of the requirement. Similarly, the contractor identified an opportunity to save time and money through a change in planned contractor testing. The VH-92A must be certified by the FAA and approved by the Navy for flight in moderate icing conditions, a certification the S-92A baseline aircraft already holds. It was originally thought, though, that icing-related flight testing would be needed to reflect changes made to the baseline aircraft’s outer body, such as the addition of antennas. However, based on the existing S-92A certifications and data gathered during testing done with the antennas on the first EDM aircraft at Lockheed Martin’s Owego facility, Sikorsky and FAA representatives subsequently determined that analysis would suffice toward obtaining FAA certification. This revised approach resulted in savings of approximately 2 months of schedule and $3 million in cost, both of which will be applied to other activities within the contract. An earned value management (EVM) system is a project management tool that integrates the technical scope of work with schedule and cost elements for investment planning and control. A well-planned schedule is another management tool that can help government programs use public funds effectively by specifying when work will be performed and measuring program performance against an approved plan. During our review of the program, we compared the prime contractor’s EVM system and its Integrated Master Schedule (IMS) to best practices. We found that Sikorsky’s EVM system and IMS substantially or fully met best practices.
To determine whether a contractor is executing the work planned within the funds and time budgeted, the prime contractor produces monthly reports detailing cost and schedule performance in an EVM system. Our research has identified a number of best practices and characteristics that are the basis of effective earned value management and that should result in reliable and valid earned value management data that can be used for making informed decisions. We examined Sikorsky’s EVM system in the context of the best practices from the GAO Cost Estimating and Assessment Guide and overall found that it fully or substantially met the three characteristics identified for a reliable EVM system: specifically, that the EVM system was comprehensive, that the data resulting from the system were reliable, and that program management utilized those data for decision-making purposes. See appendix III for a summary assessment of Sikorsky’s EVM practices compared to best practices. We also found that the program’s IMS substantially met the best practices for a reliable schedule. The success of a program depends, in part, on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. Such a schedule is necessary for government acquisition programs for many reasons. It provides not only a road map for systematic project execution but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program. An IMS provides a time sequence for the duration of a program’s activities and helps everyone understand both the dates for major milestones and the activities that drive the schedule. A program’s IMS is also a vehicle for developing a time-phased budget baseline. Moreover, it is an essential basis for managing tradeoffs between cost, schedule, and scope. Among other things, scheduling allows program management to decide between possible sequences of activities, determine the flexibility of the schedule according to available resources, predict the consequences of managerial action or inaction on events, and allocate contingency plans to mitigate risks. Our research has identified 10 best practices associated with effective schedule estimating that can be collapsed into 4 general characteristics (comprehensive, well-constructed, credible, and controlled) for sound schedule estimating. Overall, we found the program’s IMS is reliable, as it substantially met all four of the characteristics. See appendix IV for a more detailed assessment of the VH-92A program’s schedule estimate compared to best practices. While the program has made good progress, it is still early in development, with significant system integration and testing ahead. We will continue to monitor the presidential helicopter acquisition as it progresses. We are not making any recommendations in this report. DOD provided written comments on a draft of this report, which are reprinted in appendix V. In its written comments, DOD stated that it believes its efforts on this program are aligned with our best practices and that it will continue to monitor the program and ensure that mitigations are in place to address potential risk areas. We will also continue to monitor the program as it moves forward. DOD also provided technical comments, which were incorporated where appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Secretary of the Navy. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff contributing to this report are listed in appendix VI. To conduct this work, we analyzed program documents (including the acquisition strategy and contractor progress reports) and plans to determine how the program is progressing in terms of its cost, schedule, and performance, and how well the program is adhering to best practices. We interviewed program officials from the Navy’s Presidential Helicopter Program Office, as well as officials from the office of the Director of Operational Testing and Evaluation and the office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation, to discuss the status of the program. To develop the numbers on the cost and cycle time of the VH-92A program in table 1, we obtained and analyzed cost, quantity, and schedule data from the program’s Selected Acquisition Report and other information provided by the program. We converted all cost information to fiscal year 2016 dollars using conversion factors from the Department of Defense (DOD) Comptroller’s National Defense Budget Estimates for Fiscal Year 2016. Through discussions with DOD officials responsible for the database and by confirming selected data with the program office, we determined that the information obtained was sufficiently reliable for the purposes of this report. To understand potential program challenges and the steps taken to address those challenges, we examined program and contractor documents and other reports relating to the development effort. We also examined DOD’s risk management planning guidance and reviewed a copy of the program’s draft risk management plan and the contractors’ latest risk assessment. We discussed development challenges and risk management with VH-92A program officials and officials from the Sikorsky Aircraft Corporation and Lockheed Martin. To learn more about the program’s earned value management (EVM) system, we met with officials from the Defense Contract Management Agency, the government agency responsible for, among other things, ensuring the integrity of the contracting process, and reviewed its Program Assessment Reports to determine whether the prime contractor’s (Sikorsky’s) EVM system produced reports that met the criteria for reliable and valid EVM data. Our EVM analysis focused on Sikorsky’s Integrated Program Management Report data from September 2014 through October 2015 and the Integrated Master Schedule (IMS) dated October 2015, as well as interviews with the program office and supporting documentation. Specifically, we compared project documentation with EVM best practices as identified in GAO’s Cost Estimating and Assessment Guide. Our research has identified a number of best practices that are the basis of effective earned value management and should result in reliable and valid earned value management data that can be used for making informed decisions.
These best practices have been collapsed into three high-level characteristics of a reliable earned value management system:

Establish a comprehensive EVM system: If the EVM data are to be used to manage a program, the contractor’s (and subcontractors’) EVM system should be certified to ensure that it complies with the agency’s implementation of the American National Standards Institute guidelines. In addition to a certified system, an integrated baseline review must be conducted to ensure that the performance measurement baseline accurately captures all of the work to be accomplished. To develop the performance measurement baseline, an integrated network schedule should be developed and maintained. This schedule should reflect the program’s work breakdown structure, clearly show the logical sequencing of activities, and identify the resources necessary to complete the activities in order to develop the time-phased budget baseline. Lastly, there should be a rigorous EVM system surveillance program in place. Effective surveillance ensures that the contractor is following its own corporate processes and procedures and confirms that those processes and procedures continue to satisfy the American National Standards Institute guidelines.

Ensure that the data resulting from the EVM system are reliable: To ensure the data are reliable, it is important to make sure that the Integrated Program Management Report data make sense and do not contain anomalies that would make them invalid. If errors are not detected, the data will be skewed, resulting in bad decision-making. In addition to checking for data anomalies, the Integrated Program Management Report data should be consistent between the different formats. Reliable EVM data are important in order to generate estimates at completion. Managers should rely on EVM data to generate estimates at completion at least monthly. Estimates at completion are derived from the cost of work completed along with an estimate of what it will cost to complete all unaccomplished work (see the sketch below).

Ensure that the program management team is using earned value data for decision-making purposes: For EVM data to be useful, they must be reviewed regularly. Cost and schedule deviations from the baseline plan give management at all levels information about where corrective actions are needed to bring the program back on track or to update completion dates and estimates at completion. Management should focus on corrective actions and identify ways to manage cost, schedule, and technical scope to meet program objectives. Management also needs to ensure that the performance measurement baseline is updated as changes occur. Because changes are normal, the American National Standards Institute guidelines allow for incorporating changes into the performance measurement baseline. However, it is imperative that changes be incorporated into the EVM system as soon as possible to maintain the validity of the performance measurement baseline.

See appendix III for our summary assessment of the VH-92A program’s EVM data and practices compared to best practices. EVM data are considered reliable if the overall assessment ratings for each of the three characteristics are substantially or fully met. If any of the characteristics are not met, minimally met, or partially met, then the EVM data cannot be considered reliable.
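To illustrate the estimate-at-completion arithmetic described above, the following minimal sketch computes the standard earned value quantities. All numbers are invented for illustration; none are VH-92A program data.

```python
# Basic EVM relationships: variances, cost performance index, and a common
# estimate-at-completion formula. Values are illustrative only.
bcws = 250.0  # budgeted cost of work scheduled (planned value), $ millions
bcwp = 239.0  # budgeted cost of work performed (earned value), $ millions
acwp = 245.0  # actual cost of work performed, $ millions
bac = 1102.9  # budget at completion, $ millions

cost_variance = bcwp - acwp      # negative means over cost
schedule_variance = bcwp - bcws  # negative means behind schedule
cpi = bcwp / acwp                # cost efficiency of work done to date

# EAC = actuals to date plus remaining work, with the remainder burdened
# by cost performance to date (one common formula among several in use).
eac = acwp + (bac - bcwp) / cpi
print(f"CV = {cost_variance:+.1f}M, SV = {schedule_variance:+.1f}M, CPI = {cpi:.3f}")
print(f"Estimate at completion: ${eac:,.1f}M")
```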
We reviewed the program’s IMS and compared it to the GAO Schedule Assessment Guide. Our research has identified 10 best practices associated with effective schedule estimating that can be collapsed into 4 general characteristics for sound schedule estimating:

Comprehensive: A comprehensive schedule includes all activities for both the government and its contractors necessary to accomplish a project’s objectives as defined in the project’s work breakdown structure. The schedule includes the labor, materials, travel, facilities, equipment, and the like needed to do the work and depicts when those resources are needed and when they will be available. It realistically reflects how long each activity will take and allows for discrete progress measurement.

Well-constructed: A schedule is well-constructed if all its activities are logically sequenced with the most straightforward logic possible. Unusual or complicated logic techniques are used judiciously and justified in the schedule documentation. The schedule’s critical path represents a true model of the activities that drive the project’s earliest completion date, and total float accurately depicts schedule flexibility (see the sketch at the end of this appendix).

Credible: A credible schedule is horizontally traceable; that is, it reflects the order of events necessary to achieve aggregated products or outcomes. It is also vertically traceable: activities in varying levels of the schedule map to one another, and key dates presented to management in periodic briefings are in sync with the schedule. Data about risks and opportunities are used to predict a level of confidence in meeting the project’s completion date. The level of necessary schedule contingency and high-priority risks and opportunities are identified by conducting a robust schedule risk analysis.

Controlled: A schedule is controlled if it is updated periodically by trained schedulers using actual progress and logic to realistically forecast dates for program activities. It is compared against a designated baseline schedule to measure, monitor, and report the project’s progress. The baseline schedule is accompanied by a baseline document that explains the overall approach to the project, defines ground rules and assumptions, and describes the unique features of the schedule. The baseline schedule and current schedule are subject to a configuration management control process.

For our evaluations of the schedule estimates, when the tasks associated with the leading practices that define a characteristic were mostly or completely satisfied, we considered the characteristic to be substantially or fully met. When all four characteristics were at least substantially met, we considered a schedule estimate to be reliable. In addition, we interviewed agency and contractor officials to determine the methodology used to develop the IMS. To assess the schedule, we obtained and reviewed documentation, including the work breakdown structure. See appendix IV for our summary assessment of the VH-92A program’s schedule estimate compared to best practices. We conducted this performance audit from July 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
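As an aside on the critical-path and total-float concepts that underlie the well-constructed characteristic above, the toy sketch below runs a forward and backward pass over a four-activity network. The activities and durations are invented; a real IMS applies the same logic across thousands of linked tasks.

```python
# Forward/backward pass over a small activity-on-node network to compute
# earliest/latest dates and total float. Zero float marks the critical path.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
succs = {a: [b for b in preds if a in preds[b]] for a in preds}

es, ef = {}, {}                 # earliest start/finish
for a in ["A", "B", "C", "D"]:  # topological order
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

project_end = max(ef.values())
ls, lf = {}, {}                 # latest start/finish
for a in ["D", "C", "B", "A"]:  # reverse topological order
    lf[a] = min((ls[s] for s in succs[a]), default=project_end)
    ls[a] = lf[a] - durations[a]

for a in durations:
    total_float = ls[a] - es[a]
    status = "critical" if total_float == 0 else f"float = {total_float} days"
    print(f"{a}: ES={es[a]} EF={ef[a]} LS={ls[a]} LF={lf[a]} ({status})")
# A, B, and D have zero float (the critical path); C can slip 3 days.
```

A dangling activity, one with no successor, would show artificially high float in such an analysis, which is why the assessments in appendix IV flag dangling logic as a threat to the schedule’s forecasting ability.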
Presidential Helicopter Acquisition: Program Established Knowledge-Based Business Case and Entered System Development with Plans for Managing Challenges (GAO-15-392R, April 14, 2015)

Presidential Helicopter Acquisition: Update on Program’s Progress toward Development Start (GAO-14-358R, April 10, 2014)

Department of Defense’s Waiver of Competitive Prototyping Requirement for the VXX Presidential Helicopter Replacement Program (GAO-13-826R, September 6, 2013)

Presidential Helicopter Acquisition: Program Makes Progress in Balancing Requirements, Costs, and Schedule (GAO-13-257, April 9, 2013)

Presidential Helicopter Acquisition: Effort Delayed as DOD Adopts New Approach to Balance Requirements, Costs, and Schedule (GAO-12-381R, February 27, 2012)

Defense Acquisitions: Application of Lessons Learned and Best Practices in the Presidential Helicopter Program (GAO-11-380R, March 25, 2011)

Appendix III: Summary assessment of Sikorsky’s EVM practices compared to best practices. Each best practice is followed by our individual assessment.

The program has a certified EVM system. Substantially Met: The contractor’s EVM system has been rated acceptable, indicating that it generally complies with EVM system guidelines.

An Integrated Baseline Review was conducted to ensure the performance measurement baseline captures all of the work. Met: An Integrated Baseline Review was conducted in November 2014 that assessed the technical, schedule, resource, and cost risk associated with the program’s various control accounts.

The schedule reflects the work breakdown structure, the logical sequencing of activities, and the necessary resources. Substantially Met: The schedule has a consistent and well-defined work breakdown structure and is complex with few missing logic links; however, the schedule is not fully resource loaded.

EVM surveillance is being performed. Met: The Defense Contract Management Agency provides monthly reports regarding the prime contractor and their major subcontractor to the program office.

The data do not contain anomalies. Substantially Met: While the cost and schedule performance data are consistent between reporting formats, there are many data anomalies that are not explained in the Integrated Program Management Report narrative (Format 5).

The data are consistent between reporting formats. Met: There are no inconsistencies between the cost and schedule performance data and between the Integrated Program Management Report formats reported.

Estimates at completion are generated regularly. Substantially Met: The contractor estimate at completion is not overly optimistic; in fact, it is greater than the GAO estimate at completion range.

EVM data, including cost and schedule variances, are reviewed on a regular basis. Met: The program office and contractor review the cost and schedule variances and use that information to determine corrective actions for potential cost and schedule overruns.

Management uses EVM data to develop corrective action plans. Met: The program office uses the EVM data and variances as a basis to request additional resources.

The performance measurement baseline is updated as changes occur. Substantially Met: While the performance measurement baseline change process is clearly defined and changes are documented in the contractor’s monthly reports, there is no explanation for the change in the budget at complete in January 2015.

Appendix IV: Individual assessments of the VH-92A program’s integrated master schedule against the schedule best practices.

Substantially Met: While the work breakdown structure has a dictionary that defines all the tasks and is consistent between the program management documents and reports, there are cases of tasks that do not have unique names.

Partially Met: While the schedule contains some resources, the program office stated that the IMS is not fully resource loaded since this is not a requirement of the Integrated Program Management Report instructions.
However, the program office stated that they assess the resources (labor and materials) at the weekly integrated product team meetings.

Substantially Met: The durations were established taking into account available resources, productivity, and past experience. Additionally, the schedule accounts for holidays and the contractor’s and subcontractor’s non-work periods.

Substantially Met: The schedule is complex with few missing logic links. For the most part, extensive documentation of the logic anomalies exists; however, any dangling logic can interfere with network analysis and the forecasting ability of the schedule. Thus, the small relative number, but high absolute number, of dangling logic links precludes a fully met score.

Substantially Met: Clear waterfalls of driving paths to engineering development model (EDM) 1 and EDM 2 deliveries, as well as to Milestone C and program finish, exist within the schedule. Detailed documentation of how the critical path is derived is also discussed in the program reviews. However, long-duration testing activities are present in the EDM 1 and Milestone C paths, and there are some dangling activities that keep this best practice from being fully met.

Partially Met: The IMS does not have any negative float, and all float values are calculated in days. Although the schedule reflects many activities with high float values, valid justification exists for many. In some cases, it is clear why float is so large, such as high-level program milestones or level-of-effort activities not having a successor. However, there are instances of high float values that are derived from complete network logic that the program office ignores; in these cases, unreasonable float should be documented and explained.

Partially Met: The schedule aligns vertically with the contractor’s integrated program management reports. However, changes in dates for specific tasks do not show that the schedule is horizontally traceable.

Substantially Met: Schedule risk analyses have been performed. However, logic issues cause the schedule risk assessment to not be completely reliable.

Partially Met: While the schedule has no date anomalies, it does not maintain a document to track changes in the schedule’s logic or provide a schedule narrative that includes key details regarding how the schedule is updated.

Substantially Met: While the schedule’s government tasks are baselined and have an established process for variance measurement, there is no evidence of a schedule baseline document.

In addition to the contact named above, Bruce H. Thomas, Assistant Director; Bonita J. P. Oden, Analyst-in-Charge; William C. Allbritton; Stephanie M. Gustafson; Ozzy Trevino; Jennifer V. Leotta; Juana S. Collymore; Karen A. Richey; Hai V. Tran; Marie P. Ahearn; and Katherine S. Lenane made key contributions to this report.
The mission of the presidential helicopter fleet is to provide safe, reliable, and timely transportation for the President, Vice President, foreign heads of state, and other official parties as directed by the White House Military Office. The Navy plans to acquire VH-92A helicopters to replace its aging fleet. Initial delivery of VH-92A presidential helicopters is scheduled to begin in fiscal year 2020, with production ending in fiscal year 2023. Total program acquisition cost is estimated to be $5.1 billion. This is GAO's seventh report on the program since 2011. The National Defense Authorization Act for Fiscal Year 2014 included a provision that GAO report annually on the acquisition of the VH-92A aircraft. This report discusses (1) the program's cost, schedule, and performance status; (2) challenges it faces in system development; and (3) its adherence to acquisition best practices. To conduct the review, GAO examined program documents, including Navy, contractor, and on-site government program monitor reports. GAO also interviewed officials, reviewed the earned value management system, and assessed the integrated master schedule against GAO best practices. Since 2014, the VH-92A presidential helicopter program has generally progressed as planned. Through November 2015, the contractor accomplished approximately $239.0 million (22 percent) in development work, leaving about $863.9 million (78 percent) in estimated work over the next 5 years. As of December 2015, the prime contractor had accomplished nearly all of the expected developmental tasks at only slightly greater cost than anticipated. The program is currently on track to accomplish key development milestones as planned. In the past year, the program successfully conducted its preliminary design review and carried out a number of other significant development activities, including continued development of the mission communications system, delivery and initial testing of aircraft for risk-reduction activities, and initiation of the conversion of Sikorsky S-92A helicopters into VH-92A developmental models. As expected with a major system development effort, the program faces a number of design and technical challenges, some preexisting and others realized during the course of development. Those challenges include designing passenger doors, incorporating titanium framing in the two initial aircraft, and meeting requirements relating to electromagnetic environmental effects and cybersecurity. The program took advantage of capability and testing trades that produced cost and schedule savings. For example, the program was able to reduce physical testing by relying on existing information about the aircraft's performance, supplemented by additional information collected during testing and through modeling. When assessed against best practices, GAO found that the contractor's earned value management system, a project management tool for investment planning and control, fully or substantially met the three characteristics for a reliable earned value management system. Similarly, in assessing the program's integrated master schedule against best practices, GAO found that it substantially met all four of the characteristics required for a reliable schedule. GAO is not making recommendations in this report.
In commenting on a draft of this report, DOD stated that it believes its efforts on this program are aligned with GAO's best practices and it will continue to monitor the program and ensure that mitigations are in place to address potential risk areas. GAO will also continue to monitor the program as it moves forward.
Over the past few years, both IRS’ Office of Internal Audit and we have reported on the need for improved controls to protect against unauthorized accesses of taxpayer data by IRS employees. In October 1992, Internal Audit reported that IRS had limited ability to prevent unauthorized accesses and detect such accesses once they had occurred. In September 1993, we reported that IRS did not adequately monitor the activities of thousands of employees who were authorized to read and change taxpayer files. We noted that the greatest risk involved IRS’ Integrated Data Retrieval System (IDRS), which is the primary computer system used by IRS employees to access and adjust taxpayer accounts. In 1994, IRS implemented an automated tool—the Electronic Audit Research Log (EARL)—to monitor and detect unauthorized accesses to data on IDRS. In August 1995, we reported that IRS had taken some actions to, among other things, restrict account access and analyze computer usage. We concluded, however, that IRS still lacked sufficient safeguards to prevent or detect unauthorized accesses of taxpayer information. We noted, for example, that security reports issued to monitor and identify unauthorized accesses were cumbersome and virtually useless to managers responsible for ensuring computer security. In 1996, IRS implemented enhancements to EARL that were designed to improve the quality of data being provided to managers. In April 1997, we reported on continuing shortcomings in IRS’ efforts to prevent unauthorized access to confidential taxpayer data. We noted, for example, that IRS did not (1) monitor all employees with access to automated systems and data for evidence of unauthorized access, (2) consistently investigate cases involving unauthorized access, and (3) consistently discipline employees who accessed taxpayer data without authorization. Because of continuing concerns about unauthorized accesses, Public Law 105-35 was signed into law on August 5, 1997. The law made willful unauthorized inspection of taxpayer data illegal. The law provides that a person convicted of unauthorized access shall be subject to a fine of up to $1,000, or imprisonment of not more than 1 year, or both, together with the costs of prosecution. The law also states that an officer or employee of the United States who is convicted of any such violation shall, in addition to any other punishment, be dismissed from office or discharged from employment. In cases where a person is criminally charged with unauthorized access, the law requires that the Secretary of the Treasury notify the taxpayer whose tax information was accessed. To achieve our objectives, we interviewed officials from IRS’ Centralized Case Development Center (CCDC), CAU, Office of Systems Standards and Evaluation, and Office of Chief Inspector (we discuss the role of each of these offices later in the report); visited the CCDC in Cincinnati, OH, to observe its operations; analyzed data runs that IRS produced at our request as well as IRS management information system reports on unauthorized access; and reviewed IRS reports and documentation on unauthorized access. Other than checking for consistency, we did not verify the reliability of statistical data provided by IRS. We also did not assess the effectiveness of the various actions taken by IRS since enactment of Public Law 105-35. We requested comments on a draft of this report from IRS and the Acting Treasury Inspector General for Tax Administration. Their comments are discussed near the end of this letter. 
We did our work from October 1998 through January 1999 in accordance with generally accepted government auditing standards. In an August 1997 report on controlling unauthorized access to taxpayer records, IRS concluded that, in the long run, the best solution was to modernize core IRS systems. According to IRS, modernization will (1) allow it to restrict employees’ access to only those taxpayer records that they have a specific work-related reason to look at and (2) enable it to detect unauthorized accesses almost as soon as they happen. However, IRS does not expect to implement those modernization efforts for several years. In the meantime, IRS has taken several steps directed at deterring, preventing, and detecting unauthorized access and ensuring that appropriate disciplinary action is taken when unauthorized access is proven. Some of IRS’ actions are intended to deter unauthorized access (i.e., keep employees from trying to access taxpayer data without authorization). These actions focus on awareness. In an attempt to make certain that all employees are explicitly informed about unauthorized access and the related penalties, IRS, among other things:

adopted a policy that proven instances of unauthorized access will result in removal from IRS, absent any extenuating circumstances;

sent a memo in October 1997 to all IRS employees that discussed, among other things, the penalties associated with unauthorized access;

started giving annual agencywide briefings in November 1997 to inform all employees of IRS’ unauthorized access policy and the penalties for violations;

created a form that is to be signed by employees and managers to acknowledge attendance at a briefing and receipt of guides on what constitutes unauthorized access;

created a policy that all employees who join or return to IRS after the annual awareness briefings have been administered will be given their briefing within 30 days;

developed a standard message to be given in all training courses in which access to tax information is discussed;

developed a video and guides on unauthorized access to ensure that managers deliver a consistent message in briefing employees; and

established an unauthorized access steering committee and unauthorized access support team to address questions and issues raised by employees and managers.

Other IRS actions are intended to prevent unauthorized access (i.e., stop employees who intentionally or unintentionally try to access taxpayer data without authorization). In that regard, according to IRS, the most effective way to safeguard against unauthorized access is to build controls into automated systems that prevent employees from accessing information they have no need to access. However, according to IRS, its current systems cannot be effectively modified to provide the “need to know” environment that allows employees to access taxpayers’ records only when they have a work-related reason to do so. IRS expects to correct this situation as part of its long-term systems modernization effort. In the meantime, IRS has taken some steps to prevent unauthorized access. For example, IRS (1) has incorporated blocks into its systems to prevent employees from accessing their own records and, in some cases, the records of their spouses or ex-spouses and (2) is reviewing the access rights given to individual employees to ensure that they do not have greater access to tax data than is necessary to do their work.
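The preventive blocks just described amount to an access check at the point of account lookup. The sketch below is a hypothetical illustration of that concept (the function, data structures, and identifiers are all invented); it is not IRS’ actual IDRS logic, which, as noted above, cannot yet enforce a full “need to know” environment.

```python
# Hypothetical access check: deny blocked accounts (e.g., the employee's
# own or a spouse's), and otherwise allow only assigned-workload accounts.
def access_allowed(employee_id: str, taxpayer_id: str,
                   blocked: dict[str, set[str]],
                   assignments: dict[str, set[str]]) -> bool:
    if taxpayer_id in blocked.get(employee_id, set()):
        return False  # explicitly blocked: self, spouse, or ex-spouse
    # "Need to know": only accounts in the employee's assigned inventory.
    return taxpayer_id in assignments.get(employee_id, set())

blocked = {"emp1": {"tin-emp1", "tin-spouse1"}}
assignments = {"emp1": {"tin-12345", "tin-67890"}}
print(access_allowed("emp1", "tin-12345", blocked, assignments))  # True
print(access_allowed("emp1", "tin-emp1", blocked, assignments))   # False
print(access_allowed("emp1", "tin-99999", blocked, assignments))  # False
```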
Until February 1999, when a new system was implemented, IRS depended primarily on EARL to identify potential instances of unauthorized access through after-the-fact analysis of accesses to data on IDRS. EARL used data analysis techniques based on a few known patterns of abuse to identify potential cases of unauthorized access. When EARL was run, it created lists of potential violations (leads) that were provided to analysts at each of IRS’ 10 service centers. The analysts were responsible for researching the lists to determine whether the leads warranted further investigation. An IRS study done in August 1997 concluded that the just-described process did not provide the consistent approach needed to support IRS’ policy on unauthorized access of taxpayer records. According to the study report and IRS officials, (1) there was a lack of uniformity in the output produced by EARL because each service center had developed its own computer programs, (2) most EARL leads required labor-intensive research to determine whether unauthorized access likely took place, and (3) each service center had developed its own techniques for developing EARL cases. To correct this lack of a consistent approach to unauthorized access, IRS (1) centralized responsibility for identifying and investigating potential instances of unauthorized access in CCDC, which is located within IRS’ Office of the Chief Inspector, and (2) developed a new automated tool to provide better unauthorized access detection capabilities. In May 1997, the Acting Commissioner of Internal Revenue transferred responsibility for the detection of unauthorized access to the Office of the Chief Inspector. In October 1997, the Chief Inspector created CCDC, which is responsible for identifying potential cases of unauthorized access and determining whether they warrant further investigation by the Internal Security Division in the Office of the Chief Inspector. CCDC became operational in February 1998, when it began assuming responsibility from the 10 service centers for analyzing unauthorized access leads. The transition was completed in September 1998. Since then, CCDC has been responsible for reviewing all leads and deciding which, if any, should be referred to Internal Security. The CCDC staff includes forensic data analysts, security analysts, computer programmers, and criminal investigators. In addition to referrals from CCDC, Internal Security also receives allegations of unauthorized access from various other sources, such as calls to the Inspection Service integrity hotline from IRS employees, taxpayers, and tax practitioners. IRS, in February 1999, implemented the Audit Trail Lead Analysis System (ATLAS) to replace EARL. IRS officials stated that ATLAS is an improvement over EARL because ATLAS (1) will provide better unauthorized access detection capabilities and (2) is a national system that will not be subject to local modifications and practices by the 10 service centers. These improvements, according to the Director of CCDC and IRS documents, will produce leads that are more indicative of potential unauthorized access than those produced by EARL. For example, according to IRS, ATLAS is programmed to do an exact match between an employee’s name and the names of taxpayers whose tax information the employee has accessed. EARL’s name match component, on the other hand, only matched the first six characters of the last names.
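The difference between the two matching rules can be shown in a few lines. The sketch below is illustrative only, with invented names; its point is that a six-character prefix match flags near-miss surnames that an exact match would not, which helps explain why EARL’s name-match leads required labor-intensive research.

```python
# EARL-style prefix match (first six characters of the last name) versus
# ATLAS-style exact match, per the report's description of the two systems.
def earl_match(employee_last: str, taxpayer_last: str) -> bool:
    return employee_last[:6].upper() == taxpayer_last[:6].upper()

def atlas_match(employee_last: str, taxpayer_last: str) -> bool:
    return employee_last.upper() == taxpayer_last.upper()

employee = "Anderson"
for taxpayer in ["Anderson", "Andersen", "Anders", "Smith"]:
    print(f"{taxpayer:9} EARL={earl_match(employee, taxpayer)} "
          f"ATLAS={atlas_match(employee, taxpayer)}")
# EARL flags Anderson, Andersen, and Anders as leads; ATLAS flags only
# the exact match, Anderson.
```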
According to IRS data, of the 5,468 total leads received by the Office of the Chief Inspector between October 1, 1997, and November 30, 1998, EARL’s match of the first six characters of an employee’s name accounted for 3,793 (69.4 percent). However, of the 338 closed leads that were referred to Internal Security for investigation, only 67 (19.8 percent) were generated by EARL’s name match. According to the Director of CCDC and IRS documents, ATLAS’ increased precision in matching names should result in fewer leads (because there will now have to be an exact match of last names), but these leads should be more indicative of potential unauthorized access. Although IRS has taken several steps to identify unauthorized accesses involving IDRS, it has done little to detect accesses involving the estimated 130 other information systems that contain taxpayer information. IRS does not have a system such as EARL or ATLAS to analyze accesses involving these other systems. IRS officials in the Systems Standards and Evaluation Office (the office with overall responsibility for security and privacy within IRS) informed us that this problem is to be corrected as part of IRS’ long-term systems modernization efforts. However, these efforts will not be implemented for several years. Meanwhile, according to these officials, they have been looking at the controls in these various information systems to prevent unauthorized access. They also said that they depend on the supervisors of employees who use non-IDRS information systems to be on the alert for unauthorized access. In August 1997, the Office of Systems Standards and Evaluation reported that the handling and tracking of unauthorized access cases had not been consistent. The report further stated that IRS would be better served if key operations were centralized to establish consistency and timeliness in developing cases, making decisions on levels of evidence for removals and legal actions, processing and implementing removals, and tracking and reporting cases. To deal with inconsistencies in case handling and tracking, IRS created CAU within the Labor Relations Office in the National Office. CAU, which became operational in October 1997, is responsible for tracking and reporting the status of all unauthorized access cases; preparing paperwork for all cases, including those in which unauthorized access was not proven; forwarding paperwork on cases to the heads of the appropriate offices for clearance or disciplinary action; and providing consultative support to management in the administration of discipline. To further ensure that consistent disciplinary actions are imposed for proven cases of unauthorized access, the Systems Standards and Evaluation Office is tasked with reviewing those actions. Between October 1, 1997, and November 30, 1998, the Office of the Chief Inspector received 5,468 leads (information indicating potential unauthorized accesses) and completed work on 4,392 of those leads. Of the 4,392 closed leads, 338 (8 percent) resulted in referrals to Internal Security. Table 1 shows the disposition of the 4,054 closed leads not referred for investigation. During the period covered by our review, EARL accounted for a large majority of the leads received by the Office of the Chief Inspector and most of the cases referred to Internal Security for further investigation.
Of the 5,468 leads received by the Chief Inspector during the 14 months ending November 30, 1998, 4,742, or 87 percent, were generated by EARL. The other 13 percent came from other sources, such as complaints from taxpayers and IRS employees. Although most of the leads referred to Internal Security also came from EARL (about 56 percent of the 338 referrals), other sources of leads proved to be more productive. In that regard, of the EARL leads closed by the Office of the Chief Inspector, 5 percent were referred for investigation, compared with about 22 percent of the leads from other sources. Between October 1, 1997, and November 30, 1998, according to IRS’ data, Internal Security opened at least 139 investigations of cases in which unauthorized access was alleged to have occurred after passage of Public Law 105-35. As of November 30, 1998, Internal Security had completed 86 of these investigations, while the other 53 investigations were still ongoing. From October 1, 1997, to January 25, 1999, according to IRS, Internal Security sent CAU 64 cases for adjudication in which unauthorized access was alleged to have taken place after enactment of Public Law 105-35. As of January 25, 1999, action had been completed on 36 of these cases, and 28 remained open. In 15 of the 36 completed cases, IRS determined that an intentional unauthorized access had occurred. Of the remaining 21 cases, IRS determined that 14 involved no unauthorized access, 6 involved accidental accesses that were reported by the employees to their supervisors in accordance with established procedures, and 1 involved an accidental access that was not reported in accordance with established procedures. In the latter case, the employee was reprimanded. As shown in table 2, of the 15 proven intentional unauthorized accesses, 10 involved service center employees, and 5 involved district office employees. According to IRS, the offending employees in those 15 cases either resigned in lieu of termination or were terminated. According to IRS data, proven cases of unauthorized access that occurred after enactment of Public Law 105-35 have generally been referred to U.S. Attorneys for possible prosecution. In almost every case, according to IRS data, the U.S. Attorney declined to prosecute. As of February 2, 1999, one case had been accepted for prosecution. According to IRS, although the case was still open, the employee had been removed from the agency. Pursuant to the law, IRS notified the three taxpayers whose data the employee had accessed. We obtained written comments on a draft of this report from IRS’ Chief Information Officer (see app. I) and the Acting Treasury Inspector General for Tax Administration (see app. II). The Chief Information Officer said that IRS agreed with the information in our report. He emphasized that, in some regards, IRS’ current weaknesses are associated with its aging systems and that these weaknesses will be corrected as part of IRS’ long-term systems modernization plans. In the meantime, according to the Chief Information Officer, IRS (1) has initiated actions to block employees’ access to more taxpayer accounts than they are currently restricted from accessing and (2) is reviewing the feasibility of incorporating audit trail records from systems other than IDRS into ATLAS. The Acting Treasury Inspector General for Tax Administration said that the report provides a good summary of the actions taken by his office (formerly the Office of the Chief Inspector) to implement the provisions of Public Law 105-35.
He said that the identification and investigation of unlawful accesses of taxpayer information has been and will remain a high priority of his office. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Senator William V. Roth, Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance; and Representative Bill Archer, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means. We will also send copies to the Honorable Robert E. Rubin, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Jacob Lew, Director, Office of Management and Budget; and Mr. Lawrence W. Rogers, Acting Treasury Inspector General for Tax Administration. Copies will be made available to others upon request. Major contributors to this report were David J. Attianese, Assistant Director; and John Lesser, Evaluator-in-Charge. Please contact me on (202) 512-9110 if you have any questions.
Pursuant to a congressional request, GAO provided information on the Internal Revenue Service's (IRS) implementation of the Taxpayer Browsing Protection Act, focusing on: (1) actions IRS has taken to implement the law; and (2) the number of potential and proven incidents of unauthorized access by IRS employees that IRS has identified since enactment of the law, as well as penalties imposed in cases where unauthorized access was proven. GAO noted that: (1) the IRS has two approaches for implementing the law; (2) over the long term, IRS believes that modernizing its core automated systems offers the best means to prevent and detect unauthorized access to taxpayer data; (3) according to IRS, modernization will: (a) allow it to restrict employees' access to only those taxpayer records that they have a specific work-related reason to look at; and (b) enable it to detect unauthorized accesses almost as soon as they happen; (4) it will be several years, however, before this modernization becomes a reality; (5) in the meantime, IRS has taken several other steps directed at deterring, preventing, and detecting unauthorized access and ensuring that consistent disciplinary action is taken when unauthorized access is proven; (6) between October 1, 1997, and November 30, 1998, the Office of the Chief Inspector identified 5,468 potential instances of unauthorized access and completed preliminary investigative work on 4,392 of those leads; (7) of those 4,392 leads, 338 were determined to warrant further investigation; (8) many of these 338 cases were still under investigation or adjudication as of January 25, 1999; (9) using data provided by IRS, GAO identified 36 cases for which investigation and adjudication had been completed; (10) of those 36 cases, 15 involved an IRS determination that IRS employees had intentionally accessed taxpayer data without authorization; (11) in the other 21 cases, IRS determined that either there was no unauthorized access or the access was accidental; (12) according to IRS, employees involved in the 15 cases of intentional unauthorized access either resigned in lieu of termination or were terminated; (13) according to IRS data, proven cases of unauthorized access that occurred after enactment of Public Law 105-35 have generally been referred to U.S. Attorneys for prosecution, and these U.S. Attorneys have, with one exception, declined to prosecute; (14) according to IRS, the one case that was accepted for prosecution was still open as of February 2, 1999, but the employee had been removed from the agency; and (15) as required by the law, IRS notified the three taxpayers whose data the employee had accessed.
With a few exceptions, the volume of salvage timber the Forest Service has offered for sale has remained fairly constant over the years. However, as the green (or nonsalvage) timber sale program has decreased in size, the salvage sale program has increased as a percentage of the total volume offered for sale. For example, even though the volume of salvage timber offered for sale declined from about 2.9 billion board feet to 1.9 billion board feet from 1990 through 1996, salvage timber increased as a percentage of total timber offered for sale from about 26 percent to 48 percent during this same period. The National Forest Management Act of 1976, as amended (16 U.S.C. 472a), established the Salvage Sale Fund as a permanent appropriation, and the Congress appropriated $3 million in fiscal year 1977 to get it started. The act authorized the Secretary of Agriculture to require timber purchasers to make deposits into the Salvage Sale Fund as part of the payment for the timber. Such deposits are then available to replenish the fund and pay for the costs of preparing and administering future salvage sales. As appropriations to fund the overall timber program have decreased, the importance of the Salvage Sale Fund as a source of funding has increased. For example, in fiscal year 1990, moneys from the Salvage Sale Fund represented 20 percent of all funds needed for the green and salvage timber programs, but by fiscal year 1996, that share had risen to 45 percent. The Salvage Sale Fund is not the only fund in which salvage sale timber receipts are deposited. Salvage sale receipts not used to recover costs may be deposited into (1) the Knutson-Vandenberg (K-V) Fund, where they are used to reforest harvested timberlands, and (2) the National Forest Fund, where they can be used to make required payments to the states, the Roads and Trails Fund, and other obligations. Under federal law, at the end of each fiscal year, 25 percent of all moneys received at each national forest, including moneys received from salvage sales, is to be paid to the state in which the forest is located. These funds are to be expended for public roads and schools. Federal law also requires that at the end of the fiscal year, 10 percent of all moneys received is to be deposited into the Roads and Trails Fund. These funds are to be expended for roads and trails in the forests from which the moneys were derived. The Forest Service’s guidelines require that a plan be prepared for each salvage sale or group of small sales. This plan determines the amount of receipts to be deposited into the Salvage Sale Fund to recover the sale’s costs. Specifically, the salvage sale plan identifies the sale’s volume, the sale’s direct and indirect costs, and any additional amount that may be collected to meet future program needs. The salvage sale plan is the only document in which these costs are estimated and identified on a sale-by-sale basis. The Forest Service’s accounting systems do not track actual sale-by-sale costs. The history of the Salvage Sale Fund has been one of a growing balance through fiscal year 1993 and then a declining balance for the next 3 years. From the start of fiscal year 1990 through the end of fiscal year 1993, the Salvage Sale Fund’s balance more than doubled, from $111 million to a high of $247 million (see table 1). Declines through fiscal year 1996 lowered the balance to $186 million, a drop of 25 percent.
The fund’s ending balance declined from fiscal years 1994 through 1996 for the following reasons: In fiscal year 1994, $40.2 million of the fund’s balance was considered excess to the salvage sale program’s anticipated needs and was used for other authorized purposes. During fiscal year 1994, salvage timber offered for sale declined to its lowest level in almost 10 years. As a result, fewer salvage sale receipts were collected from these sales in fiscal years 1995 and 1996. In fiscal year 1995, the emergency salvage timber sale program was implemented, and additional costs were incurred to prepare and administer sales that would generate receipts largely in future years. In fiscal year 1996, as costs for the emergency salvage timber sale program continued to rise, the Forest Service deposited $35.6 million originally intended for the Salvage Sale Fund into the National Forest Fund to cover a shortage in the funds needed to make the payment to the states and other obligations. In addition, Forest Service officials stated that the volume offered under the emergency salvage program generated lower receipts because the salvage timber was of lower quality. Because the fund’s balance had declined for 3 years and because the salvage sale program’s obligations for the last 2 years exceeded deposits to the Salvage Sale Fund by more than $30 million, we asked Forest Service officials to provide us with information about the agency’s ability to meet the salvage sale program’s future needs with available funding levels. They told us that the Salvage Sale Fund’s obligations for fiscal years 1997 and 1998 will be much lower than those in fiscal year 1996 because they expect a lower volume of salvage timber to be offered for sale. In addition, the Forest Service projects that in fiscal year 1997, about $167 million in salvage sale receipts will be deposited into the fund to cover an estimated $172 million in obligations. Forest Service officials expect the fund’s fiscal year 1997 ending balance to be about $182 million, an amount they consider sufficient to meet expected needs of $153 million in fiscal year 1998. Several management practices that affect the flow of salvage sale receipts into the fund need to be improved to ensure more consistency in the salvage sale program. Specifically, these practices include how regions and forests (1) establish priorities for distributing salvage sale receipts, (2) establish estimates of costs to be recovered, (3) review salvage sale plans for completeness and accuracy, and (4) satisfactorily correct deficiencies. When timber sale receipts were at much higher levels, Forest Service regional and forest-level officials decided how to distribute receipts. As a result, none of the four forests we visited distributed salvage sale receipts in the same order or complied with the legislative distribution priorities. Recently, however, declining timber receipts, combined with concerns about meeting all required obligations, resulted in headquarters actions to clarify how receipts should be distributed. It is not yet clear whether these clarifications will ensure that regions and forests handle the distributions of receipts in keeping with the different legislative priorities applicable to salvage and green sale receipts. If the separate legislative priorities are not applied, salvage sale receipts could be used for other purposes before the fund is replenished to cover costs.
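As a simplified illustration of the distribution sequence described above (and spelled out in the next paragraphs), the following sketch applies the 25-percent payment to the states first and then replenishes the Salvage Sale Fund before any remainder becomes available for other purposes. All amounts are hypothetical, and the option of covering the states' share from green sale receipts is ignored for simplicity.

```python
# Simplified sketch of the distribution order for one salvage sale's receipts.
# Amounts are hypothetical; the option of making the states' payment from
# green sale receipts is ignored for simplicity.

def distribute_salvage_receipts(receipts: float, recoverable_costs: float) -> dict:
    remaining = receipts
    # First: the required 25-percent payment to the state where the forest is located.
    to_states = 0.25 * receipts
    remaining -= to_states
    # Next: replenish the Salvage Sale Fund until the sale's preparation and
    # administration costs are recovered.
    to_salvage_fund = min(remaining, recoverable_costs)
    remaining -= to_salvage_fund
    # Any remainder is then distributed under the green sale priorities
    # (K-V reforestation, Roads and Trails, National Forest Fund).
    return {
        "states": to_states,
        "salvage_sale_fund": to_salvage_fund,
        "green_sale_priorities": remaining,
    }

print(distribute_salvage_receipts(receipts=100_000, recoverable_costs=60_000))
# {'states': 25000.0, 'salvage_sale_fund': 60000.0, 'green_sale_priorities': 15000.0}
```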
The first legislative priority for the distribution of timber sale receipts is the required 25-percent payment to the states. Even though the 25-percent requirement applies to receipts from both salvage and green sales, it does not require that the payment be made from the same source that generated the receipts. For example, if the receipts from green sales are sufficient, they may be used to make the portion of the payment to the states that is attributable to salvage sales. There is one basic difference in how salvage sale receipts and green sale receipts are to be handled once the 25-percent requirement is met: Salvage sale receipts must be deposited into the Salvage Sale Fund until the sale’s preparation and administration costs are recovered. This deposit must occur from salvage sale receipts because receipts from the sale of green timber may not be deposited to the Salvage Sale Fund. Once salvage sale costs are recovered, any remaining salvage sale receipts may then be deposited in accordance with the priorities attributable to green sales. Since September 1996, the Forest Service has made several attempts to clarify how timber sale receipts should be distributed. These include amendments to the manual and the handbook as well as both interim and draft guidelines. The Forest Service issued interim guidelines in January 1997 to provide guidance until a task force developed and completed national guidelines. This task force issued its first draft in June 1997, a second draft in August, and a final report on August 28, 1997. However, none of these documents—the amendments, the interim or draft guidelines, or the final task force report—clearly illustrated the separate priorities existing for the distribution of salvage and green timber sale receipts. In its report, the task force recommended establishing priority groups to distribute timber receipts. For example, the first priority group includes required commitments for the payments to the states, the payment to roads and trails, the payments for the next year’s planned purchaser-elect road program, and the recovery of required K-V reforestation costs. The second priority group includes the regional and local needs of the Salvage Sale Fund and other reforestation activities. However, the priority groupings do not show that, unlike green sale receipts, deposits to the Salvage Sale Fund must be made to recover costs before the identified K-V reforestation requirements are satisfied. If receipts are set aside for other activities before salvage sale costs are recovered, the amount remaining may be insufficient to adequately replenish the fund. The task force’s report has been sent to the regions for implementation. How it will be implemented and interpreted remains to be determined. The four regions we reviewed were all responding in different ways to the interim guidance they had received: Officials in the Southern Region stated that because the region has always met the payments to the states and the other required payments, they saw no reason to change their established priorities as a result of the interim guidance. The region and the forests will monitor the situation to ensure that the National Forest Fund can meet all of its obligations, but the forests will continue to decide how to distribute timber sale receipts. The Pacific Northwest Region and the Northern Region have adopted regional policies similar to those in the task force’s June draft, except that the priorities within the first category have been reordered.
For example, required reforestation is listed before the payments to the states. The Pacific Southwest Region is following the January interim guidance. We reviewed the task force’s final report, including the new guidance, which clearly identifies the 25-percent payment to the states as the first priority and the appropriate source of funding for the Roads and Trails Fund, both of which were not always clear in earlier guidance. However, the relative priority of distributing receipts from salvage sales to the Salvage Sale Fund and to the K-V Fund remains unclear. For example, the guidance states that the Salvage Sale Fund takes priority over the K-V Fund for salvage sale receipts but later states that if insufficient value is received on a salvage sale to fund the needs of both the Salvage Sale Fund and the K-V Fund, then a decision must be made as to which fund will take priority. In addition, the transmittal letter leaves the relative priority between the Salvage Sale Fund and the K-V Fund to the discretion of the responsible line officer. These statements could easily lead to continued confusion. Consequently, we remain concerned about whether the final version of the guidance will be clear enough to be correctly interpreted or consistently implemented by those who must use it. Our concern stems in part from the variety of regional practices we found in the interpretation and implementation of the interim and draft guidelines, as well as from the other problems discussed below. A critical step in replenishing the Salvage Sale Fund is accurately estimating the amounts necessary to reimburse the fund for direct and indirect sale costs. Because the Forest Service does not account for actual costs on a sale-by-sale basis, these costs must be estimated using cost information from previous years. While these estimates are used to determine what can be deposited into the Salvage Sale Fund, the Forest Service has not provided detailed guidance on how these costs should be determined. The method used to estimate costs is left to the regions, which, in turn, often pass this decision along to the individual forests. This practice has led to a variety of cost development methods. At the four forests we reviewed, four different cost development methods were used. For salvage sales awarded in fiscal year 1995, the Clearwater National Forest developed costs using a 3-year average of cost data taken from the accrual-based Timber Sale Program Information and Reporting System; the Umatilla National Forest used fiscal year 1992 expenditure data taken from the cash-based Central Accounting System; the Stanislaus National Forest used a 3-year average of the Central Accounting System expenditure data; and the Homochitto National Forest developed its own cost estimates on the basis of its experience. Because the Forest Service does not account for costs on a sale-by-sale basis, the method chosen to estimate these costs can have a substantial impact on the amount to be deposited in the fund. As the size of the salvage sale program changes, the costs associated with it rise and fall. Thus, the costs selected and the period chosen can have a significant effect on the amount identified as needed to replenish the fund. For example, if the Umatilla National Forest had used the 3-year average method utilized by the Stanislaus National Forest, its identified costs would have been $1.3 million instead of the $367,223 actually claimed.
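The sensitivity to the estimation method can be shown with a small sketch. The per-unit cost figures and sale volume below are invented for illustration; they are not Forest Service data, and the two methods are simplified versions of the single-year and 3-year-average approaches described above.

```python
# Hypothetical illustration of how the cost-estimation method changes the
# amount identified for recovery. Per-unit costs and volume are invented,
# not Forest Service data.

yearly_cost_per_mbf = {1992: 45.0, 1993: 70.0, 1994: 95.0}  # $ per thousand board feet
sale_volume_mbf = 5_000

# Method A: a single base year, as the Umatilla did with fiscal year 1992 data.
single_year_estimate = yearly_cost_per_mbf[1992] * sale_volume_mbf

# Method B: a 3-year average, as the Clearwater and Stanislaus did.
average_cost = sum(yearly_cost_per_mbf.values()) / len(yearly_cost_per_mbf)
three_year_estimate = average_cost * sale_volume_mbf

print(f"Single-year estimate: ${single_year_estimate:,.0f}")    # $225,000
print(f"3-year average estimate: ${three_year_estimate:,.0f}")  # $350,000
```

When program costs are rising, as in this made-up series, a single early base year understates the amount needed to replenish the fund relative to a multiyear average.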
By selecting a method that incorrectly estimates the program’s cost, a forest runs the risk of not setting aside the amount necessary to finance the program in the future. (For a table showing the total costs for the sales examined in the four forests we reviewed, see app. II.) Forests need to prepare salvage sale plans accurately because these documents serve as the basis for depositing available receipts to the Salvage Sale Fund. At the four forests we reviewed, however, we found numerous errors. For example, (1) regional and headquarters overhead had not been included in the indirect costs, (2) overhead was calculated on overhead, (3) incorrect volumes were listed, (4) excessive allowable surcharges were calculated, and (5) basic computational errors were made. These errors and omissions point to a lack of adequate review of the salvage sale plans by managers at the forest and regional levels. The effect of these errors varied, understating costs in some places and overstating them in others. For example, of the 16 sales reviewed at the Umatilla National Forest, 6 overstated indirect costs and 7 understated them. The overall impact was an overstatement of about $21,000. At the Stanislaus National Forest, the program’s future needs were based on 150 percent of direct and indirect costs instead of the 50 percent permitted by the Forest Service’s handbook; this calculation overstated the amounts to be collected for the nine sales reviewed by almost $150,000. Furthermore, this incorrect calculation method has been in effect since at least 1991. We also found instances in which salvage sale plans were never prepared. At the Homochitto National Forest, 3 of the 19 sales we reviewed had no plan. Without a plan, there is no basis for distributing any receipts to the Salvage Sale Fund. This omission at the Homochitto National Forest cost the Salvage Sale Fund about $19,000 in deposits. Over the past 5 years, both the U.S. Department of Agriculture’s Office of Inspector General and various regional and headquarters teams within the Forest Service have reviewed the salvage sale program. These reviews have reported many management weaknesses similar to those we identified. However, many of these management weaknesses persist because the Forest Service has not communicated the results of these reviews to all regions or adequately followed up to ensure that corrective actions are taken. In 1992, the Office of Inspector General audited three Forest Service regions to determine whether the salvage sale program complied with the applicable laws and regulations and whether collections and receipts were appropriate. Among other things, the Inspector General found that the guidelines and monitoring of the salvage sale program were inadequate, improvements were needed in the management and in the collection of salvage sale funds, and controls over expenditures charged to the salvage sale program were inadequate. To correct these problems, the Inspector General recommended that the Forest Service provide detailed instructions to its field offices on the management of the salvage sale program and that the program be monitored on a regular basis. The Inspector General also recommended that detailed and specific instructions be established for the preparation of salvage sale plans, addressing allowable direct costs, the calculation of indirect costs, and permissible excess collections.
In response to these recommendations, the Forest Service updated and clarified its manual and handbook and agreed to schedule additional reviews of its salvage sale program. However, at the four forests and regions that we reviewed, neither the guidance nor the monitoring is specific enough to address the problems we found. For example, while the guidance requires that estimated costs be included in salvage sale plans, it does not state how estimates should be calculated. The guidance also requires that costs be updated, but it does not state how or on what basis. The monitoring system put in place does not include provisions requiring follow-up to ensure that problems are corrected or that the weaknesses, problems, or best practices identified in one office are communicated throughout the agency so that changes can be made everywhere they are needed. The Forest Service conducts its own reviews of the salvage sale program by annually selecting one or two regional offices for in-depth analysis. During these reviews, headquarters and regional officials visit selected forests and examine guidelines, program direction, and accounting procedures. However, the problems or best practices identified during these reviews are not communicated throughout the agency so that changes can be made where needed. Consequently, the problems identified during a 1992 review were also identified as problems 3 years later in another region. Since 1992, each region that we visited had been selected for review. The Forest Service review teams found many of the same problems we identified, including incorrect calculation and updating of direct and indirect costs, inconsistent priorities in distributing salvage sale funds, failure to update salvage sale plans, and failure to collect the correct amount for the program’s future needs. Action plans were prepared to address the problems uncovered by the reviews, but the Forest Service did not share this information with other regions or do the follow-up necessary to ensure that the weaknesses were actually corrected. For example, when we asked Southern Region officials about the status of the action items in their September 1995 review, we were told that many of the items in the review that were targeted for completion by June 1996 were still open in June 1997. Headquarters officials said that because of limited staff, they seldom follow up to ensure that the problems are corrected, and they also do not report the results of their reviews to other regions. They said that they rely on regional officials to report on the status of corrective actions and that they would follow up on specific weaknesses during their next review. The Forest Service has established two task forces whose work may help improve some of the management practices affecting Salvage Sale Fund replenishment. The first task force, dealing with funding priorities, has already been discussed. The other task force is developing directions for calculating indirect costs, improving internal management controls over indirect costs, and identifying ways to best manage the K-V Fund. Although this second task force is focused primarily on the K-V Fund, Forest Service officials expect that some of its findings will also be applicable to the management of the Salvage Sale Fund. Forest Service officials stated that the issuance date for the task force’s report is uncertain at this time. Over the years, the Forest Service has often used task forces to identify problems and recommend solutions.
The results of these task forces’ studies, like those of activity reviews, are often thorough and constructive, and they could do much to correct identified problems if the recommendations were communicated and implemented. As we have pointed out, however, regions and forests do not always carry out suggestions or recommendations for change. As we stated in our testimony of July 31, 1997, the highly decentralized management structure of the Forest Service gives managers considerable autonomy and discretion for interpreting and applying the agency’s policies and directions. As a result, it will be a significant challenge for the Forest Service to ensure that the recommendations made by the two task forces will be fully and consistently implemented throughout the agency. The actions taken by the Forest Service in the past year to improve the management of the Salvage Sale Fund show a willingness to correct identified weaknesses. Task forces have completed the new guidance for the distribution of timber sale receipts and are identifying ways in which the management of the Salvage Sale Fund can be improved. Substantial progress has been made. The guidance on priorities, however, needs additional clarification to ensure compliance with the legislative priorities for the distribution of salvage sale receipts. In addition, concerns about management practices affecting fund replenishment still need to be resolved and corrective action implemented. Ensuring consistent action requires guidance that identifies appropriate data sources, cost calculation methods, and specific monitoring and feedback activities. In addition, correcting individual mistakes or errors may not solve systemic problems. When reviews identify best practices or mistakes, some mechanism is needed to communicate this information throughout the agency so that all locations benefit. To help ensure that appropriate and consistent practices are in place to manage the Salvage Sale Fund, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following actions: Clarify the agency’s guidance to emphasize that the Salvage Sale Fund takes priority over the K-V Fund for the distribution of salvage sale receipts until preparation and administration costs have been recovered. Establish national guidance that identifies acceptable data sources and methods for calculating the cost estimates that determine the fund’s replenishment requirements. Establish national procedures to ensure that salvage sale plans will be adequately reviewed to detect errors. Develop national follow-up procedures to ensure that errors, problems, or best practices found in one location are communicated, corrected, or implemented everywhere. We provided a draft of this report to the Forest Service for review and comment. The Forest Service said that the report accurately and fairly presented the information about the fund’s balance and the management practices affecting the replenishment of the fund. The Forest Service agreed with the recommendations for corrective action. To respond to the assignment objectives, we reviewed pertinent legislation, the agency’s guidance, the agency’s financial records, monitoring reports, and selected salvage sales. We spoke with representatives from Forest Service headquarters, four regional offices, and four national forest offices to discuss how the Forest Service manages the Salvage Sale Fund.
We conducted our work from September 1996 through September 1997 in accordance with generally accepted government auditing standards. Appendix I provides a detailed discussion of our scope and methodology. We are sending copies of this report to the Secretary of Agriculture, the Chief of the Forest Service, the Director, Office of Management and Budget, and appropriate congressional committees. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (206) 287-4810. Major contributors to this report are listed in appendix III. Sales of salvage timber represented nearly half of all timber offered for sale in fiscal year 1996. Because of this increase in salvage sales, the Ranking Minority Member, Subcommittee on Interior and Related Agencies, House Committee on Appropriations, asked us to provide information on the status of the fund’s balance and the management practices used by the Forest Service to replenish the Salvage Sale Fund. We agreed to provide this information in two phases. In phase one, we provided information on the uses and status of the fund and compared the timber sale receipts deposited in the Salvage Sale Fund to the outlays from the fund on a national, regional, and forest-level basis for fiscal years 1991 through 1995. The second phase provides a more in-depth assessment of the current status of the fund’s balance and the adequacy of the Forest Service’s efforts to replenish and manage the fund. To obtain information on the current status of the Salvage Sale Fund’s balance, we requested information on fiscal year 1996 receipts, expenditures, and the fund’s ending balance and reviewed the Forest Service’s fiscal year 1997 projections for salvage sale deposits and obligations. In addition, we spoke with the Department of Agriculture’s Office of General Counsel to establish the legislative distribution priorities for salvage sale receipts. To obtain information on the adequacy of management practices affecting the replenishment of the fund, we spoke with agency officials at all organizational levels. We also reviewed the agency’s guidance, financial records, and monitoring reports along with applicable laws and their legislative history. Specifically, we interviewed representatives from the Forest Management, Budgeting, and Financial Management offices at Forest Service headquarters, four regional offices, and four forest offices. We chose the four regions because they had large salvage sale programs, provided wide geographic coverage, and had a variety of salvage conditions ranging from fires to insect infestation. Within each region, one forest was selected for detailed review. Two of the forests—the Clearwater and Stanislaus—were chosen because they were included in our recent review of the emergency salvage sale program. We selected the Homochitto National Forest, within the National Forests in Mississippi, because of the extensive Southern Pine Beetle epidemic in fiscal year 1995 and the resulting large salvage sale program. Finally, we selected the Umatilla National Forest in Oregon because it had a large salvage sale program and had not been reviewed by GAO in recent years. Table I.1 provides the forests’ names, locations, and regions.
We examined the Forest Service’s handbooks and manuals for guidance on how to develop direct and indirect salvage sale cost rates, distribute salvage sale receipts, develop salvage sale program budgets, and prepare individual salvage sale plans. To ascertain how this guidance was used, we performed a detailed review of the salvage sales awarded at the four forests in fiscal year 1995. Fiscal year 1995 was selected because most sales were prepared before the major impact of the emergency salvage sale program and because enough time had elapsed for many of the sales to be completed. For the Clearwater, Stanislaus, and Umatilla National Forests, we selected all salvage sales awarded in fiscal year 1995. Because of the extensive beetle epidemic in 1995, the Homochitto awarded more than 800 timber sale contracts and permits to sell the timber volume necessary to accomplish its salvage sale program. Because we were testing the system rather than extrapolating our findings to the whole, we randomly selected 13 contracts and 6 permits for detailed review. Our review of the salvage sale files also included examining pertinent data on sales volumes, the salvage sales’ collection plans, the sale areas’ improvement plans, and financial documents showing how the receipts were distributed among the various Forest Service funds. Because the Forest Service does not have a sale-by-sale accounting system, we used data on forest-level obligations as the basis for determining the charges to the Salvage Sale Fund. We did not perform a financial audit of these data, nor did we independently verify or test the reliability of the deposits, the fund’s balance, or other Forest Service-supplied data. However, the Forest Service’s financial statement audit reports for fiscal years 1992 through 1995 revealed significant internal control weaknesses in various accounting subsystems that resulted in unreliable accounting data, including timber-related data. Even with these weaknesses, we used the data because they were the only data available. We reviewed the agency-conducted activity reviews completed since fiscal year 1992 to determine whether the deficiencies we noted were similar to those identified internally. We then determined whether corrective action plans were developed and implemented. Finally, we reviewed the Department of Agriculture’s Office of Inspector General’s report issued in 1993 on the Forest Service’s Salvage Sale Fund and the documents provided by the Inspector General that explain the corrective actions taken by the Forest Service in response to the Inspector General’s recommendations. We conducted our review from September 1996 through September 1997 in accordance with generally accepted government auditing standards. [Appendix II table columns: Clearwater (5 sales), Stanislaus (13 sales), Umatilla (16 sales). Appendix III major contributor: Alan R. Kasdan.]
Pursuant to a congressional request, GAO reviewed the status of the Salvage Sale Fund's balance and the management practices affecting the replenishment of the fund. GAO noted that: (1) after reaching a high of $247 million at the end of fiscal year (FY) 1993, the Salvage Sale Fund's balance declined 25 percent to $186 million at the end of FY 1996; (2) the decline occurred for a variety of reasons, and the fund's balance appears to be stabilizing in FY 1997; (3) if the Forest Service's estimates are correct, the Salvage Sale Fund's balance will total about $182 million at the end of FY 1997, a balance the Forest Service believes is sufficient to meet the estimated obligations for FY 1998; (4) several management practices that affect the flow of salvage sale receipts into the Salvage Sale Fund need to be improved; and (5) specifically, these practices include how regions and forests: (a) establish priorities for distributing salvage timber sale receipts; (b) establish estimates of the costs to be recovered; (c) review salvage sale plans for completeness and accuracy; and (d) satisfactorily correct deficiencies.
In November 2013, we issued the first report, which examined (1) actual government support for banks and bank holding companies during the financial crisis and (2) recent statutory and regulatory changes related to government support for banks and bank holding companies. See GAO, Government Support for Bank Holding Companies: Statutory Changes to Limit Future Support Are Not Yet Fully Implemented, GAO-14-18 (Washington, D.C.: Nov. 14, 2013). At a January 2014 hearing, we provided testimony based on this report. See GAO, Government Support for Bank Holding Companies: Statutory Changes to Limit Future Support Are Not Yet Fully Implemented, GAO-14-174T (Washington, D.C.: Jan. 8, 2014). We obtained perspectives on the potential impacts of these reforms from credit rating agencies, investment firms, and corporations that are customers of banks. Where available and relevant, we reviewed some public statements, reports, and other analyses by these groups. For example, to obtain information about credit rating agencies’ assessments of the likelihood and level of government support for large bank holding companies, we reviewed relevant publications by the three largest credit rating agencies: Fitch Ratings (Fitch), Moody’s Investors Service (Moody’s), and Standard & Poor’s (S&P). We interviewed representatives from each of these rating agencies to obtain their perspectives on factors contributing to changes in their assessments of government support over time. We conducted interviews with representatives from 10 investment firms and six corporations to learn about (1) factors that influence their decisions to invest in or do business with bank holding companies of various sizes; (2) how they assess the risks of banks and the extent to which they rely on credit rating agencies’ assessments of these risks; (3) their views on the likelihood that the federal government would intervene to prevent the failure of a large bank holding company and factors that have influenced these views over time; and (4) how, if at all, expectations of government support have impacted their decisions to invest in or do business with banks of various sizes. In selecting investment firms and large corporations for interviews, we selected nonrepresentative samples of firms. As a result, the views we present from these firms are not generalizable to the broader community of bank investors and customers and do not indicate which views are most prevalent. We selected investment firms with experience investing in debt or equity securities of banks and bank holding companies and selected different types of investment firms to obtain perspectives reflecting a range of investing strategies. Specifically, we selected three large asset management firms (each with more than $1 trillion in assets under management); three public pension funds (each with more than $50 billion in assets under management); three hedge funds; and one large insurance company. We selected U.S. corporations from different industry sectors and with a range of banking needs. We identified four of these firms and contacted them with the assistance of the U.S. Chamber of Commerce, which reached out to its members on our behalf, and selected two additional firms to achieve additional diversity across industry sectors.
The corporations we interviewed included four multinational corporations (a chemical company, a delivery and logistics company, an energy company, and a technology company) and two corporations with all or close to all of their operations in the United States (a regional electric utility company and a national retail services company). To obtain additional information and perspectives on how financial reforms or credit ratings could impact the relative advantages or disadvantages of being a large bank holding company, we reviewed relevant publicly available information in the financial statements of bank holding companies and conducted interviews with bank holding companies of various sizes, bank industry associations, public interest groups, academics, and other experts. For example, we reviewed bank holding companies’ financial disclosures about how Dodd-Frank reforms could increase certain fees and how a credit rating downgrade could impact the amount of collateral required of them under certain financial contracts. We also reviewed our prior work on potential impacts of Dodd-Frank Act implementation. As part of our first objective, we reviewed regulators’ efforts to assess their progress in addressing too-big-to-fail perceptions and market distortions that can result. We reviewed Dodd-Frank Act provisions that outline statutory responsibilities for the Financial Stability Oversight Council (FSOC) and reviewed relevant sections of the FSOC annual report. We interviewed officials from FSOC, the Department of the Treasury (Treasury), the Board of Governors of the Federal Reserve System (Federal Reserve Board), FDIC, and the Office of the Comptroller of the Currency (OCC) about their efforts to analyze the impacts of Dodd-Frank reforms on too-big-to-fail perceptions and to evaluate whether additional policy actions may be needed to address any remaining market distortions. We also reviewed relevant congressional testimonies and other public statements by agency officials. To assess the extent to which the largest bank holding companies have received funding cost advantages as a result of perceptions that the government would not allow them to fail, we conducted an econometric analysis of the relationship between a bank holding company’s size and its funding costs. To inform our econometric approach and understand the breadth of results and methodological approaches, we reviewed studies that estimated the funding cost difference between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. We evaluated studies that met the following criteria: (1) used a comparative empirical approach that attempted to account for differences across financial institutions that could influence funding costs, (2) included U.S. bank holding companies, and (3) included analysis of data from 2002 or later. We chose these criteria to identify the most relevant and rigorous studies related to our research objective. To identify studies that met these criteria, we sought input from individuals, agencies, and groups that we interviewed, identified studies cited in an initial set of studies we had already identified, and conducted a systematic search of research databases (including Google Scholar and SSRN).
Our criteria excluded studies that used option-pricing approaches—that is, techniques that use tools for pricing stock options to estimate the value associated with possible government interventions to assist distressed banks—because these studies assume a too-big-to-fail funding cost advantage exists and only estimate its magnitude. We also excluded two studies that otherwise met our criteria, but did not attempt to control for important differences between financial institutions. We were aware of potential conflicts of interest associated with a number of studies in our review. For example, one study was conducted by researchers at a large bank holding company and two others were sponsored by a trade group representing large commercial banks. We considered the potential impact these conflicts of interest might have on their methods and results. We ultimately included 16 studies in our review that we determined were sufficiently reliable for the purposes of this report. In reviewing these studies, we assessed what they identified as the level of funding cost differences and how that level has changed over time and we identified the strengths and limitations of the studies’ approaches. Because of limitations of the methodologies of these studies, their results, while suggestive of general trends, are not definitive and thus should be interpreted with caution. We interviewed authors of selected studies, federal financial regulators, and other experts to obtain perspectives on the strengths and limitations of relevant quantitative approaches that have been used. Taking into consideration the strengths and limitations of different methodologies, we developed our own econometric approach to evaluate the extent to which the largest bank holding companies may have received funding cost advantages as a result of perceptions that the government would not allow them to fail. In addition, we selected three experts with relevant expertise to review our econometric approach and assess its strengths and limitations. These experts reviewed our approach before we implemented it and provided comments on our methodology. In many instances, we made changes or additions to our models to address their comments, and in other instances, we disclosed additional limitations of the models. Before selecting these experts, we reviewed potential sources of conflicts of interest, and we determined that the experts we selected did not have any material conflicts of interest for the purpose of reviewing our work. We used a multivariate regression model to estimate the relationship between bank holding companies’ funding costs and their size while controlling for factors other than size that may also influence funding costs. Our general regression model is the following:

funding cost_bq = α + β·size_bq + γ·credit risk_bq + δ·(size_bq × credit risk_bq) + Θ·X_bq + ε_bq

In this model, b denotes the bank holding company, q denotes the quarter, funding cost_bq is the bank holding company’s cost of funding in a quarter, size_bq is a measure of the bank holding company’s size at the beginning of the quarter, credit risk_bq is a list of proxies for the bank holding company’s credit risk—the risk that the bank holding company will not repay the funds it borrowed as agreed, X_bq is a list of other variables that may influence funding costs, ε_bq is an idiosyncratic error term, and α, β, γ, δ, and Θ are parameters to be estimated. The parameter β captures the direct relationship between a bank holding company’s funding cost and its size.
The parameter δ captures the indirect relationship between a bank holding company’s funding cost and its size that exists if the size of a bank holding company affects the relationship between its funding cost and credit risk. If investors view larger bank holding companies as less risky than smaller bank holding companies due to beliefs that the government is more likely to rescue larger bank holding companies in distress, then either β is less than zero, δ is less than zero, or both. However, the parameters β and δ may also reflect factors other than these beliefs. We used a measure of funding costs based on bonds issued by bank holding companies. Bank holding companies use a variety of funding types from different sources, including various types of deposits, bonds, and equity. We used bond yield spreads—the difference between the yield on a bond and the yield on a Treasury bond of comparable maturity—to measure a bank holding company’s cost of bond funding. Treasury securities are widely viewed as a risk-free asset, so the yield spread measures the price that investors charge a bank holding company to borrow to compensate them for credit risk and other factors. We focused on bond yield spreads because they are a measure of funding costs that is available for bank holding companies of a range of sizes, including bank holding companies with less than $10 billion in assets. Furthermore, bonds are traded in secondary markets, so changes in bond yield spreads can be publicly observed in a timely manner. Finally, bond yield spreads are a direct measure of funding costs, unlike alternatives such as credit ratings. We used Bloomberg to identify U.S. bank holding companies with more than $500 million in assets that were operating in 1 or more years from 2006 through 2013, and to identify all plain vanilla, fixed-rate, senior unsecured bonds issued by these bank holding companies, excluding bonds with an explicit government guarantee. We collected data on bond yield spreads, bank holding company size, variables associated with bank holding company credit risk, and bond characteristics from Bloomberg. We used these data to assemble a dataset with one observation for each bond in each quarter from the first quarter of 2006 through the fourth quarter of 2013. We constructed alternative measures to control for size, bond liquidity, and credit risk due to uncertainty about how to appropriately capture these important factors influencing bond yields and because the regression results may be sensitive to alternative specifications (see table 1). The numbers of bank holding companies and bonds we analyzed and summary statistics for our indicators of size, credit risk, and other factors are in appendix I. We developed a variety of econometric models that use alternative measures of bond liquidity, bank holding company credit risk, and the size or systemic importance of a bank holding company. We estimated the parameters for each of our models separately for each year from 2006 through 2013 to allow the relationship between bank holding company size and bond funding costs to vary over time. Our baseline models used average yield spreads on senior unsecured bonds based on actual trades, executable quotes, and composites derived from executable and indicative quotes to measure bond funding costs; total assets to measure size; equity capital and subordinated debt as percentages of total assets to measure capital adequacy; and issue size and total volume to measure bond liquidity.
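As one simple illustration of the yield spread measure, the sketch below interpolates a comparable-maturity Treasury yield from a made-up Treasury curve and subtracts it from a bond's yield. The curve, yields, and maturity are invented, and linear interpolation is only one plausible way to construct the benchmark; the report does not specify the exact method used.

```python
# Illustrative computation of a bond yield spread over a comparable-maturity
# Treasury yield. The Treasury curve, bond yield, and maturity are made-up
# numbers, and linear interpolation is an assumption, not the report's method.
import numpy as np

# Hypothetical Treasury curve: years to maturity and yields in percent.
treasury_maturities = np.array([1.0, 2.0, 5.0, 10.0, 30.0])
treasury_yields = np.array([0.4, 0.7, 1.6, 2.7, 3.6])

def yield_spread_bps(bond_yield_pct: float, years_to_maturity: float) -> float:
    """Spread over an interpolated comparable-maturity Treasury, in basis points."""
    benchmark = np.interp(years_to_maturity, treasury_maturities, treasury_yields)
    return (bond_yield_pct - benchmark) * 100.0

# A 7-year senior unsecured bond yielding 3.5 percent has a spread of about
# 146 basis points against this curve (interpolated Treasury yield: 2.04 percent).
print(yield_spread_bps(3.5, 7.0))
```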
We estimated the baseline model for each year and for each of our five measures of volatility, as well as for each year without a measure of volatility. We also estimated models that added average bid-ask spread to our baseline indicators of bond liquidity, models that used average yield spreads based only on actual trades, models that used equity capital and subordinated debt as percentages of risk-weighted assets as our indicators of capital adequacy, models that used global systemically important bank (GSIB) designation as an indicator of size, models that used the $50 billion asset threshold as an indicator of size, and models that used both total assets and the square of total assets as indicators of size. For all models, we included indicators for each quarter to control for the influence on yield spreads of economic conditions, the regulatory environment, and other factors that vary over time but not across bank holding companies. The details of the models we estimated and the results for our baseline models for select years are in appendix I. Altogether, we used 42 separate models for each year from 2006 through 2013. We used our models to compare bond funding costs for bank holding companies of different sizes, all else being equal. To account for the possibility that investors’ beliefs about government rescues depend on the credit risk level of the bank holding company, we made comparisons for bank holding companies with the average level of credit risk that prevailed each year. In addition, we assessed the impact of credit risk on our comparisons by making comparisons at credit risk levels higher and lower than the average for each year and also while holding the level of credit risk constant over time at the average level for 2008—the year when the financial crisis peaked and credit risk for bank holding companies was high. By holding credit risk constant, we can assess the extent to which changes in average credit risk over time may have influenced changes in funding costs relative to other factors. Our models allow the size of a bank holding company to influence its bond funding costs directly and also indirectly through the interactions between size and the credit risk variables. As a result, no single parameter is sufficient to describe the relationship between bond funding costs and size. To summarize the overall relationship between bond funding costs and size reflected in each specification, we calculated bond funding costs for bank holding companies of different sizes and credit risk levels using our estimates of the parameters for each specification for each year. See appendix I for more details on the calculations. Differences in yield spreads may reflect both the perceived likelihood of a government rescue and the size of the losses that the government may impose on investors if it rescues the bank holding company, but our methodology—like the methodologies used by other researchers—does not allow us to precisely identify the influence of each of these components. Although we have taken into account many factors that may influence bond yield spreads and that differ for bank holding companies of different sizes, our estimates of differences in bond yield spreads for bank holding companies of different sizes may reflect factors other than investors’ beliefs about the likelihood of government support because our control variables are imperfect or may be incomplete. In addition, our estimates of differences in bond yield spreads for bank holding companies of different sizes may reflect differences in the characteristics of bank holding companies that choose to issue bonds.
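A minimal sketch of this kind of specification and comparison, fit on synthetic data with the statsmodels library, appears below. It collapses the report's many specifications into a single illustrative model: the variable names, the single credit risk proxy, and all coefficient values are stand-ins, and this is not GAO's actual estimation code.

```python
# Minimal sketch of a yearly spread regression with a size x credit risk
# interaction and quarter indicators, fit on synthetic data. All variable
# names and values are stand-ins for the measures described in the report.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "log_assets": rng.normal(8.0, 2.0, n),     # size proxy (log total assets, $ millions)
    "credit_risk": rng.normal(0.08, 0.02, n),  # single stand-in credit risk proxy
    "quarter": rng.integers(1, 5, n),          # quarter indicator
})
# Synthetic spreads, in basis points, with a built-in size discount.
df["spread"] = (300 - 8 * df["log_assets"] + 2_000 * df["credit_risk"]
                - 50 * df["log_assets"] * df["credit_risk"]
                + rng.normal(0, 10, n))

# beta on size, gamma on credit risk, delta on the interaction; C(quarter)
# stands in for the time indicators (other X_bq controls are omitted here).
model = smf.ols("spread ~ log_assets * credit_risk + C(quarter)", data=df).fit()

# Comparison step: predicted spreads for a small and a large company at the
# sample-average credit risk level, evaluated in a common quarter.
scenarios = pd.DataFrame({
    "log_assets": [np.log(1_000), np.log(100_000)],  # ~$1B vs. ~$100B in assets
    "credit_risk": df["credit_risk"].mean(),
    "quarter": 1,
})
pred = model.predict(scenarios)
print(pred.iloc[1] - pred.iloc[0])  # negative -> lower spread for the larger company
```

Because size enters both directly and through the interaction, the small-versus-large comparison has to be computed from predicted values rather than read off a single coefficient, which mirrors the report's observation that no single parameter summarizes the relationship.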
The section of this report that addresses our second objective contains a fuller discussion of the limitations associated with our empirical work. For parts of our work that involved the analysis of computer-processed data, such as market data used in our analysis of funding cost differences, we assessed the reliability of these data by reviewing relevant documentation and conducting interviews with data providers to review steps they took to collect and ensure the reliability of the data. In addition, we electronically tested data fields for missing values, outliers, and obvious errors. We determined that these data were sufficiently reliable for our purposes. We conducted this performance audit from January 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. While the 2007-2009 financial crisis highlighted concerns about the market distortions that can result from too-big-to-fail perceptions, concern about such distortions pre-dated the crisis. A key factor giving rise to the too-big-to-fail dilemma has been the emergence of financial institutions of such size, interconnectedness, and market importance that their failure could threaten to severely disrupt the financial system and damage the economy. Although the federal government’s policy responses to failing financial institutions in recent decades have not formed a clear pattern in terms of the availability or structure of government support, these responses may have influenced market views on the likelihood of government support. Several observers trace too-big-to-fail concerns back to 1984 when FDIC provided support to Continental Illinois National Bank, then the sixth largest U.S. bank in terms of total assets, to prevent its failure and losses to its depositors and creditors. The Federal Reserve Board’s response to the near failure of a large U.S. hedge fund, Long-Term Capital Management (LTCM), in 1998 was another significant event that may have contributed to too-big-to-fail perceptions. While LTCM was not itself a large bank, the Federal Reserve Board’s intervention in helping to facilitate private-sector assistance to LTCM may have signaled the willingness of federal government authorities to intervene to avoid potential systemic consequences from a large, interconnected financial firm’s failure. Other factors may have contributed to some ambiguity surrounding the likely recipients and circumstances of government support in the years leading up to the 2007-2009 financial crisis. For example, failures and near-failures of large financial firms had been infrequent and occurred under varying circumstances, making it difficult to discern a clear pattern of government support. During the 2007-2009 crisis, the federal government took actions to stabilize the financial system by creating new emergency programs with broad-based eligibility and providing firm-specific assistance to prevent the failures of large financial institutions. Notably, however, U.S. government authorities’ initial responses to impending failures of large financial institutions did not send a clear signal about the availability of government support. 
In March 2008, the Federal Reserve Board authorized emergency assistance to prevent the failure of one large investment bank (Bear Stearns Companies, Inc.), but 6 months later, Federal Reserve Board officials determined that they could not assist another large failing investment bank, Lehman Brothers Holdings, Inc. (Lehman Brothers). Following Lehman Brothers’ bankruptcy announcement on September 15, 2008, which triggered an intensification of the financial crisis, U.S. government authorities took actions that signaled a stronger near-term commitment to prevent the failure of systemically important financial institutions. On the day after Lehman Brothers’ bankruptcy announcement, the Federal Reserve Board authorized up to $85 billion of credit assistance for American International Group, Inc. (AIG) to prevent its failure. In addition, on September 29, 2008, the Secretary of the Treasury invoked the systemic risk exception for the first time since the enactment of the FDIC Improvement Act of 1991 (FDICIA) to authorize FDIC to provide assistance to avert the failure of Wachovia Corporation—then the fourth-largest banking organization in terms of assets in the United States—by facilitating Citigroup Inc.’s acquisition of its banking operations. Foreign governments also took steps to prevent the failures of large financial institutions. Examples of large foreign financial institutions that received firm-specific assistance from their governments include Royal Bank of Scotland Group PLC (United Kingdom) and UBS (Switzerland).

Since the onset of the financial crisis, the largest banks have grown bigger in many major advanced economies, even as the financial sector has shrunk, and U.S. and foreign policymakers have acknowledged that crisis policy interventions raised moral hazard concerns. As discussed earlier, market perceptions that some firms are too big to fail can distort market participants’ incentives to properly price and restrain risk-taking by these firms. U.S. regulators have coordinated with foreign counterparts through the G20 and the Financial Stability Board to develop a policy framework for addressing the risks posed by large, complex financial institutions. In November 2010, G20 leaders endorsed the Financial Stability Board’s framework for addressing too-big-to-fail concerns. The framework aims to reduce the probability and impact of the failure of systemically important firms. Key elements of this framework include developing effective resolution regimes and strengthening capital standards for systemically important financial institutions. FDIC, the Federal Reserve Bank of New York, and Treasury helped to develop standards the Financial Stability Board issued for effective resolution regimes in October 2011. In addition, U.S. banking regulators have worked with their foreign counterparts to develop a strengthened capital regime that will require global systemically important banks to have additional loss-absorbing capacity. U.S. federal financial regulators are implementing these and other elements of the Financial Stability Board’s framework for addressing too big to fail as part of the process of implementing relevant Dodd-Frank Act provisions.
U.S. federal financial regulators have made progress in implementing Dodd-Frank Act provisions and related reforms to restrict future government support and reduce the likelihood and impacts of the failure of a systemically important financial institution (SIFI). While the Dodd-Frank Act does not use the term “systemically important financial institution,” this term is commonly used by academics and other experts to refer to bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by FSOC for Federal Reserve supervision and enhanced prudential standards. These reforms can be grouped into four general categories: (1) restrictions on regulators’ emergency authorities to provide assistance to financial institutions; (2) new tools and authorities for regulators to resolve a failing SIFI outside of bankruptcy if its failure would have serious adverse effects on the U.S. financial system; (3) enhanced regulatory standards for SIFIs related to capital, liquidity, and risk management; and (4) other reforms intended to reduce the potential disruptions to the financial system that could result from a SIFI’s failure.

Restrictions on Emergency Authorities. The Dodd-Frank Act revised Federal Reserve Board and FDIC emergency authorities so that emergency assistance can no longer be provided to assist a single and specific firm but rather can only be made available through a program with broad-based eligibility—that is, a program that provides funding support to institutions that meet program requirements and that choose to participate.

Living wills. SIFIs must periodically submit resolution plans, known as living wills, to the Federal Reserve Board and FDIC (§ 165(d)(1), 124 Stat. at 1426, codified at 12 U.S.C. § 5365(d)(1)). If the Federal Reserve Board and FDIC jointly determine that a SIFI’s plan is not credible, the agencies may jointly decide to impose more stringent regulatory requirements on the company.

Orderly Liquidation Authority. OLA gives FDIC the authority, subject to certain constraints, to resolve large financial firms, including nonbanks, outside of the bankruptcy process. This authority allows for FDIC to be appointed receiver for a financial firm if the Secretary of the Treasury determines, among other things, that the firm’s failure and its resolution under applicable federal or state law, including bankruptcy, would have serious adverse effects on U.S. financial stability and no viable private-sector alternative is available to prevent the default of the financial company. While the Dodd-Frank Act does not specify how FDIC must exercise this authority, FDIC is developing an approach to resolving a firm under OLA that it refers to as the Single Point-of-Entry (SPOE) approach. Under the SPOE approach, FDIC would be appointed receiver of the top-tier U.S. parent holding company of a financial group determined to be in default or in danger of default following the completion of the appointment process set forth under the Dodd-Frank Act. Immediately after placing the parent holding company into receivership, FDIC would transfer assets (primarily the equity and investments in subsidiaries) from the receivership estate to a bridge financial holding company. By allowing FDIC to take control of the firm at the holding company level, this approach is intended to allow subsidiaries (domestic and foreign) carrying out critical services to remain open and operating. In a SPOE resolution, at the parent holding company level, shareholders would be wiped out, and unsecured debt holders would have their claims written down to reflect any losses that shareholders cannot cover. Under the Dodd-Frank Act, officers and directors responsible for the failure cannot be retained.
The new resolution authority under the Dodd-Frank Act provides a back-up source for liquidity support, the Orderly Liquidation Fund, which could provide liquidity support to the bridge financial company if customary sources of liquidity are unavailable. The law requires FDIC to recover any losses arising from a resolution by collecting assessments from bank holding companies with $50 billion or more in consolidated assets, nonbank financial holding companies designated for supervision by the Federal Reserve Board, and other financial companies with $50 billion or more in consolidated assets.

Enhanced Regulatory Standards. The Dodd-Frank Act requires the Federal Reserve Board to subject covered firms to enhanced prudential standards (§ 165(a)(1), 124 Stat. at 1423, codified at 12 U.S.C. § 5365(a)(1)). Capital requirements. Covered firms must maintain capital ratios above specified standards, under both normal and adverse conditions. In addition, the Federal Reserve Board has announced its intention to apply capital surcharges to some or all firms based on the risks these firms pose to the financial system. Liquidity requirements. The act required the Federal Reserve Board to establish liquidity standards, which as finalized include requirements for covered firms to hold liquid assets that can be used to cover their cash outflows over short periods and in stressed conditions. In addition, the Federal Reserve Board, FDIC, and OCC have issued a proposed rule that would implement a minimum liquidity requirement that is consistent with the Basel III liquidity coverage ratio and would apply to internationally active U.S. banking organizations and U.S. depository institutions with $250 billion or more in total consolidated assets. Risk management requirements. Publicly traded covered firms must establish a risk committee and be subject to enhanced risk management standards. Stress testing requirements. The Federal Reserve Board is required to conduct an annual evaluation of whether covered firms have sufficient capital to absorb losses that could arise from adverse economic conditions.

The Federal Reserve Board has been implementing the enhanced standards required by the Dodd-Frank Act in conjunction with its implementation of Basel III, a set of risk-based capital, leverage, and liquidity standards developed by the Basel Committee on Banking Supervision. The Basel capital reforms include a risk-based capital surcharge that will apply to financial institutions that have been designated as GSIBs. Further, the U.S. banking agencies have already adopted a leverage capital surcharge that will apply to the eight U.S. banking organizations that are GSIBs.

Other Reforms. The act includes other reforms that could help reduce the likelihood or impacts of a SIFI’s failure. Authorities related to SIFI size and complexity. The Dodd-Frank Act grants regulators new authorities to take certain actions if they determine that a SIFI poses risks of serious adverse effects on the stability of the financial system. These include the authority for the Federal Reserve Board to require a SIFI to meet even stricter regulatory standards, the authority for the Federal Reserve Board to limit (with the approval of FSOC) the ability of a SIFI to merge with another company if it determines that the merger would pose a grave threat to U.S. financial stability, and, as noted above, the joint authority for the Federal Reserve Board and FDIC to require a firm to take steps to become more resolvable in bankruptcy.
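For reference, the Basel III liquidity coverage ratio that the proposed rule described above is designed to be consistent with requires high-quality liquid assets (HQLA) sufficient to cover projected net cash outflows over a 30-day stress period, with recognized inflows capped at 75 percent of outflows. The sketch below illustrates the calculation; the balance sheet figures are invented.

    # Schematic Basel III liquidity coverage ratio (LCR): HQLA must cover
    # projected net cash outflows over a 30-day stress period. Figures are
    # illustrative only.
    def liquidity_coverage_ratio(hqla: float, outflows_30d: float, inflows_30d: float) -> float:
        # Basel III caps recognized inflows at 75 percent of gross outflows.
        net_outflows = outflows_30d - min(inflows_30d, 0.75 * outflows_30d)
        return hqla / net_outflows

    # A bank with $60 billion in HQLA, $70 billion in projected outflows, and
    # $30 billion in projected inflows:
    ratio = liquidity_coverage_ratio(60.0, 70.0, 30.0)
    print(f"LCR = {ratio:.0%} (fully phased-in minimum is 100%)")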
Volcker rule. Section 619 of the Dodd-Frank Act (also known as the Volcker rule) generally prohibits proprietary trading by insured depository institutions and their affiliates and places restrictions on sponsorship or investment in hedge and private equity funds. The Volcker rule’s restrictions may have greater impacts on larger bank holding companies that have been more involved in the types of activities the rule restricts. To the extent that Volcker rule implementation prevents these large institutions from engaging in certain risky activities, it could serve to reduce the likelihood of their failure. Swaps clearing and margin requirements. Title VII of the Dodd-Frank Act establishes a new regulatory framework for swaps to reduce risk, increase transparency, and promote market integrity in swaps markets. As we previously reported, requirements for swaps to be cleared through clearinghouses can reduce the vulnerability of the financial system to the failure of one or a few of the major swap dealers by transferring credit risk from the swap counterparties to the clearinghouse. At the same time, experts have pointed out that clearinghouses concentrate credit risk and thus represent a potential source of systemic risk. A benefit of the central clearing requirement is that clearinghouses require members to post margin for their trades, and the Dodd-Frank Act also includes provisions that require regulators to develop margin requirements for uncleared swaps. These new requirements could help reduce systemic risk by preventing the build-up of large, undercollateralized exposures.

Although federal financial regulators have finalized a number of rules related to these reforms, implementation of some key reforms has not yet been completed. For example, FDIC has largely completed the core rulemakings necessary to carry out its systemic resolution responsibilities and is continuing to develop its SPOE approach. FDIC requested public comments on its SPOE resolution strategy in December 2013, and the comment period closed in March 2014 (Resolution of Systemically Important Financial Institutions: The Single Point of Entry Strategy, 78 Fed. Reg. 243 (Dec. 18, 2013); Resolution of a Systemically Important Financial Institution: The Single Point of Entry Strategy, 79 Fed. Reg. 9899 (Feb. 21, 2014)). In addition, regulators have not yet finalized certain rules that would subject SIFIs to enhanced prudential standards, such as rules on single-counterparty credit limits.

While views among investment firms we interviewed and credit rating agencies varied, many believe the Dodd-Frank Act has reduced but not eliminated the possibility of a government rescue of one of the largest bank holding companies. Two of the three largest credit rating agencies cited FDIC’s resolution process as a key factor in their decisions to reduce or eliminate “uplift”—an increase in the credit rating—they had assigned to the credit ratings of eight of the largest bank holding companies due to their assumptions of government support for these firms. Several representatives from large investment firms with whom we spoke told us that FDIC’s resolution process makes significant progress in reducing expectations of government support, but several agreed that uncertainty around its implementation or the circumstances of its use remains.
As such, some market perceptions that the government might not allow the largest bank holding companies to fail remain and can give rise to advantages for these firms if these perceptions affect decisions by investors, counterparties, and customers of these firms. For example, credit rating agencies’ assignment of higher credit ratings due to assumed government support can create benefits for these firms, but because investors may rely on credit ratings to varying degrees, the impact of such benefits may vary accordingly. In addition, Dodd-Frank Act provisions and related rules subject the largest firms to higher fees and stricter regulation that may reduce their risk of failure and increase costs on them relative to smaller competitors. Officials from FSOC and some of its member agencies have stated that financial reforms have not completely removed too-big-to-fail perceptions, but have made significant progress toward doing so. They anticipate that remaining expectations of government support will decline as Dodd-Frank implementation progresses.

While views among credit rating agencies and investment firms varied, many believe the Dodd-Frank Act has reduced but not eliminated the possibility of a government rescue of one of the largest bank holding companies. During the financial crisis, credit rating agencies assigned or increased “uplift”—or an increase in the credit rating—for several large bank holding companies’ credit ratings to reflect their view that the increased possibility of government support for these firms reduced the risk that the firms’ creditors would suffer losses. (A firm with a lower standalone credit rating may receive a bigger increase in its rating from government support than a firm with a stronger standalone rating.) We reviewed changes in credit rating agencies’ assumptions about government support over time and interviewed credit rating agency representatives. Because large investors do not necessarily rely on credit ratings or rating agencies’ assessments of government support, we obtained perspectives from representatives of large asset management firms, pension funds, hedge funds, and other investment firms that purchase debt and equity issued by bank holding companies.

Since the crisis, two of the three largest rating agencies, Fitch and Moody’s, have reduced or removed the uplift from assumed government support in their ratings of the eight largest U.S. bank holding companies. Fitch and Moody’s reports cited FDIC’s new resolution authority and a reduced willingness by the U.S. government to assist a failing bank holding company as key factors influencing these changes in assumed government support. As of June 2014, S&P had not changed its level of assumed government support since the financial crisis. However, in June 2013, S&P noted that regulatory developments may lead it to reassess its assumptions of government support for the eight bank holding companies. The three credit rating agencies each noted that their remaining assumptions of government support reflected continued uncertainty about the ability of the U.S. government to effectively resolve one of the largest bank holding companies in OLA. In September 2013, Fitch indicated that it would conduct a global review of its support ratings, and in March 2014, Fitch reported that it expects to remove its support rating floor for several of the largest U.S. bank holding companies within the next one or two years.
In November 2013, Moody’s removed all uplift from assumed government support from its credit ratings for the remaining eight large bank holding companies. Moody’s cited regulators’ substantial progress in establishing the SPOE receivership framework as a main consideration in its decision to remove the uplift. Moody’s analysts noted that the SPOE framework would allow FDIC to impose losses on the creditors of a U.S. bank holding company to recapitalize and preserve the operations of the bank’s systemically important subsidiaries in a stress scenario. As a result, they believe that the holding company creditors of systemically important U.S. banks are unlikely to receive government support.

Representatives of large investment firms with whom we spoke said that they rely primarily on their own assessments of government support when investing in financial institutions, and they identified OLA and other reforms as factors influencing their views. While representatives of several firms said that Dodd-Frank reforms have significantly reduced or eliminated expectations of government rescues, others said they continue to expect that the government would rescue one of the largest bank holding companies under certain scenarios if policymakers judged the potential costs to the economy from such a failure to be too great. Investors generally cited progress on OLA and enhanced regulatory standards for the largest bank holding companies as among the most important factors influencing their views on the likelihood of government support, and many considered living wills and other reforms to be less significant factors.

FDIC’s resolution process. Investors with whom we spoke said that FDIC’s progress in developing its resolution process to implement OLA as an alternative to bankruptcy has caused them to significantly reduce their expectations of government support, but uncertainty around its implementation and circumstances of its use remains. Although several investors believed that FDIC’s resolution process is credible for managing a single large failure, two expressed doubts about whether it could be used to resolve multiple failing firms in a systemic crisis. They noted that if the economic costs of a large firm’s failure were judged to be too high, the federal government might not want to risk using OLA if regulators believed it would destabilize markets. Two investors noted that in the event that concerns about destabilizing markets led the federal government to provide emergency assistance to a failing firm in lieu of using OLA, policymakers might face political pressure to structure the assistance in a manner that imposed losses on creditors. Other factors being equal, an investor’s belief that there is a possibility of incurring losses even if the government prevents a firm’s failure would reduce that investor’s willingness to provide funds to that firm on more favorable terms because of a too-big-to-fail perception. Because OLA is untested, some uncertainty may exist about its viability as an alternative to bankruptcy and government rescues until it is used. Some investors identified areas where further progress is needed to enhance the credibility of OLA. First, some market observers have pointed to opportunities to further minimize the adverse market impacts that could result from resolving a firm under OLA.
For example, although OLA provides for a 1-day stay on qualified financial contracts to allow for the selection of contracts to transfer to the bridge company, derivatives contracts written under the laws of other countries could allow counterparties to close out those contracts immediately, possibly posing liquidity issues for the firm and leading it to sell assets at depressed prices into the market. Some regulatory officials have said that cross-border agreements that create conformity in the treatment of derivatives contracts in resolution processes would enhance OLA’s effectiveness and practicality as a resolution tool. In addition, some investors noted that progress on the Federal Reserve’s planned proposal for a minimum long-term debt requirement could create greater certainty that the largest bank holding companies would have enough equity and debt to absorb losses and recapitalize their operating subsidiaries under OLA.

Enhanced regulatory standards. Many investment firm representatives credited enhanced regulatory standards for the largest bank holding companies with improving the safety and soundness of these firms and reducing the likelihood that they would experience distress that could result in failure or government support. One representative from a large investment firm said that the best defense against banks needing government support is to make sure they are well-capitalized. Similarly, another investment firm representative said that higher capital ratios and strengthened balance sheets have given confidence to the markets that the institutions are more sound, in turn reducing the likelihood that they would fail and potentially receive government assistance. A representative from one large asset management firm said that enhanced capital and liquidity standards are a positive from a debt holder’s perspective because increased capital provides a bigger buffer to absorb losses and increased liquidity makes a run on the firm less likely.

Living wills. Several investors said the living wills may have positive effects, but some investors have expressed doubts about the effectiveness of the plans, with one investor citing a lack of public transparency. In a public comment letter to FDIC, The Credit Roundtable, a financial industry association, noted that additional living will disclosures would improve the market’s ability to gauge the level of risk under a SPOE scenario. Additionally, while the purpose of living wills is to make SIFIs resolvable in bankruptcy, several large investors said they assume that a failing SIFI would be resolved through OLA.

Remaining market assumptions about government support can give rise to advantages for the largest bank holding companies in three broad categories to the extent these assumptions affect decisions by investors, counterparties, and customers of these firms. Those categories are funding costs, financial contracts that reference ratings, and ability to attract customers. Market beliefs about government support could benefit a firm by lowering its funding costs. However, the extent to which this occurs depends in part on the extent to which providers of funds—such as depositors, bond investors, and stockholders—rely on credit ratings that assume government support or incorporate their own expectations of government support into their decisions to provide funds.
For example, an investor that relies on credit ratings may view a firm with a rating that incorporates implied government support as having lower risk—other factors being equal—and may be more inclined to invest in the firm and accept a lower interest rate or return on the firm’s obligations. These effects can be more pronounced during a financial crisis, particularly if market strains cause credit rating agencies to reduce ratings more for firms they believe the government would not rescue and if providers of funds seek to reduce their risk exposures to firms they believe are not too big to fail.

Several factors influence the extent to which investors rely on ratings. For example, an investor’s reliance on credit ratings can depend on the extent to which the investor conducts its own credit analysis. While representatives of large investment firms with whom we spoke said they rely primarily on their own assessments of credit risk and do not rely on credit ratings, smaller investors lacking the resources to do their own credit analysis may rely more on credit ratings and rating agencies’ assessments of the impact of possible government support on a firm’s risk profile. In addition, while an investment firm’s assessment of government support can be relevant to funds that it actively manages, it may not incorporate this factor into the investment decisions of funds that it manages using passive investment strategies. Some representatives of large investment firms said that while they do not rely on credit ratings for investment decisions, they pay attention to them when managing funds for clients whose investments must meet minimum credit rating requirements and for clients who may use credit ratings to assess their performance.

Representatives of large investment firms with whom we spoke generally said their views on the likelihood of government support do not affect their investment decisions. Some representatives of investment firms said that while they believe some probability of government rescues remains, there is too much uncertainty surrounding future government support to factor it into their current investment decisions. Several bond investors said it is difficult to distinguish any pricing impacts from market expectations of government support from the variety of other factors related to firm size that can impact debt pricing and investors’ investment decisions. For example, compared to smaller institutions, large bank holding companies issue bonds more frequently and in larger amounts, which increases the liquidity of their bonds. Investors may accept lower interest rates on more liquid bonds because more liquid bonds can be sold more easily without reducing the price. In the section addressing the second objective of this report, we analyze the existence and size of potential funding cost advantages for the largest bank holding companies using quantitative approaches that control for factors outside of government support that can influence funding cost differences.

Higher credit ratings from assumed government support can also benefit firms through private contracts that reference credit ratings. For example, derivative contracts often tie collateral requirements to a firm’s credit rating. Representatives of some large bank holding companies said that reduced credit ratings would require them to post more collateral. Additional collateral requirements would demand additional funds that could otherwise be used in other investments.
The largest bank holding companies disclose information in their financial statements about how a credit rating downgrade could cause them to post more collateral. While estimates of these collateral impacts have varied over time and across firms, several of these firms have estimated that a downgrade in their credit rating could require them to post between $1 billion and $4 billion of additional collateral, depending on the size of the downgrade. Another way that private contracts can reference credit ratings is by setting minimum credit rating requirements. Examples of such requirements include investment funds that cannot purchase securities that are below minimum ratings requirements and counterparties that will not accept a letter of credit from a bank with a low credit rating.

Corporate customers with whom we spoke expressed varying views on the degree to which expectations of government support influence their banking decisions. Two corporate customers with whom we spoke said that they believe the government would intervene to prevent the failure of the largest bank holding companies, but that potential government support is only one of several factors they consider in choosing a bank and is not necessarily a decisive factor. Several corporate treasurers identified size-related factors that are unrelated to government support that make them more inclined to use the largest banks for their banking needs. For example, treasurers of global firms noted that the largest U.S. banks have the geographic presence and ability to provide funding on the scale they need to support their operations around the world. One corporate customer noted that although the company’s credit facility includes both regional banks and some of the largest banks, they tend to use the services of large banks more because of their capacity for handling large transactions and the variety of their business lines. However, while two treasurers said that they tend to select the largest U.S. banks primarily for reasons that are unrelated to government support, their beliefs about which banks would be rescued by the government can impact how they manage their risk exposures to banks of different sizes. For example, a treasurer for a large domestic corporation said that the possibility of government rescues can be a factor when evaluating counterparty risk and the safety of deposits. She noted that in normal economic conditions, the likelihood of government support for banks is not a significant factor, but when markets become strained, her company may reduce its deposits and other exposures to regional banks they believe the government would allow to fail. Separately, a treasurer from a large global company said that potential government support may impact his company’s banking decisions indirectly through credit ratings. He noted that the company uses credit ratings as a factor in assessing a bank’s creditworthiness and adjusting its exposures to banks. For example, if a bank’s credit rating falls, the company may reduce its intraday exposure to that bank by shifting deposits and other exposures away from that bank. A few corporate customers told us they do not consider the possibility of government support for large banks when they decide how to allocate their banking business.
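The ratings-triggered collateral calls discussed above can be sketched schematically. The downgrade schedule below is invented for illustration (actual schedules are firm- and contract-specific), but it uses amounts in the $1 billion to $4 billion range that firms have disclosed.

    # Hypothetical schedule mapping a downgrade, in notches, to the additional
    # collateral counterparties could demand. Amounts are invented and do not
    # reflect any particular firm's contracts.
    ADDED_COLLATERAL_BY_NOTCHES = {1: 1.0e9, 2: 2.5e9, 3: 4.0e9}

    def incremental_collateral(notches: int) -> float:
        # Use the largest trigger at or below the actual downgrade.
        eligible = [v for k, v in ADDED_COLLATERAL_BY_NOTCHES.items() if k <= notches]
        return max(eligible, default=0.0)

    for n in (1, 2, 3):
        print(f"{n}-notch downgrade: ${incremental_collateral(n) / 1e9:.1f} billion of additional collateral")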
The Dodd-Frank Act imposes new and higher fees on large bank holding companies and requires the Federal Reserve Board to subject large bank holding companies to enhanced regulatory standards for capital, liquidity, and risk management. These enhanced standards may help to reduce the likelihood and potential market impacts of the failure of a large bank holding company. Taken together, higher fees, stricter regulatory standards, and other reforms could increase costs for the largest bank holding companies relative to smaller competitors. New or revised fees and assessments impose higher direct costs on bank holding companies with more than $50 billion in total assets.

Deposit insurance assessments. The Dodd-Frank Act required FDIC to change the definition of an insured depository institution’s assessment base, which can affect the amount of deposit insurance assessment the institution pays into the deposit insurance fund. According to FDIC, this change shifted some of the overall assessment burden from smaller banks to larger institutions that rely less on deposits but did not affect the overall amount of assessment revenue collected. The base was changed from total domestic deposits to average consolidated total assets minus average tangible equity. The largest bank holding companies generally saw the largest percentage increases in their deposit insurance assessments because they rely less on domestic deposits for their funding than smaller institutions. One of the largest bank holding companies reported that the change to the assessment calculation resulted in a $600 million increase in its deposit insurance assessments in 2011. In the quarter after the rule became effective, those banks with less than $10 billion in assets saw a 33 percent drop in their assessments (from about $1 billion to about $700 million), while those banks with over $10 billion in assets saw a 17 percent rise in their assessments (from about $2.4 billion to about $2.8 billion).

Fees on SIFIs. In addition, the Dodd-Frank Act directs the Federal Reserve Board to collect fees from bank SIFIs equal to the expenses the Federal Reserve Board estimates are necessary or appropriate to carry out its supervision and regulation of those companies. The act also directs Treasury to collect assessments from bank and nonbank SIFIs to fund the operations of the Office of Financial Research. These assessments totaled $137 million in 2012 and $35 million in 2013.

The Dodd-Frank Act requires the Federal Reserve Board to subject large bank holding companies to heightened standards for capital, liquidity, and stress testing, as well as other provisions, all of which could reduce the risk of their failure and the costs that their distress could impose on the financial system. Following Dodd-Frank enactment, bank SIFIs significantly increased their capital and liquidity in advance of finalization of new rules for capital, leverage, and liquidity standards. As of December 31, 2013, the six largest U.S. GSIBs had an average tier 1 common equity capital ratio of 12.1 percent, compared to the 4.5 percent minimum required under Basel III and an average of 8.4 percent among these firms as of December 31, 2009. In addition, pursuant to the Dodd-Frank Act, the Federal Reserve Board conducts stress testing and evaluates the capital planning process of large bank holding companies to help ensure these firms are resilient to periods of economic or financial stress.
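A small calculation illustrates how the assessment-base change described above shifted the assessment burden toward institutions that rely less on domestic deposits for their funding. The balance sheets below are hypothetical.

    # Deposit insurance assessment bases before and after the Dodd-Frank Act
    # change described above (all figures in $ billions; balance sheets are
    # hypothetical).
    def old_assessment_base(domestic_deposits: float) -> float:
        return domestic_deposits

    def new_assessment_base(avg_total_assets: float, avg_tangible_equity: float) -> float:
        return avg_total_assets - avg_tangible_equity

    # A deposit-funded community bank versus a large bank funded largely with
    # nondeposit liabilities:
    banks = {
        "small bank": {"deposits": 8.0, "assets": 10.0, "equity": 1.0},
        "large bank": {"deposits": 300.0, "assets": 1000.0, "equity": 80.0},
    }
    for name, b in banks.items():
        old = old_assessment_base(b["deposits"])
        new = new_assessment_base(b["assets"], b["equity"])
        print(f"{name}: assessment base grows {new / old:.1f}x under the new definition")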
In the most recent round of capital planning reviews, the Federal Reserve Board rejected the capital plan of one U.S. GSIB and required another to resubmit its capital plan after errors were discovered. Pending approval of their revised capital plans, the Federal Reserve Board did not allow proposed actions by these firms, such as dividend increases, that would have reduced their capital. In April 2014, U.S. bank regulators adopted a new rule that strengthens the leverage ratio standards for the largest, most interconnected U.S. banking organizations. Beyond the new rules and regulatory reviews to ensure capital adequacy, the Federal Reserve Board has indicated that eight of the largest U.S. bank holding companies will be subject to a capital surcharge—an increase in their risk-based capital requirement—based on their size, complexity, and interconnectedness. Federal Reserve Board officials have stated that the capital surcharge is intended to force the largest bank holding companies to internalize the costs they could impose on the financial system from their systemic footprint. Federal Reserve Board and Treasury officials said that this capital surcharge could also help to offset any funding cost advantages that remain from market perceptions that the government would not allow the largest bank holding companies to fail.

Higher capital and liquidity requirements for banks can increase their funding and other costs. For example, higher capital requirements can require banks to increase the portion of their funding that comes from equity capital rather than debt, which can increase funding costs. In prior work, we have summarized the results of studies by the Bank for International Settlements and others on the benefits and costs of increasing capital requirements for banks, but these studies generally estimated cost impacts to the economy rather than the incidence of increased costs for the institutions themselves. While banks can respond to additional costs in a variety of ways, including passing on some costs to borrowers by charging higher interest rates on loans, a Federal Reserve Board official noted that costs associated with the GSIB capital surcharges—which will not apply to most banks and will not apply evenly among the GSIBs—may be more difficult for the largest bank holding companies to pass on to customers. In theory, increasing the required proportion of equity funding relative to debt funding should not affect a firm’s overall cost of funding, as it reduces the risk that the firm will fail, thereby reducing the returns demanded by both equity and debt holders. However, certain government policies make equity financing (such as through issuing stock to investors) more expensive for financial institutions than debt financing. For example, interest on debt is tax deductible, while dividends on equity securities are not. In addition, bank deposits benefit from federal guarantees, and the interest rates a bank pays on its insured deposits may not fall as capital levels and the perceived safety of the firm increase.

The Dodd-Frank Act also requires SIFIs to periodically submit resolution plans to the Federal Reserve Board and FDIC, as well as to conduct company-run stress tests semiannually. Regulators and industry officials have stated that SIFIs have devoted significant staffing resources to developing these resolution plans.
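The funding-cost arithmetic discussed above can be made concrete with a stylized after-tax weighted average funding cost. The rates below are hypothetical, and the calculation deliberately holds investors' required returns fixed, setting aside the offsetting declines in required returns that the theory described above would predict as capital rises.

    # Stylized after-tax blended funding cost: interest on debt is tax
    # deductible, dividends on equity are not, so a higher equity share raises
    # the blended cost if required returns are held fixed. Rates are hypothetical.
    def after_tax_funding_cost(equity_share: float, cost_equity: float,
                               cost_debt: float, tax_rate: float) -> float:
        debt_share = 1.0 - equity_share
        return equity_share * cost_equity + debt_share * cost_debt * (1.0 - tax_rate)

    for eq in (0.05, 0.10):  # doubling the equity share
        c = after_tax_funding_cost(eq, cost_equity=0.10, cost_debt=0.04, tax_rate=0.35)
        print(f"equity share {eq:.0%}: blended funding cost {c:.2%}")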
According to industry representatives, stress testing requires newly covered firms to incur significant compliance costs associated with building information systems, contracting with outside vendors, recruiting personnel, and developing stress testing models that are unique to their organization. Furthermore, changes to the market infrastructure for swaps—such as clearing and exchange-trading requirements—and real-time reporting requirements for designated major swap dealers or major swap participants will require firms to purchase or upgrade information systems. Industry representatives and regulators said that while some compliance costs of the derivatives reforms could be recurring, a large part of these costs will come from one-time upfront investments to update processes and technology. Additionally, by generally prohibiting banks from engaging in proprietary trading and limiting their ability to sponsor or invest in hedge and private equity funds, the Volcker rule restrictions could eliminate past sources of trading and fee income for some banks. As we have noted in prior work, measuring the costs of financial regulation is challenging because of the multitude of intervening variables, the complexity of the financial system, and data limitations. For example, the extent to which regulated institutions pass on a portion of their increased costs to their customers may be impacted by competitive forces or other factors. Other sources of uncertainty, such as the potential for regulatory arbitrage, add to the challenges of estimating the act’s potential costs. For example, increased regulation could cause certain financial activities in the United States to move to foreign jurisdictions with less stringent regulations. U.S. regulators have acknowledged the importance of harmonizing international regulatory standards and noted that it can be advantageous for the United States to be the leader in implementing new regulatory safeguards. Officials from FSOC and its member agencies have stated that financial reforms have not completely removed too-big-to-fail perceptions but have made significant progress toward doing so. In a December 2013 speech, Treasury Secretary Jack Lew said there is growing recognition of the Dodd-Frank reforms and that market analysts are factoring them into their assumptions. However, he noted that there is still more work to be done. Under the Dodd-Frank Act, FSOC is, among other things, charged with promoting market discipline by eliminating expectations on the part of shareholders, creditors, and counterparties of large bank holding companies that the U.S. government will shield them from losses in the event of failure. FSOC and its member agencies monitor progress in addressing expectations of government support primarily through monitoring progress in implementing relevant Dodd-Frank reforms. FSOC’s 2014 annual report includes a discussion of progress made on OLA, enhanced prudential standards, and other relevant reforms. According to Treasury officials, several key areas require continued progress: International regulatory reform. In its 2013 annual report, FSOC writes that international coordination of financial regulation is essential to mitigate threats to financial stability. FDIC officials said they continue to work with foreign regulators to address issues related to creating a viable process for effecting the orderly resolution of a failing financial institution with significant cross-border activities. 
For example, FDIC is working with foreign counterparts on changes needed to ensure that derivatives contracts under other countries’ laws include a stay similar to that which applies to U.S. contracts under Dodd-Frank to prevent termination of these contracts by counterparties of a firm pulled into resolution. Federal Reserve Board staff said U.S. regulators are considering steps that may be needed to help ensure that foreign regulators do not take disruptive actions with respect to foreign operations of a U.S. firm pulled into resolution. They noted that global U.S. SIFIs may need to create intragroup loss absorbency arrangements that provide clarity and assurance to foreign regulators about how loss absorbency from the U.S. holding company will be made available to support foreign operations during a resolution.

The Federal Reserve’s long-term debt requirement. The Federal Reserve Board has identified the implementation of a long-term debt requirement as a regulatory priority that it and other agencies are actively considering. In testimony before the Senate Banking Committee, Federal Reserve Board Governor Daniel Tarullo said that successful resolution without taxpayer assistance would be most effectively accomplished if a firm has sufficient long-term, unsecured debt to absorb additional losses and to recapitalize the business transferred to a bridge operating company.

General education of market participants on reforms. Treasury officials identified the education of market participants as a key area for progress. Public outreach and education often take the form of speeches from agency officials and meetings with industry stakeholders. Regulators also solicit feedback on proposed rulemakings and regulations during public comment periods. For example, on December 18, 2013, FDIC published a public notice on the framework for a SPOE approach for resolution of failed financial institutions under OLA and solicited comments from the public through February 18, 2014, before subsequently extending the comment period through March 20, 2014.

Treasury officials also monitor market trends and outside research to inform their assessment of progress in addressing too-big-to-fail perceptions. Treasury staff have looked at trends in bond prices, credit-default-swap prices, and other market data for bank holding companies of different sizes for evidence that investors have reduced their expectations of government support. Treasury staff also monitor relevant outside research, including a growing body of research by academics and others that has used quantitative approaches to analyze the existence and size of potential funding cost advantages that the largest bank holding companies could receive because of market expectations of government support. The next section of this report includes a summary of selected studies in this literature and discusses the strengths and limitations of the methods they use. FSOC and Treasury staff have reviewed these studies and noted that while the studies have limitations, their findings are consistent with a reduction in expectations of government support following the Dodd-Frank Act.

Our analysis and the results of studies we reviewed provide evidence that the largest bank holding companies had lower funding costs than smaller bank holding companies during the 2007-2009 financial crisis but that differences may have declined or reversed in more recent years.
To inform our econometric approach, we reviewed studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. Studies we reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller financial institutions has since declined. In some cases these findings could be interpreted as evidence of advantages driven by too-big-to-fail perceptions; however, these empirical analyses are imperfect and contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies.

Our analysis, which addresses certain limitations of these studies, also provides evidence that large or systemically important bank holding companies had lower funding costs than smaller bank holding companies during the 2007-2009 financial crisis, which may have been associated with expectations of government assistance. In addition, our analysis provides some evidence that funding cost differences may have declined or reversed in recent years and that large bank holding companies may have had higher funding costs since the crisis. However, we also analyzed what funding cost differences might have been since the crisis in hypothetical scenarios where levels of credit risk in every year from 2010 to 2013 are assumed to be as high as they were during the financial crisis. This analysis suggests that large bank holding companies might have had lower funding costs than smaller bank holding companies since the crisis if levels of credit risk had remained high, indicating that changes in funding cost differences over time may be due in part to improvements in bank holding companies’ financial conditions. Although our analysis improves on certain aspects of prior studies, important limitations remain and our results should be interpreted with caution.

Studies we reviewed generally found that the largest financial institutions had lower funding costs than smaller ones during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller financial institutions has since declined. In some cases these findings could be interpreted as evidence of advantages driven by too-big-to-fail perceptions; however, these empirical analyses are imperfect and contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. We reviewed studies that estimated the funding cost difference between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. We evaluated studies that met the following criteria: (1) used a comparative empirical approach that attempted to account for differences across financial institutions that could influence funding costs, (2) included U.S. bank holding companies, and (3) included analysis of data from 2002 or later. See our scope and methodology section for more information on our criteria and approach. The 16 studies we reviewed made a wide variety of methodological decisions and came to a range of conclusions. We present the variety of methodological decisions along two key dimensions—which source of funding is analyzed (e.g., deposits, bonds) and time period of analysis—in table 2 below.
The source of funding that is analyzed is an important methodological decision because investors may have differing expectations regarding the likelihood that various sources of funding might receive government support, and these expectations could differ by the size of the financial institution. Results could differ across studies because of differences in creditor priority (subordinated debt versus senior debt) or maturity (bonds that mature several years in the future versus deposits that can be demanded at any time). We also include information in table 2 on the reported affiliations of the study authors.

Studies we reviewed generally found that the largest financial institutions had lower funding costs than smaller ones during the 2007-2009 financial crisis, but that the difference between the funding costs of the largest and smaller financial institutions has since declined. For example, one study estimated that large U.S. financial institutions had roughly 100 basis points lower bond funding costs than smaller ones in 2009, but this difference had declined to around 40 basis points by 2011. Similarly, a study of U.S. bank credit default swaps found that large U.S. bank holding companies had roughly 100 basis points lower funding costs in 2009, but this difference had declined to around 15 basis points in 2013. In some cases these differences could be interpreted as evidence of funding cost advantages driven by too-big-to-fail perceptions. In other cases, limitations in the studies make it difficult to eliminate other explanations of why funding cost differences might exist—such as greater liquidity or diversification that could be associated with size or spurious results driven by imperfect measures of funding costs.

Time period of analysis was another important difference across studies we reviewed. Few studies in our review included data beyond 2011. Therefore, most results may not reflect recent changes in the regulatory environment and market expectations discussed earlier in the report. Studies also varied in their approach to identifying financial institutions that might be perceived as too big to fail, using a variety of size and other thresholds. For example, some studies measured too-big-to-fail status by a bank’s assets; however, the threshold between too-big-to-fail and other banks varied from $50 billion to $500 billion. Several papers estimated too-big-to-fail status by size relative to industry, such as the largest 20 banks or top 10 percent by assets. These different approaches indicate that there is no consensus within the literature on which financial institutions may be considered too big to fail for the purposes of comparing funding costs.

The studies we reviewed can be grouped into categories based on their approaches. While all studies included in our review used standard approaches and attempted to address factors that might account for differences in funding costs, these empirical analyses remain imperfect.

Regression. Most studies we reviewed adopted a regression methodology in which some measure of funding costs was explained by a variety of control variables, such as risk, liquidity, and maturity, to attempt to account for differences across financial institutions. These models are standard empirical tools and are flexible in terms of the information about financial institutions and markets that they can incorporate. In some instances these models rely on a small number of indicators that may only imperfectly measure underlying default risks.
As a result, some analyses may not correctly estimate the size of any too-big-to-fail advantages because they omit important factors that influence funding costs. In other studies that account for a more thorough set of factors that influence funding costs, results may be sensitive to alternative measurements of these factors. For example, default risk is an underlying driver of funding costs, and studies may produce different results by using a bank’s earnings volatility as an indicator for default risk as opposed to other indicators such as the quality of a bank’s assets. In addition, liquidity is another important factor to account for when attempting to explain funding cost differences—investors charge banks more for less liquid sources of funding—and some studies do not adequately control for the liquidity of the funding source. Challenges similar to those involved in accurately capturing default risk arise in finding appropriate indicators for a bond’s liquidity.

Equity-based. Three papers we reviewed measured the difference between observed credit default swap spreads (which approximate bond funding costs) and hypothetical credit default swap spreads (which are estimated based on information implied by equity prices). This approach estimates hypothetical spreads with a standard theoretical model used in finance that uses the risk of a firm’s equity to estimate the risk of a firm’s debt. In doing so, the approach assumes that hypothetical spreads derived from equity prices are not influenced by any expectations of government support, but that observed credit default swap spreads are influenced by such expectations. By comparing the two spreads the approach can estimate the magnitude of expectations of government support. While this approach has some advantages, it relies on critical assumptions about how a limited number of factors influence the risk of default. As a result, these analyses may also omit important factors that influence funding costs, such as earnings.

Ratings-based. Two papers used Fitch credit ratings to estimate the funding cost difference that could be associated with potential government support. Models based on credit ratings offer a convenient way to incorporate all the factors the rating agency considers relevant to default risk and take advantage of the rating agency’s explicit separation of the impact of expected government support through, for example, the assignment of a standalone credit rating (assuming no government support) and a higher credit rating assuming government support. However, this approach assumes that all information about market expectations of default risk and government support are incorporated into credit ratings, which is a potentially weak assumption. Credit ratings had a limited impact on the views of large investors we interviewed, as previously discussed. Moreover, funding costs vary for firms within a particular rating. As a result, these studies may estimate funding costs with considerable error. Finally, results of these studies are sensitive to the credit rating agency used—for example, results based on Moody’s ratings could be quite different from results based on other agencies’ ratings because Moody’s removed expectations of government support for U.S. bank holding companies in 2013.

In addition to the approach-specific limitations, a number of general limitations related to implementation of the various approaches exist across studies we reviewed that could reduce their validity or applicability to U.S. bank holding companies.
For example, studies varied in the countries that were included in the analysis—some studies focused on the United States, while others included a broad cross-section of more than 20 countries. Studies that pooled a large number of countries in their analysis have results that may not be applicable to U.S. bank holding companies. For example, studies that included Switzerland and Iceland in their analyses may not apply to the United States because banking sectors in those countries are much larger relative to the economy. As noted above, because few studies included data past 2011, results may not reflect recent changes in the regulatory environment and market sentiment; for example, the Federal Reserve’s rule for enhanced prudential standards for large bank holding companies and FDIC’s proposed strategy for orderly liquidation. As a result of the limitations associated with these methodological choices, estimates of the size of the funding cost difference associated with a too-big-to-fail advantage based on this literature—while suggestive of general trends—are not definitive and should be interpreted with caution.

We conducted our own analysis to assess the extent to which the largest bank holding companies have had lower funding costs as a result of perceptions that the government would not allow them to fail. Overall, our analysis provides some evidence that large or systemic bank holding companies had lower funding costs than smaller ones during the 2007-2009 financial crisis that may have been associated with expectations of government assistance. Our analysis provides only limited evidence that large bank holding companies had lower funding costs since the crisis and instead provides some evidence that the opposite may have been true at the levels of credit risk that prevailed in those years. However, in hypothetical scenarios where levels of credit risk in every year from 2010 to 2013 are assumed to be as high as they were during the financial crisis, our analysis suggests that large bank holding companies might have had lower funding costs than smaller bank holding companies. Although our analysis improves on certain aspects of prior studies, important limitations remain and our results should be interpreted with caution.

To conduct our analysis, we developed a series of econometric models—models that use statistical techniques to estimate the relationships between quantitative economic and financial variables—based on our assessment of relevant studies and expert views. These models estimate the relationship between bank holding companies’ bond funding costs and their size, while also controlling for other drivers of bond funding costs, including credit risk and bond liquidity. Key features of our econometric approach include the following:

U.S. bank holding companies. To better understand the relationship between bank holding company funding costs and size in the context of the U.S. economic and regulatory environment, we only analyzed U.S. bank holding companies. In contrast, some of the literature we reviewed analyzed nonbank financial companies and foreign companies.

2006-2013 time period. To better understand the relationship between bank holding company funding costs and size in the context of the current economic and regulatory environment, we analyzed the period from 2006 through 2013, which includes the recent financial crisis as well as years before the crisis and following the enactment of the Dodd-Frank Act.
In contrast, some of the literature we reviewed did not analyze data in the years after the financial crisis.

Bond funding costs. We used bond yield spreads as our measure of bank holding company funding costs because they are a direct measure of what investors charge bank holding companies to borrow money and because they are sensitive to credit risk and hence to expected government support. This indicator of funding costs has distinct advantages over certain other indicators used in studies we reviewed, including credit ratings, which do not directly measure funding costs, and total interest expense, which mixes the costs of funding from multiple sources.

Alternative measures of size. Size or systemic importance can be measured in multiple ways, as reflected in our review of the literature. Based on that review and the comments we received from external reviewers, we used four different measures of size or systemic importance: total assets; total assets and the square of total assets; whether or not a bank holding company was designated a global systemically important bank (GSIB) by the Financial Stability Board in November 2013; and whether or not a bank holding company had assets of $50 billion or more.

Extensive controls for bond liquidity, credit risk, and other key factors. To account for the many factors that could influence funding costs, we controlled for credit risk, bond liquidity, and other key factors in our models. We included a number of variables that are associated with the risk of default, including measures of capital adequacy, asset quality, earnings, and volatility. We also included a number of variables that can be used to measure bond liquidity. Finally, we included variables that measure other key characteristics of bonds, such as time to maturity, and key characteristics of bank holding companies, such as operating expenses. Our models include a broader set of controls for credit risk and bond liquidity than some studies we reviewed and, as we discuss later, we directly assess the sensitivity of our funding cost estimates to using alternative controls.

Multiple model specifications. To assess the sensitivity of our results to the alternative measures of size, bond liquidity, and credit risk discussed above, we estimated multiple model specifications. We developed models using four alternative measures of size, two alternative sets of measures of capital adequacy, six alternative measures of volatility, and three alternative measures of bond liquidity. In contrast, some of the studies we reviewed estimated a more limited number of model specifications.

Annual estimates of models. To allow for changes in investors’ beliefs about the likelihood of government rescues between the years of the financial crisis—when emergency government programs designed to assist financial institutions were available—and the years following the crisis, our models allow the relationship between bank holding company funding costs and size to vary over time. In contrast, some of the studies we reviewed assumed that the relationship between bank holding company funding costs and size was constant over time.

Link between size and credit risk. To account for the possibility that investors’ beliefs about government rescues affect their responsiveness to credit risk, our models allow the relationship between bank holding company funding costs and credit risk to depend on size.
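To illustrate the structure of these models, the following Python sketch estimates one such specification on simulated data. It is a hypothetical rendering of the approach described above—one model, one year, with invented variable names, invented coefficients, and randomly generated data in place of GAO’s actual bond-level dataset. It includes a size-credit risk interaction, clusters standard errors by issuer, and concludes with a joint test that the size terms are zero, analogous to the joint significance tests discussed later in this report.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated bond-quarter observations for one year (illustrative only).
n = 400
df = pd.DataFrame({
    "bhc_id": rng.integers(0, 80, n),                          # issuing bank holding company
    "log_assets": rng.uniform(np.log(1e9), np.log(2e12), n),   # size measure
    "credit_risk": rng.normal(0.0, 1.0, n),                    # standardized default-risk proxy
    "bond_liquidity": rng.normal(0.0, 1.0, n),                 # liquidity proxy (e.g., issue size)
    "maturity": rng.uniform(1.0, 10.0, n),                     # remaining time to maturity
})
# Simulated yield spread in basis points; all coefficients are invented.
df["spread"] = (
    300.0
    - 5.0 * df["log_assets"]
    + 40.0 * df["credit_risk"]
    - 1.5 * df["log_assets"] * df["credit_risk"]
    - 10.0 * df["bond_liquidity"]
    + 2.0 * df["maturity"]
    + rng.normal(0.0, 25.0, n)
)

# One specification: spread on size, credit risk, their interaction, and controls.
# 'log_assets * credit_risk' expands to both main effects plus the interaction.
res = smf.ols(
    "spread ~ log_assets * credit_risk + bond_liquidity + maturity", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["bhc_id"]})
print(res.summary())

# Joint test that the size coefficient and the size-credit risk interaction
# are both zero, analogous to the joint significance tests GAO reports.
print(res.f_test("log_assets = 0, log_assets:credit_risk = 0"))
```

GAO’s actual analysis repeats this kind of estimation for 42 combinations of alternative size, capital adequacy, volatility, and bond liquidity measures in each year from 2006 through 2013.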
Altogether, we estimated 42 different models for each year from 2006 through 2013 and then used these models to compare bond yield spreads—our measure of bond funding costs—for bank holding companies of different sizes but with the same level of credit risk. Figure 1 shows our models’ comparisons of the difference between bond funding costs for bank holding companies with $1 trillion in assets and average credit risk and bond funding costs for similar bank holding companies with $10 billion in assets, for each model and for each year. Each circle and dash in figure 1 shows the comparison of bond funding costs for a different model. Circles show model-estimated differences that were statistically significant at the 10 percent level, while dashes represent differences that were not statistically significant at that level. Circles and dashes below zero correspond to models suggesting that bank holding companies with $1 trillion in assets have lower bond funding costs than bank holding companies with $10 billion in assets, and vice versa.

Our estimates of the relationship between the size of a bank holding company and the yield spreads on its bonds are limited by several factors and should be interpreted with caution. These factors present challenges to using our results and the results of other studies as the basis for public policy responses to concerns about the risks posed by large financial institutions.

Investors’ beliefs about the likelihood of government support are composed of several different elements, including the likelihood that a bank holding company will fail, the likelihood that it will be rescued by the government if it fails, and the size of the losses that the government may impose on investors if it rescues the bank holding company. Like the methodologies used in the literature we reviewed, our methodology does not allow us to precisely identify the influence of each of these factors. As a result, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more of these components, but we cannot identify which with certainty. For example, if bond funding costs for a bank holding company with $1 trillion in assets are less than those for a bank holding company with $10 billion in assets but the difference decreases over time, this trend may indicate that investors believe that the larger bank holding company is relatively less likely to fail, which could be the case if the level of credit risk is falling over time, whether due to market pressure, regulatory requirements, or other reasons. This trend could also indicate that investors believe that the government has become less likely to rescue the larger bank holding company if it fails or more likely to impose losses on investors in a rescue.

In addition, our estimates of differences in bond funding costs for bank holding companies of different sizes may reflect factors other than investors’ beliefs about the likelihood of government support. We have taken into account many of the factors that may help explain differences in yield spreads for bank holding companies of different sizes, such as credit risk and bond liquidity. However, we may not have taken into account all possible factors. If a factor that we have not taken into account is associated with size, then our results may reflect the relationship between bond funding costs and this omitted factor instead of, or in addition to, the relationship between bond funding costs and bank holding company size.
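The comparisons plotted in figure 1—and the hypothetical high-credit-risk scenarios discussed above—can be sketched in the same hypothetical setting. Continuing the simulated example (invented data and coefficients, not GAO’s dataset), the code below predicts spreads for two otherwise identical bank holding companies, one with $1 trillion in assets and one with $10 billion, first at the sample-average level of credit risk and then at an elevated, crisis-like level, and tests whether each difference is statistically significant.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from patsy import build_design_matrices

rng = np.random.default_rng(0)

# Same simulated bond-level data as in the previous sketch (illustrative only).
n = 400
df = pd.DataFrame({
    "bhc_id": rng.integers(0, 80, n),
    "log_assets": rng.uniform(np.log(1e9), np.log(2e12), n),
    "credit_risk": rng.normal(0.0, 1.0, n),
    "bond_liquidity": rng.normal(0.0, 1.0, n),
    "maturity": rng.uniform(1.0, 10.0, n),
})
df["spread"] = (
    300.0 - 5.0 * df["log_assets"] + 40.0 * df["credit_risk"]
    - 1.5 * df["log_assets"] * df["credit_risk"]
    - 10.0 * df["bond_liquidity"] + 2.0 * df["maturity"]
    + rng.normal(0.0, 25.0, n)
)
res = smf.ols(
    "spread ~ log_assets * credit_risk + bond_liquidity + maturity", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["bhc_id"]})

def spread_difference(credit_risk_level):
    """Estimated spread difference ($1 trillion minus $10 billion BHC) at a given credit risk level."""
    scenario = pd.DataFrame({
        "log_assets": [np.log(10e9), np.log(1e12)],   # $10 billion vs. $1 trillion
        "credit_risk": [credit_risk_level] * 2,
        "bond_liquidity": [df["bond_liquidity"].mean()] * 2,
        "maturity": [df["maturity"].mean()] * 2,
    })
    # Build the two design rows the fitted model would use for these BHCs;
    # their difference is a linear combination of coefficients, so a t-test
    # on the contrast gives the estimate and its statistical significance.
    (X,) = build_design_matrices([res.model.data.design_info], scenario)
    X = np.asarray(X)
    return res.t_test(X[1] - X[0])

print("At average credit risk:", spread_difference(df["credit_risk"].mean()))
print("At crisis-like credit risk:", spread_difference(df["credit_risk"].mean() + 2.0))
```

Differences estimated this way, repeated across all 42 models and all 8 years, generate the circles and dashes in figure 1; negative values correspond to the points below zero.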
Our estimates of differences in bond funding costs for bank holding companies of different sizes may also reflect differences in the characteristics of bank holding companies that choose to issue bonds. Bank holding companies that issue bonds may differ from those that do not in ways that may or may not be observable. If such differences exist and are unobservable, then our models’ comparisons are likely to be consistently either too high or too low, depending on the relationship between size and the relevant unobservable characteristic. However, if bank holding companies that issue bonds differ from those that do not in ways that are observable in our model, then our models’ comparisons of bond funding cost differences for bank holding companies of different sizes are unlikely to be consistently either too high or too low. We found some evidence that the relevant differences are observable. Investors with whom we spoke told us that larger bank holding companies are generally more likely to issue bonds than smaller ones because they can issue a large enough quantity of debt to satisfy investors’ demand for liquidity and to allow investors to make a large enough investment to cover their transaction costs. Thus, size, which is observable, may be an important difference between bank holding companies that issue bonds and those that do not. Importantly, bank holding company size matters in this case because it is associated with bond issue size, which we control for, not because it is associated with investors’ beliefs about government rescues.

In general, our estimates of the impact of size on bond funding costs may reflect a relationship between size, credit risk, or other explanatory variables and the part of bond funding costs that is not explained by our model (endogeneity). This could occur if any of our control variables are influenced by bond funding costs. In this case, our estimates of the magnitude of the association between size and bond funding costs will be imperfect, and our ability to infer a causal relationship between size and bond funding costs will be limited.

Historical estimates of differences in bond funding costs for bank holding companies of different sizes are not indicative of future differences. As we have discussed, our estimates of the historical relationship between bank holding company size and bond funding costs vary from year to year, and the relationship may well change again in the future. As we have noted, the Dodd-Frank Act imposes new and higher fees on large bank holding companies and requires the Federal Reserve Board to subject large bank holding companies to enhanced regulatory standards for capital, liquidity, and risk management. These enhanced standards may help to reduce the likelihood that a large bank holding company will fail, which may in turn alter investors’ beliefs about the likelihood of government support and thus affect the size of any differences in yield spreads on bonds issued by large and small bank holding companies. Improvements in economic conditions, such as faster economic growth and lower unemployment, may have a similar effect. Finally, changes in the structure of financial markets, such as an increase in the share of credit provided by nonbank financial companies that reduces the systemic importance of large bank holding companies, could also lead investors to change their beliefs about government rescues in future episodes of individual or system-wide distress.
Finally, our estimates of the differences in bond funding costs for bank holding companies of different sizes do not necessarily reflect the harm that the failure of a large bank holding company could do to the economy. Bond funding costs reflect the risk that a bank holding company might fail and not be able to fully repay its investors. However, parties other than investors may be harmed if a bank holding company fails. For example, the customers of a failed bank holding company may be harmed if they have less access to credit or to specialized services provided by the bank holding company, which could be the case if the bank holding company has a large enough share of the market.

We made copies of the draft report available to FDIC, the Federal Reserve Board, FSOC, OCC, and Treasury for their review and comment. We also provided excerpts of the draft report for technical comment to Fitch, Moody’s, Standard and Poor’s, and the International Monetary Fund. All of these agencies and third parties, except for FSOC, provided technical comments, which we have incorporated, as appropriate.

In its written comments, which are reprinted in appendix II, Treasury generally agreed with the results of our analysis and commented that our draft report represents a meaningful contribution to the literature. Treasury further commented that the Dodd-Frank Act makes clear that shareholders, creditors, and executives—not taxpayers—will be responsible if a large company fails and that our results reflect increased market recognition that the Dodd-Frank Act ended “too big to fail” as a matter of law. While our results do suggest that bond funding cost differences between large and smaller bank holding companies may have declined or reversed since the 2007-2009 financial crisis, we also report that a higher credit risk environment could be associated with lower bond funding costs for large bank holding companies than for small ones. Furthermore, as we have noted, many market participants we spoke with believe that recent regulatory reforms have reduced but not eliminated the perception of “too big to fail,” and both they and Treasury officials indicated that additional steps were required to address “too big to fail.” As discussed in the final section of our report on page 56, changes over time in our estimates of the relationship between bond funding costs and size may reflect changes in one or more components of investors’ beliefs about government support—such as their views on the likelihood that a bank holding company will fail and the likelihood it will be rescued if it fails—but we cannot precisely identify the influence of each of these components with certainty. A decline or reversal of funding cost advantages for large bank holding companies could indicate that investors believe that the government has become less likely to rescue a large bank holding company if it fails or more likely to impose losses on investors in a rescue. This trend could also indicate that investors believe that large bank holding companies are less likely to fail.

On separate dates in July 2014, Treasury, the Federal Reserve Board, OCC, and FDIC provided via email technical comments related to the draft report’s analysis of funding cost differences between large and small bank holding companies. We summarize their most significant comments and our responses below.
Treasury provided comments on our presentation of the impact of a higher credit risk environment on our analysis of bond funding costs and on the statistical robustness of these results. In response to these comments, we revised text in the Highlights and in the report body to clarify that the results of this analysis reflect hypothetical scenarios and to give greater attention to the potential impacts of regulatory reforms. With respect to the statistical robustness of these results, we note that the draft report contained clear information about the statistical significance of our results. Importantly, we note that whether one considers the estimates from all 42 models for 2013 or only the 10 models with statistically significant results, higher credit risk substantially increases (1) the number of models that suggest bond funding costs would have been lower for the largest bank holding companies than for smaller bank holding companies and (2) the size of funding cost differences in 2013. In addition, we amended the draft to clarify that our results for the hypothetical scenario for 2013 differ from our results for 2008, in which all 42 models predicted lower funding costs for larger bank holding companies.

Treasury and the Federal Reserve Board provided comments related to the draft report’s presentation of statistical significance in figures 1, 2, and 3. In response to these comments, we made formatting changes to the figures to more clearly differentiate estimates that are statistically significant from those that are not. In addition, we note that table 7 on pages 79-81 of the report contains some of the data used to create figures 1, 2, and 3 and differentiates between estimates that are statistically significant at the 1 percent, 5 percent, and 10 percent levels. Finally, while statistically insignificant estimates may be viewed as weaker evidence than statistically significant estimates and may influence how our results are interpreted, we note that statistical significance is not the only relevant characteristic of an econometric estimate and that presenting the full range of results allows readers to better assess their magnitude and economic significance.

Treasury and the Federal Reserve Board also commented that, in comparing bond funding costs for large and small bank holding companies, a bank holding company with $10 billion in assets is too small to make a meaningful comparison to a bank holding company with $1 trillion in assets. They commented that a bank holding company with at least $50 billion in assets would provide a more relevant comparison for this analysis. While we agree that bank holding companies with $50 billion in assets may be more similar to $1 trillion bank holding companies than bank holding companies with $10 billion in assets, we used a smaller size for small bank holding companies because bank holding companies with $50 billion or more in assets may be viewed by investors as “large” and systemically important, in part because $50 billion in assets is the size threshold for Dodd-Frank Act requirements related to enhanced regulatory standards. While we agree that bank holding companies of different sizes have different characteristics, we compared estimated funding costs for bank holding companies assuming their credit risk and other characteristics are identical.
Finally, increasing the size of the small bank holding company in our comparisons would not have a substantive impact on the sign or statistical significance of our estimated differences in funding costs, nor would it change the trends in estimated differences in funding costs over time.

The Federal Reserve Board and OCC commented that few of the estimated coefficients on the variables measuring size and size interacted with credit risk reported in table 5 were individually statistically significant, suggesting that there is little statistical evidence of a relationship between bond funding costs and size. To address this concern, we conducted hypothesis tests of whether the coefficients on the size and size-credit risk interaction terms are jointly equal to zero. The results of these hypothesis tests suggest that the coefficients on the size and size-credit risk interaction terms are jointly significant at the 5 percent level, indicating that there is statistical evidence of a relationship between bond funding costs and size. We report the results of our joint hypothesis tests in table 5 on pages 74 and 76 of the report. In addition, we note that the draft report contains coefficient estimates for only the 6 baseline models of the 42 total models for 2008 and 2013 and that those 6 models are presented as examples and do not fully reflect the impact of size in all the specifications in those years or in other years. However, we believe the regression-level detail on the 6 baseline models included in the report is sufficient to assist readers looking to understand our methodology and conclusions.

OCC suggested that selection bias and omitted variables bias could reduce the validity of our econometric results. We agree that these biases are potential limitations of the model and are among the reasons the results should be interpreted with caution. We discuss the potential impact of these concerns on pages 56-57.

OCC and FDIC commented on the endogeneity of some independent variables and the impact this could have on our results. We agree that endogeneity is a potential limitation of the model and is among the reasons the results should be interpreted with caution. In response to this comment, we added a paragraph discussing the potential impact of endogeneity on our results on page 57 of the report.

We are sending copies of this report to FDIC, the Federal Reserve Board, FSOC, OCC, Treasury, interested congressional committees, members, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-4802 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To assess the extent to which the largest bank holding companies have received funding cost advantages as a result of perceptions that the government would not allow them to fail, we conducted an econometric analysis of the relationship between a bank holding company’s size or systemic importance and its funding costs. Bank holding companies obtain funds from investors—such as depositors, creditors, or shareholders—which they use to finance assets, such as various types of credit.
The prices that bank holding companies pay to obtain these funds are influenced by several factors, including credit risk—the likelihood that bank holding companies will repay the funds they borrowed as agreed—and other factors. Funding cost advantages may arise if investors believe that the government is more likely to support larger bank holding companies in distress than smaller bank holding companies in distress. This belief may lead investors to view larger bank holding companies as having less credit risk than smaller bank holding companies and thus to charge larger bank holding companies a lower price to borrow than smaller bank holding companies.

We used a multivariate regression model to estimate the relationship between bank holding companies’ funding costs and their size while controlling for factors other than size that may also influence funding costs. Our general regression model is the following:

$$\text{funding cost}_{bq} = \alpha + \beta\,\text{size}_{bq} + \gamma\,\text{credit risk}_{bq} + \delta\,(\text{size}_{bq} \times \text{credit risk}_{bq}) + \Theta X_{bq} + \varepsilon_{bq}$$

In this model, b denotes the bank holding company, q denotes the quarter, funding cost is the bank holding company’s cost of funding in a quarter, size is a measure of the bank holding company’s size at the beginning of the quarter, credit risk is a list of proxies for the bank holding company’s credit risk, X is a list of other variables that may influence funding costs, ε is an idiosyncratic error term, and α, β, γ, δ, and Θ are parameters to be estimated. The parameter β captures the direct relationship between a bank holding company’s funding cost and its size. The parameter δ captures the indirect relationship between a bank holding company’s funding cost and its size that exists if the size of a bank holding company affects the relationship between its funding cost and credit risk. If greater values of the size measure are associated with larger bank holding companies, if greater values of the credit risk proxies are associated with greater credit risk, and if investors view larger bank holding companies as less risky than smaller bank holding companies due to beliefs that the government is more likely to rescue larger bank holding companies in distress, then either β is less than zero, δ is less than zero, or both. However, the parameters β and δ may also reflect factors other than these beliefs.

We used a measure of funding costs based on bonds issued by bank holding companies that is available for bank holding companies with a wide variety of sizes. Bank holding companies use a variety of funding types from different sources, including various types of deposits, bonds, and equity. We used bond yield spreads—the difference between the rate of return on a bond and the rate of return on a Treasury bond of comparable maturity—to measure a bank holding company’s cost of bond funding. Treasury securities are widely viewed as a risk-free asset, so the yield spread measures the price that investors charge a bank holding company to borrow to compensate them for credit risk and other factors.

We focused on bond funding costs for several reasons. First, bonds are traded in secondary markets, so timely information about changes in their yield spreads, which reflect investors’ perceptions of the credit risk of the bond’s issuing bank holding company, is easily observable. In contrast, some uninsured deposit products are not traded in secondary markets, so changes in the prices of those deposits, which may reflect depositors’ perceptions of the riskiness of the bank holding company, may be less easy to observe.
Second, bond yield spreads are a direct measure of bank holding companies’ funding costs. In contrast, credit ratings are indirect measures of bank holding companies’ funding costs because funding costs can vary for firms with the same rating. Similarly, total interest expense as reported on a bank holding company’s balance sheet is an imperfect measure of funding costs because total interest expense may aggregate the prices of liabilities with many important differences, including term and creditor priority. Third, bonds generally rank higher in a bank holding company’s capital structure than equity, so bondholders are less likely to suffer losses and more likely to be repaid if a bank holding company becomes distressed. Bondholders are thus more likely to benefit if a distressed bank holding company is rescued by the government. In contrast, equity holders generally rank lowest in a bank holding company’s capital structure and are the first to suffer losses if a bank holding company becomes distressed. Shareholders are thus the least likely to benefit if a distressed bank holding company is rescued by the government. It follows that the cost of bond funding is more likely to reflect investors’ beliefs about the likelihood of government support than the cost of equity funding. Fourth, bank holding companies with a wide variety of sizes issue bonds, including some with less than $10 billion in assets. In contrast, credit default swaps—the prices of which likely reflect perceptions of a bank holding company’s credit risk—are available for only a small number of large bank holding companies.

We used Bloomberg to identify U.S. bank holding companies with more than $500 million in assets that were active in one or more years from 2006 to 2013, and to identify all plain vanilla, fixed-rate, senior unsecured bonds issued by these bank holding companies, excluding bonds with a government guarantee. These criteria define the set of bank holding companies and bonds we analyzed.

In addition to the contact named above, Karen Tremba (Assistant Director), John Fisher (Analyst in Charge), Bethany Benitez, Michael Hoffman, Risto Laboski, Courtney LaFountain, Rob Letzler, Marc Molino, Jason Wildhagen, and Jennifer Schwartz made significant contributions to this report. Other assistance was provided by Abigail Brown, Rudy Chatlos, Stephanie Cheng, and José R. Peña.

Acharya, Viral V., Deniz Anginer, and A. Joseph Warburton. “The End of Market Discipline? Investor Expectations of Implicit State Guarantees.” Social Science Research Network Working Paper. December 2013.

Araten, Michel and Christopher M. Turner. “Understanding the Funding Cost Differences between Global Systemically Important Banks (G-SIBs) and Non-G-SIBs in the United States.” Social Science Research Network Working Paper. March 2012.

Balasubramnian, Bhanu and Ken Cyree. “Has Market Discipline on Banks Improved after the Dodd-Frank Act?” Journal of Banking & Finance, vol. 41 (2014): 155-166.

Balasubramnian, Bhanu and Ken Cyree. “The End of Too-Big-to-Fail? Evidence from Senior Bank Bond Yield Spreads around the Dodd-Frank Act.” Social Science Research Network Working Paper. June 2012.

Barth, Andreas and Isabel Schnabel. “Why Banks are Not Too Big To Fail—Evidence from the CDS Market.” Economic Policy, vol. 28, no. 74. April 2013: 335-369.

Bertay, Ata Can, Asli Demirgüç-Kunt, and Harry Huizinga. “Do We Need Big Banks? Evidence on Performance, Strategy and Market Discipline.” Journal of Financial Intermediation, vol. 22 (2013): 532-558.
International Monetary Fund. “How Big is the Implicit Subsidy for Banks Considered Too Important to Fail?” Global Financial Stability Report, Ch. 3 (Washington, D.C.: 2014).

Jacewitz, Stefan and Jonathan Pogach. “Deposit Rate Advantages at the Largest Banks.” Social Science Research Network Working Paper. September 2012.

Keppo, Jussi and Jing Yang. “The Value of Too-Big-To-Fail Subsidy.” Unpublished working paper. September 2013.

Kumar, Aditi and John Lester. Do Deposit Rates Show Evidence of Too Big To Fail Effects? An Updated Look at the Empirical Evidence through 2012 among US Banks (Oliver Wyman, Inc., 2014).

Kumar, Aditi and John Lester. Do Bond Spreads Show Evidence of Too Big To Fail Effects? Evidence from 2009-2013 among US Bank Holding Companies (Oliver Wyman, Inc., 2014).

Li, Zan, Shisheng Qu, and Jing Zhang. Quantifying the Value of Implicit Government Guarantees for Large Financial Institutions (Moody’s Analytics, 2011).

Santos, João. “Evidence from the Bond Market on Banks’ ‘Too Big To Fail’ Subsidy.” Federal Reserve Bank of New York Economic Policy Review, vol. 20, no. 2. March 2014.

Tsesmelidakis, Zoe and Robert C. Merton. “The Value of Implicit Guarantees.” Social Science Research Network Working Paper. September 2012.

Ueda, Kenichi and B. Weder di Mauro. “Quantifying Structural Subsidy Values for Systemically Important Financial Institutions.” Journal of Banking & Finance, vol. 37, no. 10. October 2013: 3830-3842.

Völz, Manja and Michael Wedow. “Does Banks’ Size Distort Market Prices? Evidence for Too-Big-To-Fail in the CDS Market.” Deutsche Bundesbank Discussion Paper, Series 2: Banking and Financial Studies, no. 6 (2009).
“Too big to fail” is a market notion that the federal government would intervene to prevent the failure of a large, complex financial institution to avoid destabilizing the financial sector and the economy. Expectations of government rescues can distort investor incentives to properly price the risks of firms they view as too big to fail, potentially giving rise to funding and other advantages for these firms.

GAO was asked to review the benefits that the largest bank holding companies (those with more than $500 billion in assets) have received from perceived government support. This is the second of two GAO reports on government support for bank holding companies. The first study focused on actual government support during the 2007-2009 financial crisis and recent statutory and regulatory changes related to government support for these firms. This report examines how financial reforms have altered market expectations of government rescues and the existence or size of funding advantages the largest bank holding companies may have received due to perceived government support.

GAO reviewed relevant statutes and rules and interviewed regulators, rating agencies, investment firms, and corporate customers of banks. GAO also reviewed relevant studies and interviewed authors of these studies. Finally, GAO conducted quantitative analyses to assess potential “too-big-to-fail” funding cost advantages.

In its comments, the Department of the Treasury generally agreed with GAO’s analysis. GAO incorporated technical comments from the financial regulators, as appropriate.

While views varied among market participants with whom GAO spoke, many believed that recent regulatory reforms have reduced but not eliminated the likelihood the federal government would prevent the failure of one of the largest bank holding companies. Recent reforms provide regulators with new authority to resolve a large failing bank holding company in an orderly process and require the largest bank holding companies to meet stricter capital and other standards, increasing costs and reducing risks for these firms. In response to reforms, two of three major rating agencies reduced or removed the assumed government support they incorporated into some large bank holding companies’ overall credit ratings. Credit rating agencies and large investors cited the new Orderly Liquidation Authority as a key factor influencing their views. While several large investors viewed the resolution process as credible, others cited potential challenges, such as the risk that multiple failures of large firms could destabilize markets. Remaining market expectations of government support can benefit large bank holding companies if they affect investors’ and customers’ decisions.

GAO analyzed the relationship between a bank holding company’s size and its funding costs, taking into account a broad set of other factors that can influence funding costs. To inform this analysis and to understand the breadth of methodological approaches and results, GAO reviewed selected studies that estimated funding cost differences between large and small financial institutions that could be associated with the perception that some institutions are too big to fail. Studies GAO reviewed generally found that the largest financial institutions had lower funding costs during the 2007-2009 financial crisis but that the difference between the funding costs of the largest and smaller institutions has since declined.
However, these empirical analyses contain a number of limitations that could reduce their validity or applicability to U.S. bank holding companies. For example, some studies used credit ratings, which provide only an indirect measure of funding costs. GAO’s analysis, which addresses some limitations of these studies, suggests that large bank holding companies had lower funding costs than smaller ones during the financial crisis but provides mixed evidence of such advantages in recent years. However, most models suggest that such advantages may have declined or reversed.

GAO developed a series of statistical models that estimate the relationship between bank holding companies’ bond funding costs and their size or systemic importance, controlling for other drivers of bond funding costs, such as bank holding company credit risk. Key features of GAO’s approach include the following:

U.S. Bank Holding Companies: The models focused on U.S. bank holding companies to better understand the relationship between funding costs and size in the context of the U.S. economic and regulatory environment.

Bond Funding Costs: The models used bond yield spreads—the difference between the yield or rate of return on a bond and the yield on a Treasury bond of comparable maturity—to measure funding costs because they are a risk-sensitive measure of what investors charge bank holding companies to borrow.

Extensive Controls: The models controlled for credit risk, bond liquidity, and other variables to account for factors other than size that could affect funding costs.

Multiple Models: GAO used 42 models for each year from 2006 through 2013 to assess the impact of using alternative measures of credit risk, bond liquidity, and size and to allow the relationship between size and bond funding costs to vary over time with changes in the economic and regulatory environment.

Credit Risk Levels: GAO compared bond funding costs for bank holding companies of different sizes at the average level of credit risk for each year, at low and high levels of credit risk for each year, and at the average level of credit risk during the financial crisis.

The figure below shows the differences between model-estimated bond funding costs for bank holding companies with $1 trillion in assets and bank holding companies with $10 billion in assets, with average levels of credit risk in each year. Circles represent statistically significant model-estimated differences.

Estimates from 42 Models of Average Bond Funding Cost Differences between Bank Holding Companies with $1 Trillion and $10 Billion in Assets, 2006-2013

Notes: GAO estimated econometric models of the relationship between BHC size and funding costs using data for U.S. BHCs and their outstanding senior unsecured bonds for the first quarter of 2006 through the fourth quarter of 2013. The models used bond yield spreads to measure funding costs and controlled for credit risk factors such as capital adequacy, asset quality, earnings, maturity mismatch, and volatility, as well as bond liquidity and other characteristics of bonds and BHCs that can affect funding costs. GAO estimated 42 models for each year from 2006 through 2013 to assess the sensitivity of estimated funding cost differences to using alternative measures of capital adequacy, volatility, bond liquidity, and size or systemic importance. GAO used the models to compare bond funding costs for BHCs of different sizes but the same levels of credit risk, bond liquidity, and other characteristics.
This figure compares bond funding costs for BHCs with $1 trillion and $10 billion in assets, for each model and for each year, with average levels of credit risk. Each circle and dash shows the comparison for a different model, where circles and dashes below zero suggest BHCs with $1 trillion in assets have lower bond funding costs than BHCs with $10 billion in assets, and vice versa.

All 42 models found that larger bank holding companies had lower bond funding costs than smaller ones in 2008 and 2009, while more than half of the models found that larger bank holding companies had higher bond funding costs than smaller ones in 2011 through 2013, given the average level of credit risk each year (see figure). However, the models’ comparisons of bond funding costs for bank holding companies of different sizes varied depending on the level of credit risk. For example, in hypothetical scenarios where levels of credit risk in every year from 2010 to 2013 are assumed to be as high as they were during the financial crisis, GAO’s analysis suggests that large bank holding companies might have had lower funding costs than smaller ones in recent years. However, reforms in the Dodd-Frank Wall Street Reform and Consumer Protection Act, such as enhanced standards for capital and liquidity, could enhance the stability of the financial system and make such a credit risk scenario less likely.

This analysis builds on certain aspects of prior studies, but important limitations remain and these results should be interpreted with caution. GAO’s estimates of differences in funding costs reflect a combination of several factors, including investors’ beliefs about the likelihood a bank holding company will fail and the likelihood it will be rescued by the government if it fails, and cannot precisely identify the influence of each factor. In addition, these estimates may reflect factors other than investors’ beliefs about the likelihood of government support and may also reflect differences in the characteristics of bank holding companies that do and do not issue bonds. Finally, GAO’s estimates, like all past estimates, are not indicative of future trends.
As we discussed in our 2002 report, TPCC member agencies perform functions that include identifying export opportunities, providing financing and insurance, and working to create open markets for U.S. exports and investments. Chaired by the Secretary of Commerce, the TPCC currently has a staff of three Commerce trade professionals who work with other member agency officials on trade promotion initiatives and prepare the national export strategy. The TPCC generally meets once or twice a year at the head-of-agency level and quarterly or monthly at the deputy level (e.g., assistant secretary or under secretary); TPCC member agency staff discuss issues frequently but meet on an ad hoc basis.

The TPCC’s national export strategies include a table showing member agencies’ budget authority for trade promotion activities. Since 2000, this table has included all or part of the budgets of 11 of the TPCC’s member agencies—the Departments of Agriculture, Commerce, Energy, Labor, State, and the Treasury; Ex-Im Bank; OPIC; SBA; the U.S. Trade and Development Agency; and the U.S. Trade Representative. Although the TPCC, together with the agencies, determines which agencies’ budgets are included in this table, the agencies themselves decide which of their programs or activities constitute trade promotion. However, the tables present only the total trade promotion budget authority for each agency without detailing the programs and activities.

The Department of Agriculture counts nine programs as trade promotion, along with salaries and expenses for its Foreign Agricultural Service. The Commerce Department counts three units within its International Trade Administration—Trade Promotion and U.S. and Foreign Commercial Service, Manufacturing and Services, and Market Access and Compliance—and a grant for promoting foreign tourism within the United States. The State Department considers a portion of its budget to be related to trade promotion. This portion includes part of State’s budgets for its regional bureaus, some of which have overseas staff in locations with no Foreign Commercial Service officers. It also includes State’s budget for trade capacity building, advocacy, and promotion activities performed by, or funded through, department offices such as the Office of Commercial and Business Affairs. The national export strategies’ trade promotion program budget authority tables also include the entire budgets of Ex-Im Bank, OPIC, the U.S. Trade and Development Agency, and the U.S. Trade Representative, as well as very small amounts of the budgets of the Departments of Energy, Labor, and the Treasury and SBA.

GAO has previously evaluated efforts, such as export promotion, that cut across more than one agency and has identified ways to help agencies address barriers to working collaboratively. In an October 2005 review of several joint agency efforts, we identified eight key practices that can help enhance and sustain interagency collaboration:

Define and articulate a common outcome—that is, a measurable goal.

Establish mutually reinforcing or joint strategies.

Identify and address needs by leveraging resources.

Agree on roles and responsibilities.

Establish compatible policies, procedures, and other means to operate across agency boundaries.

Develop mechanisms to monitor, evaluate, and report on results.

Reinforce agency accountability for collaborative efforts through agency plans and reports.
Reinforce individual accountability for collaborative efforts through performance management systems.

GAO reported on the need for an export strategy before the creation of the TPCC: in 1992, we found significant problems associated with inefficiency, overlap, and duplication of U.S. trade promotion efforts. Since its inception, we have reviewed the TPCC’s progress in coordinating trade promotion several times. For example, in 1994, we recommended that the TPCC establish priorities with a well-reasoned and strong analytical basis, and in 1996, we commented that the TPCC lacked measures of the value added by export services and that this limited its ability to contribute to the budget process. We made similar comments in 1998 and 2002. In our 2002 report, we recommended that the Chairman of the TPCC ensure that its national export strategies consistently identify specific goals established by the agencies within the strategies’ broad priorities; identify the allocation of agencies’ resources in support of their specific goals; and analyze the progress made in addressing the recommendations in the TPCC’s prior annual strategies.

Since 2002, the budget authority for trade promotion reported in the national export strategies has fallen or remained relatively unchanged at TPCC member agencies, but the implication of these trends for agencies’ trade promotion activities is not clear. The total reported budget authority for trade promotion fell by more than one-third between fiscal years 2002 and 2007, primarily owing to decreased authority for Agriculture and Ex-Im Bank. At the same time, the trade promotion budget authority for other key TPCC agencies remained relatively flat. (See fig. 1.) The four agencies named in figure 1 account for more than 90 percent of TPCC member agencies’ combined budget authority related to trade promotion, which ranged from $2.2 billion in fiscal year 2002 to $1.5 billion in fiscal year 2006 (enacted) and $1.3 billion (requested) in fiscal year 2007. The Agriculture Department has consistently held the largest share, more than 40 percent over the last 4 years. During this period, Agriculture’s share dropped from a high of 63 percent to 44 percent in fiscal year 2007 as funding was reduced or eliminated in three program areas: export credit guarantee programs, Public Law 480 Title I food assistance, and the Market Access Program. The budget authorities related to trade promotion at the Departments of Commerce and State, while remaining relatively steady in dollar terms, rose from 15 to 27 percent and 6 to 15 percent of the total, respectively. Ex-Im Bank’s share of the total dropped sharply, from 38 percent in fiscal year 2002 to 4 percent in fiscal year 2004, and has remained at less than 10 percent since then.

It is difficult to determine the effect of these budgetary trends on the availability of trade promotion resources. For example:

Although Commerce’s trade promotion budget authority has not changed significantly, its trade promotion activities may nonetheless be affected by increases in related costs. According to Commerce officials, the Commerce budget data include budget authority for security at overseas offices. The officials provided us with information showing that security costs for these offices have risen by 8 percent since fiscal year 2005, leaving fewer resources available for trade promotion activities.

The decline in Ex-Im Bank’s budget authority, shown in figure 1, did not reduce its ability to provide export financing.
OMB changed its method for determining expected loss rates for U.S. international credits; the change took effect after fiscal year 2002 and contributed to lower Ex-Im Bank projections of subsidy costs and budget needs. Also, Ex-Im Bank had accumulated carryovers from prior years, which resulted in its requesting zero program appropriations beyond administrative expenses in fiscal year 2004 and program appropriations of less than $100 million in fiscal years 2005-2007.

In addition, the reasons for the national export strategies’ inclusion or exclusion of agencies’ budget authority as related to trade promotion are not always apparent. For example, until fiscal year 2007, Agriculture’s trade promotion budget authority reported in the strategies included Public Law 480 Title I food assistance, although the primary objective of this program is to assist developing countries in obtaining needed resources rather than to promote trade. Similarly, since 2000, the strategies have excluded the U.S. Agency for International Development (USAID) from the program budget authority table, stating that the agency’s activities “support trade promotion indirectly through broad economic growth and reform, unlike other activities that more directly fund trade finance or promotion.” However, the 2002 national export strategy included USAID in a letter from 10 key TPCC agencies, and portions of the 2002, 2003, and 2004 strategies were devoted to a possible joint USAID–Ex-Im Bank program to support capital projects in developing countries. In addition, the strategies’ budget tables include agencies such as the U.S. Trade Representative, Treasury, and Labor, which do not directly fund trade promotion activities.

Coordination among TPCC agencies has improved, although there is still room for improvement. However, the national export strategies continue to provide little information on which to base future efforts to establish consistent, shared, measurable goals and align resources in agencies’ export promotion programs to focus on results.

TPCC member agencies have pursued a number of efforts to improve coordination of trade promotion activities, responding to recommendations in the TPCC’s 2002 national export strategy. These recommendations resulted from a 2001-2002 survey of more than 3,000 small businesses and other research commissioned by the secretariat. According to agency officials, these efforts included several successful initiatives, such as joint training and other activities that leverage resources from other TPCC agencies. For example:

Interagency training. Since 2003, the TPCC has sponsored three annual interagency training sessions, attended by a total of 297 people. The sessions have included participants and presenters from a variety of member agencies, including Agriculture, Commerce, Ex-Im Bank, OPIC, SBA, State, the U.S. Trade and Development Agency, and USAID, and according to agency officials have fostered greater cooperation through the sharing of information. In one instance, a State Department official recounted how a training session led by a colleague resulted in collaboration between Commerce and State that (1) improved service to companies seeking U.S. visas for their foreign partners and (2) produced a list of State Foreign Service contacts who could assist exporters in important markets such as Africa, where many countries have no Foreign Commercial Service presence.

Joint outreach. Ex-Im Bank and OPIC have partnered with SBA to improve outreach and service to small- and medium-sized businesses.
In May 2002, Ex-Im Bank and SBA signed a memorandum of cooperation to increase small businesses’ awareness and use of each agency’s financing products, and the agencies share the same application form for their respective products. In September 2004, the two agencies signed a memorandum of understanding that provides for Ex-Im Bank to co-guarantee loans to small-business exporters when SBA has already agreed to guarantee its established maximum amount. In addition, on April 5, 2006, Ex-Im Bank’s acting chairman and president announced a new position of Senior Vice President for Small Business and the establishment of a Small Business Committee to coordinate, evaluate, and enhance the agency’s services to small businesses. In September 2002, OPIC and SBA formally integrated their efforts to promote the expansion of U.S. small businesses into emerging markets; as part of this effort, each agency is to provide training on its programs to the other’s personnel. OPIC and SBA have signed a cooperative agreement, SBA has detailed staff to OPIC, and OPIC has streamlined its approval process for small business clients.

Overseas support. In January 2005, State and the Commerce Department’s Foreign Commercial Service completed a strategic plan to provide coordinated support at embassies with no Foreign Commercial Service staff. According to State officials, as a result of this plan, 75 percent of State-funded export promotion activities at these embassies are now tied directly to regional Foreign Commercial Service offices, up from 25 percent in 2002. Further, we were told that more of these embassies are submitting the commercial guide for their country via a new, Web-based process to Commerce’s market research database, which is accessible to the public via www.Export.gov. State has also linked the embassies electronically to domestic U.S. Export Assistance Centers and recently developed an electronic commercial diplomacy toolbox for its Foreign Service officers in the field that helps them assist U.S. firms seeking to export. The toolbox provides links to joint State–Commerce strategic planning, interagency training, and Commerce and other TPCC agency Web sites that provide guidance on export-related issues such as business travel and export controls. According to State officials, these resources are primarily used by small businesses, which often lack the means to obtain such information on their own.

However, despite this progress, coordination problems persist, according to member agency officials. For example, State Department officials said that concurrent realignments of Agriculture and Commerce overseas staff are not being coordinated, a situation that could lead to gaps in country coverage and thereby adversely affect U.S. commercial interests. State officials also described instances of poor coordination between some regional Foreign Commercial Service offices and nearby embassies that lack Commercial Service staff. They said that Commerce is working with State to improve this situation. For example, Commerce’s regional office in Johannesburg will host a training program for the 11 embassies in southern Africa without Commercial Service officers. Another agency official told us that staff from at least two TPCC agencies were not aware of how certain other TPCC initiatives or agencies support their own agencies’ trade promotion efforts.
In addition, according to Agriculture officials, Commerce field staff have not adhered to the roles, responsibilities, and procedures outlined in a January 2001 agreement between the two agencies and reiterated in a March 2004 cable. As a result, the officials told us, a joint Agriculture-Commerce effort to help U.S. companies export agricultural goods—described in the 2004 national export strategy as a “huge success”—was never fully implemented. Commerce officials acknowledged these issues but told us that coordination between the two agencies had improved. Citing a June 2005 Commerce report, the Agriculture officials noted that both agencies have formally agreed to increase their joint cooperation.

As we found in 2002, the TPCC’s annual strategies provide limited information regarding agencies’ export promotion goals and progress. In addition, the strategies do not review agencies’ budget allocations or represent the goals of some key agencies, and the strategies’ focus varies yearly. Consequently, the TPCC’s ability to provide in the strategy a plan for coordinating federal trade promotion activities, as directed by Congress, is constrained.

As in 2002, the national export strategies do not identify member agencies’ goals or assess their progress toward the TPCC’s broad trade promotion priorities. According to agency officials, the TPCC secretariat does not systematically collect or compare agency plans and performance measures to define agency goals and assess progress. In addition, the agencies have not articulated mutual, measurable goals for trade promotion. Some member agency officials noted that their agencies’ plans and performance evaluations are prepared independently of the TPCC. Although several agencies mention the TPCC in their strategic plans, each agency, as we noted in our 2002 report, generally measures the results of its export promotion activities according to the extent to which its own mandate emphasizes export promotion.

Despite the TPCC’s mandate to propose an annual unified trade promotion budget, the TPCC’s annual strategy does not review member agency budgets in relation to their goals, and the agencies do not adjust their budgets to reflect the national export strategy. As we reported in 2002, the TPCC does not have specific authority to direct member agencies’ allocation of their resources. Agency representatives told us, as they had during our 2002 review, that they would resist any effort by the TPCC to review their budgets; they said that each agency has its own statutory requirements and that TPCC agencies’ budgets are appropriated by different congressional subcommittees. The agencies submit their proposed budgets separately to OMB. TPCC officials told us that the TPCC does not recommend budget priorities to OMB, a practice that, as we noted in our 2002 report, was last performed in 2000.

The national export strategies do not represent the goals of some key member agencies. For example, although Agriculture’s Foreign Agricultural Service has accounted for about half of TPCC member agencies’ combined budget authority over the past 4 years, the 2005 strategy contains only one notable reference to this agency. In addition, the 2005 strategy identifies Brazil as a “spotlight” market, although Agriculture does not consider it a high-priority market because it competes with the United States in exporting agricultural products.
Further, the 2005 strategy included very little of the information that the Foreign Agricultural Service provided to the TPCC secretariat in commenting on a draft of the strategy. However, Agriculture officials told us that a draft of the 2006 strategy, due out in May, would likely incorporate more information about their agency. Regarding other agencies, recent strategies have focused on China, a market in which USAID and OPIC do little or no business.

The focus of the national export strategies continues to change from year to year with little evaluation of previous efforts’ effectiveness. For example, although TPCC officials noted that the national export strategies have consistently focused on China, the strategies describe a series of new China-related initiatives without following up on the outcome of specific activities from one year to the next. The exception to this pattern was a 3-year focus on recommendations from the TPCC’s survey and other client research, from 2002 through 2004; however, new areas of focus continued to be introduced. For example, the 2003 strategy introduced capacity building, Russia, and transportation security. The 2004 strategy highlighted China and free trade agreements, as well as coordination in crisis regions (primarily Iraq and Afghanistan), which had resulted from the survey and other information gathering and had been briefly raised in the 2002 and 2003 strategies. The 2005 strategy covered free trade agreements, China, and six “growth markets” (Japan, South Korea, India, Brazil, Russia, and the European Union). Some member agency officials commented on the ad hoc nature of the national export strategies and the lack of staff-level meetings focused on specific issues.

Although available data suggest that TPCC member agencies have involved small and medium-sized businesses in trade promotion activities, a lack of systematically collected information makes it difficult to assess progress or trends. First, member agencies measure small and medium-sized businesses’ participation in trade promotion activities to varying extents and using various indicators. For example:

Department of Commerce. U.S. and Foreign Commercial Service officials stated that they have recorded about 10,000 transaction “successes” a year over the past 5 years involving small and medium-sized business export sales and that they are currently updating their client management system to enable them to observe a transaction for a given client as it develops over time, progressing through interactions with other TPCC member agencies such as Ex-Im Bank and SBA. The U.S. and Foreign Commercial Service also runs an Advocacy Center, which helps U.S. companies compete for specific foreign sales contracts on a case-by-case basis. According to information posted on the U.S. government’s export-related website (www.Export.gov), the center has helped 28 small and medium-sized enterprises win contracts valued at a total of $637 million since June 2002. Commerce’s Office of Trade and Industry Information within its Manufacturing and Services unit compiles detailed statistics on small and medium-sized business exports.

Ex-Im Bank. Ex-Im Bank tracks small businesses’ participation in its programs because Congress requires it to make available a certain percentage of its export financing to small businesses.
For fiscal years 2000-2005, Ex-Im Bank reported that slightly less than 20 percent of the value of its financing directly benefited small businesses, and in recent years it has reported that about 85 percent of its authorized transactions directly benefited these clients. However, we recently found flaws in the bank's data and methodology, including shortcomings in its system for estimating about one-third of its small business financing annually and conflicting records for the same companies.

Department of Agriculture. The Foreign Agricultural Service's Market Development Program tracks a variety of indicators related to small businesses, including the number of its activities that support small businesses, the number of small businesses making a first export sale, the number of small businesses with increased sales of 20 percent or more, and the value of small companies' sales.

OPIC. OPIC measures the number of small business projects that result from its outreach through a Small Business Center. OPIC's 2003-2008 strategic goals include supporting these clients and reducing to 60 days the time it takes to process their applications for OPIC assistance. OPIC officials stated that the center targets small businesses with annual revenues of less than $35 million and that since the center's establishment in 2002, the share of OPIC transactions involving such companies increased from 67 percent in 2002 to 80 percent in 2005.

Department of State. State does not presently collect data on small and medium-sized businesses' involvement in trade promotion activities. However, according to a State Department official who works with small businesses, the agency has recently initiated a system to track commercial "success stories." The official said the system, which State anticipates will be operational by June 2006, will track requests for help with commercial transactions and will also include data on the requesting companies—such as their number of employees and annual sales—that will enable State to identify small and medium-sized businesses.

Finally, although 2006 TPCC agency goals include increasing the number of small and medium-sized businesses that use member agency programs, the TPCC does not collect information from member agencies on these businesses' participation in agency programs and activities, according to TPCC officials. Moreover, the national export strategies provide anecdotal, rather than systematic, reporting on small and medium-sized business participation in trade promotion activities. Although the national export strategies describe activities such as a cooperative agreement between SBA and OPIC, they provide no comprehensive summary of small and medium-sized business participation in all member agency activities. Further, the strategies do not assess agency reporting on small and medium-sized enterprise participation during the current year or identify trends in such participation, making it difficult to assess progress across agencies. Our review of the TPCC's efforts shows it has achieved some important progress. For example, the TPCC has pursued a number of initiatives to improve agency coordination of trade promotion activities. Since our most recent report in 2002, the TPCC has completed and implemented a number of changes as a result of its private sector outreach efforts, including joint training and other activities that leverage the resources of other TPCC member agencies.
Although coordination challenges continue, there appears to be more discussion among TPCC member agencies and a higher level of awareness of the activities of the other agencies. This is a noteworthy improvement over the situation that existed in the early 1990s, when GAO began to evaluate the federal government's trade promotion efforts. Despite this progress, the TPCC continues to face challenges in its ability to achieve other aspects of its mission of coordinating federal export promotion activities. For example, the TPCC's annual export strategies do not review or assess agency goals or activities. Moreover, despite its mandate to propose a unified federal trade promotion budget, the TPCC continues to have little influence over agencies' allocation of resources for trade promotion. GAO has consistently reported on the TPCC's lack of progress toward these fundamental objectives. Based on our long record of oversight of the TPCC, we believe that it can continue to make improvements in agency coordination as well as lead future outreach efforts to the private sector. In addition, we believe that the TPCC can do a better job of tracking small and medium-sized businesses' participation in a consistent manner. However, we question whether the TPCC's current structure will allow it to overcome the challenges associated with assessing agency goals and influencing the allocation of resources. We also question whether the TPCC's current move into the office of the Assistant Secretary for Trade Promotion within the Commerce Department will help it overcome these challenges. As we noted in previous reviews of the TPCC, sustained high-level administration involvement is necessary for the TPCC to achieve its fundamental objectives. Mr. Chairman, this concludes my prepared remarks. I would be happy to address any questions that you may have.
In 1992, Congress established the Trade Promotion Coordinating Committee (TPCC) to provide a unifying interagency framework to coordinate U.S. export promotion activities and to develop a governmentwide strategic plan. TPCC member agencies' activities include providing training, market information, advocacy, trade finance and other services to U.S. companies, especially small- and medium-sized businesses. These U.S. government agencies together have $1.5 billion in budget authority for export promotion programs and activities for fiscal year 2006. Each year, the TPCC submits to Congress a mandated national export strategy, reporting member agencies' activities and trade promotion budget authority and establishing broad priorities. The TPCC secretariat, which has no budget of its own, is housed in the Commerce Department, which chairs the committee. In this testimony, which updates findings from a 2002 report, GAO (1) reports on trends in TPCC member agencies' budget authority; (2) assesses TPCC's coordination of trade promotion and its national export strategies; and (3) discusses small- and medium-sized businesses' participation in trade promotion activities. TPCC's national export strategies for fiscal years 2002-2006 show that agencies' trade promotion-related budget authority dropped by about one third. This resulted mainly from budget changes at the Department of Agriculture and Ex-Im Bank, which account for more than half of U.S. trade promotion budget authority. At the same time, budget authority for two other key agencies, the Departments of Commerce and State, remained relatively steady. However, the effect of these trends on the agencies' trade promotion activities is unclear. For example, the decline in Ex-Im Bank's budget authority did not reduce its ability to provide export financing. TPCC member agencies have taken several steps, such as participating in interagency training and outreach to exporters, to improve coordination of trade promotion efforts. However, coordination challenges persist, for example, among the Departments of Commerce, State, and Agriculture regarding the allocation of overseas staff for trade promotion activities. In addition, as GAO found in 2002, the annual national export strategies have several limitations that affect the TPCC's ability to coordinate trade promotion activities. For example, the strategies do not identify or measure agencies' progress toward mutual goals or review their budget allocations. In addition, they focus on different topics each year without evaluating progress in addressing previous years' topics. GAO has made similar comments in several prior reviews of the TPCC. A lack of systematic information makes it difficult to assess progress or trends in small and medium-sized businesses' participation in trade promotion activities across agencies. TPCC agencies track small-business participation in a variety of ways. The national export strategies provide only anecdotal information on these businesses' participation in trade promotion activities.
The role of women in the military has evolved from the Women's Armed Services Integration Act of 1948—which afforded women the opportunity to serve in the military services—to January 2013, when the Secretary of Defense and the Chairman of the Joint Chiefs of Staff directed the services to open closed units and positions to women by January 1, 2016. Figure 1 provides details about changes in military service opportunities for women. In January 1994, the Secretary of Defense issued the Direct Ground Combat Definition and Assignment Rule, which allowed women to be assigned to almost all positions, but excluded women from assignment to units below the brigade level whose primary mission was to engage in direct ground combat. The memorandum establishing the 1994 rule also permitted restrictions on assignment of women in four other instances where: (1) the service secretary attests that the costs of appropriate berthing and privacy arrangements are prohibitive; (2) the units and positions are doctrinally required to physically collocate and remain with direct ground-combat units that are closed to women; (3) the units are engaged in long-range reconnaissance operations and special operations forces missions; and (4) job-related physical requirements would necessarily exclude the vast majority of women service members. The memorandum also permitted the services to propose further restrictions on the assignment of women, together with justification for those proposed restrictions. In 2012, DOD issued a report to Congress reviewing the laws, policies, and regulations restricting the service of female members in the armed forces. In this report, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff rescinded the co-location assignment restriction that had allowed the military services to prohibit the assignment of women to units and positions physically collocated with direct ground-combat units. The report also contained notifications to Congress of the department's intent to open positions and occupations that had been closed under this restriction. Specifically, the Army opened 6 enlisted occupations (9,925 positions) and 3,214 positions in 80 units that had been closed to women based on the co-location restriction. Additionally, the Army, Marine Corps, and Navy requested exceptions to policy, and DOD notified Congress of its intent to open positions and occupations at the battalion level within active-duty direct combat units to inform future recommendations on other positions with the potential to be opened in the future. In its report, DOD explained that the experience gained by assigning women to these positions would help the department assess the suitability and relevance of the direct ground-combat prohibition and inform future policy decisions. In July 2013, DOD issued a subsequent report to Congress that discussed the department's implementation of these February 2012 policy changes, the services' progress regarding elimination of gender-restrictive policy, and the rescission of the ground-combat assignment rule. This report also included the total number of positions open and closed to women in each of the military services. At the time, the Navy and the Air Force had the most positions open to women (91 and 99 percent, respectively), while the Army and the Marine Corps had fewer open positions (68 and 69 percent, respectively). SOCOM also stated that in July 2013, around 46 percent of its positions were open to women.
Figure 2 generally illustrates the process used to implement the Secretary's direction to open positions and occupations that have been closed to women. The military services traditionally have established two types of physical performance requirements. First, the military services have established general physical fitness standards to promote overall health and physical fitness among military personnel. These fitness standards apply to active and reserve servicemembers regardless of occupation and are not required by statute to be gender neutral. These standards are not intended to ensure performance in a particular occupation. Second, the services set job-specific physical performance standards to ensure that servicemembers are capable of performing the particular jobs to which they have been assigned. These job-specific standards refer to occupation-specific criteria that applicants must meet to enter or remain in a particular career field or specialty, and by statute these occupational performance standards must be gender neutral. The military services and SOCOM have opened selected positions and occupations to women since January 2013, and are in the process of determining whether to open the remaining direct ground-combat positions and occupations. As an alternative to opening a position or occupation, the Secretary of Defense permitted the services to recommend an exception to policy to keep positions or occupations closed to women; to date, the Navy is the only service to have recommended an exception to policy. The services are also conducting studies to identify integration challenges and ways to mitigate these challenges in areas such as unit cohesion, women's health, equipment, facilities (e.g., separate restrooms and sleeping quarters), women's interest in serving in ground-combat positions, and international issues. We also examined the issue of sexual assault and harassment in the integration process. In response to the January 2013 memorandum, most of the services—except for the Air Force—and SOCOM have opened selected positions and occupations, and the openings to date largely involve closed positions in open occupations. The memorandum directed the military departments to submit detailed plans by May 15, 2013, to implement this direction to open closed positions to women, and required the implementation plans to be consistent with a set of guiding principles, goals, and milestones for the integration process. The memorandum also required the military departments to submit quarterly progress reports on implementation. All four services and SOCOM developed implementation plans, including goals and milestones, which were subsequently reviewed by the Secretary of Defense in May 2013. The services and SOCOM also provided quarterly progress reports on their efforts to open closed positions and occupations to women, starting with the third quarter of fiscal year 2013. In July 2014, OUSD(P&R) granted a request by the Joint Chiefs to change the progress report cycle from quarterly to biannual. However, an OUSD(P&R) official stated that the Chairman of the Joint Chiefs of Staff continued to receive quarterly updates, and the Under Secretary of Defense for Personnel and Readiness continued to provide the Secretary of Defense with verbal quarterly updates. As of March 2015, the services have opened positions and occupations to women as shown in table 1. The services are working on integration plans for these positions and occupations that have been opened to women.
For example, the Army is actively recruiting women to fill recently opened positions across the force, in order to place the best qualified soldiers, regardless of gender, in those positions. Further, the Navy is expanding assignment opportunities for enlisted women to specific submarine classes and is participating in surveys and questionnaires to assess integration success and gather lessons learned. At the time of this report, the services and SOCOM were in the process of determining whether to open the remaining closed positions and occupations, and the timeframe for many of these recommendations was postponed until September 2015. As of March 2015, the positions and occupations that remain closed to women are shown in table 2. As of April 2015, all of the military services and SOCOM were working on efforts, such as the standards validation studies discussed below, to inform their recommendations on whether to open the remaining closed positions and occupations to women. The services' implementation plans included timelines for making recommendations on whether to open positions and occupations to women or to request exceptions to keep positions or occupations closed. Initially, these timelines were established independently by each service, and different services were scheduled to make recommendations about similar occupations at different times. For example, the Army was scheduled to make its recommendation about armor occupations in July 2015, while the Marine Corps was scheduled to make its recommendations about armor occupations in late 2014 and early 2015. Subsequently, service officials stated that some of those recommendation timeframes have shifted later to synchronize with the Marine Corps recommendations that are now scheduled to occur in late September and early October 2015, as shown in figure 3. One reason provided by Air Force officials to support the timeline shifts was to consider the impacts of another service's recommendation to open a closed occupation or position, such as when there is no viable career path in an occupation because the majority of positions serve with another service's closed unit. Another reason expressed by Army officials was that the service heads recognize the need for coordination when making recommendations about similar occupations such as infantry. An OUSD(P&R) official explained that there has always been a desire to align the recommendation timelines, and that when service timelines started to shift in 2014, the topic was extensively discussed in various meetings. As an alternative to opening a position or occupation, the Secretary of Defense has permitted the services to recommend that the Chairman of the Joint Chiefs of Staff and Secretary of Defense approve an exception to policy to keep positions or occupations closed to women. As of May 2015, the Secretary of the Navy was the only military department Secretary to have recommended approval of an exception to policy. The Secretary of the Navy has recommended keeping specific positions closed to the assignment of enlisted women on three classes of ships (frigates, mine countermeasure ships, and patrol coastal craft) that are scheduled to be decommissioned. The rationale for keeping these ship platforms closed to women is in part that they do not have appropriate berthing and that planned decommissioning schedules would mean that modifications would not be a judicious use of resources.
Navy officials stated that, while these closed platforms would cause some positions to remain closed to enlisted women, they would not close any occupations to women, as there are alternative positions within those occupations on different platforms that are open to women and that provide equal professional opportunity. As of May 2015, none of the other services had requested an exception to keep positions or occupations closed to women or had stated that they plan to request an exception, but the services have all retained the right to request an exception later in the process if they believe there are conditions under which it would be warranted. The services and SOCOM are conducting studies focused on identifying potential integration challenges and developing ways to mitigate these challenges, as shown in figure 4. The studies address issues such as unit cohesion, women's health, equipment, facilities (e.g., separate restrooms and sleeping quarters), women's interest in serving in ground-combat positions, and international issues. Most of these studies are ongoing, so it is too early to determine the extent to which the services and SOCOM will follow their planned methodologies for identifying challenges and mitigation strategies, or how the services will implement the findings of the studies. See appendix II for a listing of the studies that each service and SOCOM are conducting in their efforts to integrate women. A common challenge cited in integrating women into previously closed positions and occupations is the potential impact on unit cohesion. Some services are performing studies examining various elements that contribute to unit cohesion. For example, SOCOM, the Army, and the Marine Corps are conducting studies to gauge attitudes toward working with women in integrated units. SOCOM is conducting three studies related to unit cohesion, and SOCOM officials stated that the goal of these studies is to identify potential obstacles and steps to mitigate them, in an effort to increase the chances of successfully integrating women. For example, SOCOM tasked the RAND Corporation to administer a survey to personnel in closed special operations occupations to discover their attitudes on the integration of women, including barriers to successful integration and actions to increase the likelihood of success. SOCOM officials stated that initial steps to address concerns raised in the surveys included the Commander of SOCOM holding discussions with his subordinate commanders to provide them information to pass on to their personnel, as well as sending an email to all SOCOM personnel to educate the force about what SOCOM is doing to validate the standards for special operations positions and why, and to explain the Joint Staff's guiding principles that govern the integration effort. The first two of the three studies have been completed, and the RAND study is expected to be completed by July 2015. The Army Research Institute is conducting activities such as surveys, interviews, and focus groups with male and female soldiers assigned to units with newly opened positions and occupations. According to an Army Research Institute official, the institute found that opinions expressed by male soldiers in units assessed at different times since 2012 were less negative a year after female soldiers' integration, and showed a general shift to more neutral and positive perceptions.
The official stated that information from these activities is regularly provided to the Army. These activities will likely be conducted until 2018 as additional occupations are opened, according to an Army Research Institute official. As part of its efforts to identify the potential impacts of integration on unit performance, unit cohesion, and unit members' individual interactions, the Marine Corps also is conducting a study through the RAND Corporation. The tasks in this study include a review of literature on the integration of women in ground combat and other physically demanding occupations, analysis to identify issues most likely to arise with gender integration of Marine Corps infantry as well as initiatives that might be taken to address them, and development of an approach for monitoring implementation of gender integration of the Marine Corps infantry. This study was scheduled to be completed in March 2015. The Marine Corps, the Army, and the Air Force are assessing specific health effects on women when operating in a combat environment. Service officials stated that as women enter direct combat positions, the military will need to make accommodations to address specific health and medical concerns in order to prevent health problems and maintain military readiness. For example, the Marine Corps is studying injury prevention and performance enhancement for its training program, including identifying risk factors for injury. This study is scheduled to be completed in August 2015. In addition, according to an Army official, the Army has created a group to review research and data on physical and mental health issues, load carriage, attrition, and performance. Further, the Air Force verified the availability of appropriate medical and psychological support at training locations, evaluated the medical retention standards for its closed occupations, and determined that the existing medical standards were appropriate for both male and female airmen. According to officials from the Defense Advisory Committee on Women in the Services, proper combat equipment is essential to overall military readiness; women suffer injuries and do not perform up to their full potential when wearing ill-fitting equipment and combat gear designed for men's bodies. The Marine Corps is conducting a study to identify how adapting equipment design, gear weight, physical fitness composition, or standard operating procedures may support successful completion of required tasks. Marine Corps officials explained that these adaptations could potentially remove impediments to success and thereby enable successful integration. For example, the study may be able to identify alternative methods for loading rounds in armored vehicles so that the task does not require as much upper-body strength. This study is scheduled to be completed by June 30, 2015. Further, according to an Army official, the Army has recently redesigned protective gear items and uniforms with specific fits for female soldiers. In addition, the Air Force has identified training locations that will need female-sized equipment and other gear such as footgear, clothing, and swimsuits. In June 2015, the Under Secretary of Defense for Acquisition, Technology and Logistics issued guidance directing the Secretaries of the military departments to ensure that combat equipment for female servicemembers is properly designed and fitted, and meets standards for wear and survivability.
These studies are not being conducted by SOCOM, but instead by the services' special operations components: Army Special Operations Command, Naval Special Warfare Command, Marine Corps Special Operations Command, and Air Force Special Operations Command. Further, all four of the special operations components conducted assessments that determined whether any facilities changes—such as separate restrooms and sleeping quarters for women—were needed to integrate women. All services are studying the propensity (i.e., interest or tendency) of women to serve in selected closed positions and occupations. Officials from the services noted concerns that large numbers of women may not be interested in serving in currently closed ground-combat positions and occupations. Officials from all of the services stated that the integration of women into previously closed positions and occupations would be an asset in finding the best person for the job, and that outreach to and recruitment of women for the officer corps is critical to ensuring that our nation's military has the strongest possible leaders. For example, the Marine Corps conducted a study using surveys, market research, available literature, and other information to determine the interest of men and women in both the Marine Corps overall and in ground-combat specialties, to better understand potential changes in the recruiting market due to the opening of ground-combat arms specialties and units. This study was completed in November 2014. The Army has joined other services in creating advertising campaigns to increase women's interest in selected positions and occupations. SOCOM, the Marine Corps, and the Army are conducting or have conducted international studies analyzing various integration issues. Army Special Operations Command is studying the roles of women to determine how local forces and communities may react to female special forces soldiers. One of the tasks of this study is to provide insights on how the roles of women in different regions and countries may affect the response of local forces and communities to females as Army special forces soldiers. This study is scheduled to be completed before SOCOM is expected to submit its recommendations to the department in September 2015. The Marine Corps also worked with RAND to study other countries with gender-integrated militaries and the practices those countries used for their integration processes. This study was completed in March 2015. According to an Army official, the Army has worked with the U.S. Army Training and Doctrine Command on international comparisons with other countries with integrated armies. This effort was part of the Army's gender-integration study, which is scheduled to be completed in September 2015. In addition to the challenges reviewed by the services in their studies, we examined the issue of sexual assault and harassment in the integration process. This issue was raised in materials from the Defense Advisory Committee on Women in the Services as a continuing concern related to tracking servicemembers who committed a sex-related offense. According to officials from all services and DOD's Sexual Assault and Prevention Response Office—which has authority, accountability, and oversight of the department's sexual assault prevention and response program—sexual assault and harassment are not inhibitors to the integration of women into previously closed positions and occupations.
Officials from all of the services consistently noted that prevention of sexual assault and harassment is a department-wide effort and is not a specific focus of integration efforts. They noted that they consider it to be more of a leadership challenge than an integration challenge. DOD officials said that sexual assault and harassment are not a function of integration and are not specific to women; they affect both men and women, and exist in male-only units. In March 2015, we reported that, based on survey data, an estimated 9,000 to 13,000 male active-duty servicemembers were sexually assaulted in 2014, and we also estimated that a much lower percentage of men than women report their sexual assaults. The military services and SOCOM are working to address statutory requirements and Joint Staff guidance for validating physically demanding occupational standards by initiating several studies. We identified five elements that the services and SOCOM must address as part of the standards validation process. We compared the five elements to the services' and SOCOM's planned steps and methodologies in their studies and determined that their study plans contained steps that, if carried out as planned, would potentially address all five elements, as summarized in figure 5. However, the studies had not yet been completed at the time of our review; therefore, we could not assess the extent to which the studies will follow the planned steps and methodologies, or report how results of the studies will be implemented. See appendix II for a complete listing of the planned studies that each service and SOCOM are conducting in their efforts to integrate women. The statutory requirements for validating gender-neutral occupational standards direct that any military career designator open to both men and women may not have different standards on the basis of gender. The statute further states that for military career designators where specific physical requirements for muscular strength and endurance and cardiovascular capacity are essential to the performance of duties, those requirements must be applied on a gender-neutral basis. To address this requirement, according to service and SOCOM officials and their respective plans, officials will develop one set of occupational standards for each position that will be applicable to both men and women. One example of this type of effort is the Marine Corps' Ground Combat Element Integrated Task Force, which is to provide the Marine Corps the opportunity to review and refine gender-neutral occupational standards as it evaluates the performance of men and women in integrated units. All of the services' efforts are to be completed by the end of September 2015. By statute, the Secretary of Defense must ensure that the gender-neutral occupational standards accurately predict performance of the actual, regular, and recurring job tasks of a military occupation, and are applied equitably to measure individual capabilities. The services' and SOCOM's plans for studies to validate operationally relevant and gender-neutral occupational standards involve identifying the physically demanding tasks required for the specific occupation under study. To address this requirement, all of the services' and SOCOM's plans that we reviewed include steps to identify the physically demanding tasks required for each occupation.
For example, the Army and the Air Force have undertaken detailed job analyses to identify and define the critical physically demanding tasks and the physical abilities needed to perform them. By observing performance of the tasks and surveying subject-matter experts to confirm the specific tasks required for each occupation, the planned approach intends to confirm that the appropriate tasks have been identified and described. Additionally, the Marine Corps' Ground Combat Element Integrated Task Force plans to quantify tasks, conditions, and standards for job tasks that have previously been qualitative. In March 2015, the Under Secretary of Defense for Personnel and Readiness provided implementing guidance for this statutory requirement, directing the Secretaries of each military department to provide a written report regarding their validation of individual occupational standards by September 30, 2015, and to require each military department's Inspector General to implement a compliance inspection program to assess whether the services' occupational standards and implementing methodologies comply with statutory requirements. Joint Staff guidance directs the services to validate their occupational performance standards. One of the Chairman's guiding principles stated that the services must validate occupational performance standards, both physical and mental, for all military occupational specialties, specifically those that remain closed to women. To address this requirement, all of the services and SOCOM are conducting studies to validate the occupational standards for the positions that have been closed to women. The Army's Training and Doctrine Command and Research Institute of Environmental Medicine are planning to complete by September 2015 the development and validation of gender-neutral occupational testing procedures for entry into the seven military occupational specialties that are closed to women. The Marine Corps opened certain entry-level training schools that previously were closed to women, such as Infantry Training Battalion and Infantry Officer Training, to obtain data on the physical and cognitive/academic demands on female volunteers in these schools. According to Marine Corps officials, this effort will be completed in June 2015. Another Marine Corps effort, projected for completion in June 2015 with a final report by August 2015, is the Ground Combat Element Integrated Task Force. This effort is expected to train female Marine volunteers in skills and tasks performed in closed occupations while a dedicated research team observes their performance in both entry-level training and operational environments. Both of these efforts are expected to assist the Marine Corps in validating its standards. In July 2014, the Navy Manpower Analysis Center reviewed all Navy positions to identify those that are physically demanding, and independently reviewed and updated occupational standards for all positions to ensure gender neutrality. The Air Force Air Education and Training Command is planning to complete by July 2015 a study that analyzes and validates physical tests and standards for Battlefield Airmen career fields. A second Air Force study is expected to revalidate physical and mental occupational entry standards across specialties; this study is expected to be completed in September 2015.
The special operations components—the Army Special Operations Command, Naval Special Warfare Command, Marine Corps Special Operations Command, and Air Force Special Operations Command—are validating standards for those military occupational specialties that deploy with SOCOM; this effort is expected to be completed by the end of July 2015. The Chairman's guiding principles also require that eligibility for training and development within designated occupational fields consist of qualitative and quantifiable standards reflecting the knowledge, skills, and abilities necessary for each occupation. To address this requirement, the services and SOCOM have planned studies that aim to validate and select tests to ensure that the tests measure what they are intended to measure. Further, these plans aim to ensure that scores or results from a test can be used to select individuals for a particular occupation or task. For example, the Air Force is designing physical task simulations, such as climbing a ladder (to simulate entering and exiting a helicopter, according to officials) and lifting and holding objects at different heights (to simulate holding an item to bolt onto an airframe, according to officials). These planned measures of performance are intended to ensure that simulations are good approximations of job tasks. Air Force officials explained that the Air Force's planned approach is to use the operationally relevant, occupationally specific critical tasks it identifies as the anchor to develop appropriate physical tests and standards to evaluate the ability to successfully perform operational requirements. This study is expected to be completed by the end of fiscal year 2015. Another of the Chairman's guiding principles requires the services to take action to ensure the success of the warfighting forces by preserving unit readiness, cohesion, and morale. To address this requirement, the services and SOCOM are taking steps to ensure that the integration of women maintains readiness. For example, officials from each of the services stated that the standards-validation efforts will ensure that servicemembers in newly opened occupations are able to perform the mission and thus maintain readiness, operational capability, and combat effectiveness. By observing performance of the tasks and surveying subject-matter experts, the services and special operations components plan to confirm the specific tasks that are required for each occupation. Further, as discussed earlier, the Army, Marine Corps, and SOCOM are conducting studies to determine the potential effect of integration on unit cohesion. According to the Defense Advisory Committee on Women in the Services, a common challenge cited in integrating women into previously closed positions and occupations is the potential effect on unit cohesion. Unit cohesion contributes to strong morale and commitment to a mission. By taking steps to identify and address challenges related to unit cohesion, these services are working to ensure that readiness is maintained throughout the integration process. DOD has been tracking, monitoring, and providing oversight of the services' and SOCOM's efforts to integrate women into ground-combat positions, but has not developed plans to monitor long-term integration progress. Service requests for an exception to policy to keep positions closed to women receive attention from the Chairman of the Joint Chiefs of Staff and the Secretary of Defense.
OUSD(P&R) and Joint Staff manage the statutorily required congressional notification process, which is part of a longer process before women can begin serving in newly opened positions and occupations. To oversee the services' and SOCOM's efforts to integrate women into combat positions, OUSD(P&R) and the Chairman of the Joint Chiefs of Staff have issued guidance, commissioned studies, and facilitated coordination and communication through regular meetings among the services and SOCOM. The Secretary of Defense's memorandum rescinding the 1994 rule directed the military departments to submit implementation plans and quarterly progress reports to the Chairman of the Joint Chiefs of Staff and to the Under Secretary of Defense for Personnel and Readiness (DOD, memorandum from the Under Secretary of Defense for Personnel and Readiness, Elimination of the 1994 Direct Ground Combat Definition and Assignment Rule (Feb. 27, 2013); DOD, memorandum from the Chairman of the Joint Chiefs of Staff, Women in the Service Implementation Plan (Jan. 9, 2013)). Further, Standards for Internal Control in the Federal Government states that ongoing monitoring should be performed continually in the course of normal operations, and should include regular management and supervisory activities, separate evaluations, and policies and procedures to ensure that findings of reviews are promptly resolved. An OUSD(P&R) official stated that when reviewing these reports as part of its normal oversight process, OUSD(P&R) has discussed with the services topics such as past and upcoming milestones, recommendation timelines, and the status and progress of ongoing studies. A Joint Staff official explained that the reports are reviewed to ensure progress is being made in accordance with the services' implementation plans. After the reports are reviewed by OUSD(P&R) and Joint Staff, the Chairman provides these reports to the Secretary of Defense. Further, to help in its oversight of the services' and SOCOM's standards validation efforts, OUSD(P&R) tasked the RAND Corporation to conduct a study concerning validation of gender-neutral occupational standards within the services and SOCOM; an OUSD(P&R) official stated that the study will provide an independent analysis of the services' efforts to validate standards. The first objective of the RAND study is to describe best-practice methodologies for establishing gender-neutral standards for physically demanding jobs, tailored to address the needs of the military. The second objective is to review and evaluate the methodologies used by the services to set gender-neutral standards. In September 2013, RAND issued a draft report addressing the first objective; an OUSD(P&R) official stated that OUSD(P&R) provided a draft of this report to all of the services. In June 2015, RAND officials said that a draft of the second report, which will cover both objectives, is forthcoming. Moreover, OUSD(P&R) has regular quarterly meetings with the services to discuss topics such as developing the quarterly reports and how others are handling any issues with integration. The Joint Staff also has a meeting process with two different levels of meetings devoted solely to integration efforts: (1) a Joint Chiefs of Staff (four-star level) group, and (2) an Operations Deputies (three-star level) group. A Joint Staff official explained that these meetings provide a forum for the services to share implementation updates, discuss potential barriers, and highlight issues.
RAND's draft report identified as best practices a six-step process for establishing requirements for physically demanding occupations. These six steps are: (1) identify physical demands; (2) identify potential screening tests; (3) validate and select tests; (4) establish minimum scores; (5) implement screening; and (6) confirm tests are working as intended. The Joint Staff meetings occur at least once every quarter, but can occur more often if needed. OUSD(P&R) and Joint Staff officials stated that there are also frequent communications by other means for the same purposes. For example, in September 2014, SOCOM hosted a workshop for all of the services to review the standards validation process for special operations and the services. SOCOM officials stated that the purpose of this workshop was to ensure that all the services were using similar processes, that no one was working at cross purposes, and that there was no duplication of effort. Officials stated that a follow-up workshop was held in May 2015. The Secretary of Defense and Chairman of the Joint Chiefs of Staff directed that any recommendation for an exception to policy to keep an occupation or position closed to women must be personally approved first by the Chairman and then by the Secretary of Defense. The memorandum states that this approval authority may not be delegated. OUSD(P&R) and Joint Staff officials explained that before such requests are submitted to the Chairman, they are first reviewed for sufficiency by OUSD(P&R) and the Joint Staff. Regarding any requests for an exception to policy to keep positions closed to women, the Secretary of Defense's January 2013 memorandum states that "[e]xceptions must be narrowly tailored and based on a rigorous analysis of factual data regarding the knowledge, skills and abilities needed for the position." According to OUSD(P&R) and Joint Staff officials, if an exception to policy is requested, they will request all related supporting data and studies and review the request considering all of the factors involved. They stated that once they are satisfied that the Secretary's criteria have been met, they will present the request to the Chairman and then the Secretary to determine whether the request meets the criteria for an exception. According to OUSD(P&R) and Joint Staff officials, they made a conscious decision not to provide or develop specific additional criteria or a format for exception to policy requests—beyond the guidance in the Secretary's memorandum—because they did not want it to appear that there was a checklist for requesting an exception to policy. When OUSD(P&R) and Joint Staff first reviewed the Navy's July 2014 exception to policy request for the three different ship classes, they jointly requested additional information from the Navy, such as actual modification costs to enable the ships to provide berths for women, officer assignment information, and information on the professional development impact if women do not serve on those ships. An OUSD(P&R) official explained that OUSD(P&R) and Joint Staff worked with the Navy so the Navy would better understand the additional analytical rigor being requested, and they established a February 2015 deadline for the Navy to provide the requested information. The Navy submitted the requested information, and as of April 2015, OUSD(P&R) and Joint Staff officials said the exception to policy request was under review by the Chairman of the Joint Chiefs of Staff, who will then forward his recommendation to the Secretary of Defense.
SOCOM's status as an operational command results in a slightly different process for any exception requests for positions associated with SOCOM. SOCOM officials explained that for any positions associated with SOCOM—whether there is a recommendation to open a position or a request for an exception to policy to keep a position closed to women—two recommendations are provided. One recommendation comes from the position's parent department Secretary. The second recommendation comes from the SOCOM Commander, and since SOCOM is not a military service, that recommendation is then reviewed and approved by the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, who serves the military department secretary function for SOCOM. The officials stated that to date there have not been differences between the recommendations from the services and from SOCOM. SOCOM officials explained that there is regular collaboration with the services about recommendations, but that in the event of a difference between the two recommendations, the Secretary of Defense would make the decision. As of May 2015, OUSD(P&R) had not developed plans for a mechanism or process to monitor the services' progress in their efforts to integrate newly opened positions and occupations after January 1, 2016. As noted earlier, Standards for Internal Control in the Federal Government states that ongoing monitoring should be performed continually in the course of normal operations. An OUSD(P&R) official stated that OUSD(P&R) will continue to provide oversight as part of its normal responsibilities and make associated changes in applicable DOD guidance. Further, as discussed earlier, the Under Secretary of Defense for Personnel and Readiness issued guidance that directed each military department to report on its validation of occupational standards, and to implement an inspection program to assess whether the services' occupational standards comply with statutory requirements. According to an OUSD(P&R) official, that office does not envision undertaking a formal role in the implementation of the services' recommendations to open closed positions following January 2016. Further, a Joint Staff official stated that an initial Joint Staff meeting would be held after the January 1, 2016, announcement, and it would be determined at that time whether any additional meetings would be held. OUSD(P&R)'s requirement for the services to submit quarterly progress reports ends in January 2016, and the services have varying plans to monitor implementation after that date. For example, Army officials stated that they have developed an implementation and follow-up plan for beyond 2016 that is being reviewed by senior leaders. Marine Corps officials explained that they have long-term research that will track the integration of women, to help understand and shape institutional and individual success, while Navy officials explained that they had not developed any plans to monitor implementation after 2016 and were waiting for direction from OUSD(P&R). However, OUSD(P&R) and Joint Staff officials did not identify any plans to provide such direction for the services to monitor implementation. After the decisions have been made to open positions and occupations to women, there is a lengthy implementation process before women will be able to serve in the newly opened occupations.
Officials from all of the services and SOCOM stated that before women can serve in newly opened positions and occupations, they must first be recruited, accessioned, trained, tested, and assigned. As an example of the time involved in just one part of the implementation process, according to OUSD(P&R) officials, training timelines vary by service and by position and occupation, but typically range from less than half a year to almost two years. Without ongoing monitoring of the services' and SOCOM's implementation progress in integrating previously closed positions and occupations, it will be difficult for DOD to have visibility over the extent to which the services and SOCOM are overcoming potential obstacles to integration, and DOD will not have information for congressional decision makers about the department's integration progress. OUSD(P&R) and Joint Staff manage the congressional notification process when positions and occupations are being opened to women. By statute, the Secretary of Defense must provide Congress with a report prior to implementing any proposed changes that would result in opening or closing any category of unit or position, or military career designator, to women. As part of the process for opening formerly closed positions and occupations, an OUSD(P&R) official explained that OUSD(P&R) analyzes information provided by the military department secretaries and the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict, and verifies items such as the correct occupational specialties (if applicable), that all appropriate additional skill identifiers are included, and that the correct number of positions to be opened is reflected. OUSD(P&R) officials then create a packet to send to Congress after they prebrief the House and Senate Armed Services Committees. A Joint Staff official stated that Joint Staff also reviews the notifications and provides comments on the briefings given to Congress. In the congressional notifications of the department's intent to open positions and occupations to women, DOD is required to provide a detailed legal analysis regarding the legal implications of the proposed change with respect to the constitutionality of the application of the 1948 Military Selective Service Act to males only. This act empowers the President to require the registration of every male citizen and resident alien between the ages of 18 and 26. In 1981, the Supreme Court upheld the constitutionality of the male-only registration requirement. Currently, women serve voluntarily in the U.S. armed forces, but are not required to register with the Selective Service and would not be subject to a draft. DOD's legal analyses in the congressional notifications submitted since January 2013 have not found that opening the positions and occupations to women would affect the constitutionality of the act. Officials from OUSD(P&R), the services, and the Defense Advisory Committee on Women in the Services have stated that if DOD decides to open ground-combat occupations such as infantry, artillery, and armor, DOD's required legal analysis could raise concerns about the constitutionality of the act. DOD's legal analysis in the March 2015 congressional notification to open the Army combat engineer occupation stated that "[o]ver time, however, the opening of additional combat positions to women may further alter the factual backdrop to the Court's decision in Rostker.
Should the constitutionality of the [Military Selective Service Act] be challenged at a later date, the reasoning behind the exclusion of women from registration may need to be reexamined." An OUSD(P&R) official explained that even if DOD's legal analysis raises constitutionality concerns about the act, DOD could still submit the notification to Congress and take actions to implement opening those positions to women after completion of the waiting period. After a notification is provided to Congress, the Secretary of Defense is prohibited from implementing any proposed changes until "after the end of a period of 30 days of continuous session of Congress (excluding any day on which either house of Congress is not in session) following the date on which the report is received." This waiting period allows Congress time to take any legislative actions that it deems necessary based on the notification and report provided by DOD. However, the congressional calendar has resulted in an average time period of about 90 calendar days before planned changes could be implemented; three of the twelve congressional notifications DOD submitted between April 2013 and July 2014 have taken almost 5 months. After the waiting period has passed, OUSD(P&R) notifies the appropriate elements within a service so that they can begin implementing actions to open the positions. Since the services are allowed to take actions to open positions only after the waiting period is over, Army and Navy officials said that the delays and unpredictability associated with the waiting period pose challenges in beginning the recruiting, accession, and training processes, and aligning assignments to newly opened positions with service promotion cycles. An OUSD(P&R) official stated that in 2014 DOD was requested to provide drafting assistance on a legislative proposal for a change that would have modified the waiting period from 30 days of continuous session of Congress to 60 calendar days, but said that Congress did not act at that time. In 2012, we assessed the military necessity of the Selective Service System and examined alternatives to its current structure. We found that because of its reliance and emphasis on the All Volunteer Force, DOD had not reevaluated requirements for the Selective Service System since 1994, even though the national security environment had changed significantly since that time. The registration system in fiscal year 2014 had an annual budget of $22.9 million; DOD officials stated that the system provides a low-cost insurance policy in case a draft is ever necessary. In our 2012 report, we recommended that DOD (1) evaluate DOD's requirements for the Selective Service System in light of recent strategic guidance and report the results to Congress; and (2) establish a process of periodically reevaluating DOD's requirements for the Selective Service System in light of changing threats, operating environments, and strategic guidance. In responding to these recommendations, DOD stated in February 2013 that there was no longer an immediate military necessity for the Selective Service System, but there was a national necessity because the registration process provides the structure for mobilization that would allow the services to rapidly increase in size if needed. DOD's assessment was limited to a reevaluation of mission and military necessity for the Selective Service System.
Regarding the second recommendation, DOD had not taken action as of June 2015, but agreed that a thorough assessment of the issue was merited and should include a review of the statutes and policies surrounding the current registration process and the potential to include the registration of women. However, DOD officials stated that such a review should be part of a broader national discussion and should not be determined only by DOD. As we noted in our 2012 report, a reevaluation of the department’s personnel needs for the Selective Service System in light of current national security plans would better position Congress to make an informed decision about the necessity of the Selective Service System or any alternatives that might substitute for it. For example, a 2013 Congressional Research Service report noted that the Selective Service issue could become moot by either terminating Selective Service registration or expanding registration requirements to include women. We agree that this is a broader issue. However, DOD is the agency that would use the Selective Service System in the event a draft was needed. Thus, we continue to believe that our 2012 recommendation has merit—that DOD should take the lead in conducting an evaluation of requirements for the Selective Service System and should establish a process of periodically reevaluating those requirements in light of changing threats, operating environments, and strategic guidance.

The Secretary of Defense and the Chairman of the Joint Chiefs of Staff have ordered that women, to the extent possible, be integrated into direct ground-combat positions and occupations by January 2016. Although OUSD(P&R) and Joint Staff have been tracking, monitoring, and providing oversight of the services’ and SOCOM’s integration efforts, they do not have plans to monitor the services’ implementation progress after January 2016, as newly opened positions are integrated. Without ongoing monitoring of the services’ and SOCOM’s progress in integrating previously closed positions and occupations after January 2016, it will be difficult for DOD to have visibility over the extent to which the services and SOCOM are overcoming potential obstacles to integration, and DOD may not be able to provide current information for congressional decision makers about the department’s progress. Further, DOD has not established a process to reevaluate its requirements for the Selective Service System that could enable it to take into account these changes in expanding combat service opportunities for women. If DOD conducted a comprehensive reevaluation of the department’s personnel needs for the Selective Service System, the analysis would better position Congress to make an informed decision about the necessity of the Selective Service System or any alternatives that might substitute for it.

To help ensure successful integration of combat positions that have been opened to women, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop plans for monitoring, after January 2016, the services’ implementation of their integration efforts and progress in opening positions to women, including an approach for taking any needed action.

We provided a draft of this report to DOD for review and comment. In written comments, which are reprinted in their entirety in appendix III, DOD concurred with our recommendation.
DOD noted that it recognizes the importance of monitoring long-term implementation progress of expanding combat service opportunities for women. DOD also provided technical comments, which we have incorporated in the report where appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report assesses the Department of Defense’s (DOD) efforts to expand combat service opportunities for women. Our scope included efforts of the four military services in DOD and U.S. Special Operations Command (SOCOM) since January 2013, when the Secretary of Defense eliminated the prohibition on women serving in combat positions. We did not include the Coast Guard in our review. Table 3 contains a list of the agencies we contacted during our review.

To determine the status of service efforts to open previously closed positions and occupations and the extent to which potential challenges have been identified and mitigated, we analyzed documentation and spoke with officials to identify the positions and occupations that have been opened to women and those that remain closed, the timeframes for making decisions, whether any services planned to keep any positions or occupations closed to women, and any steps taken to identify potential challenges and develop approaches to overcome them. Specifically, we reviewed guidance provided to the services by the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Under Secretary of Defense for Personnel and Readiness to determine what the services were required to do as part of their efforts to determine whether to open closed positions and occupations to women. We determined that the services were required to, among other things, develop implementation plans, follow five guiding principles when opening positions and occupations to women, and create and submit quarterly progress reports starting in the third quarter of fiscal year 2013. At the department level, the military departments were required to submit detailed implementation plans consistent with the guiding principles and with the goals and milestones provided by the Chairman. To determine whether the services and SOCOM met these requirements, we obtained and analyzed the services’ and SOCOM’s respective implementation plans, quarterly progress reports, congressional notifications, and Navy exception to policy documents, and discussed these documents with officials from the services, SOCOM, and the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD(P&R)). To determine if the services and SOCOM met all of the implementation plan requirements, we analyzed the services’ and SOCOM’s implementation plans for required components—such as timelines and timeframes for opening positions and occupations to women, milestones for development of gender-neutral occupational standards, and consistency with the guiding principles.
To determine if the services and SOCOM met all of the quarterly progress report requirements, we analyzed the quarterly and biannual reports for required components—such as updates on assessments and progress on positions that are slated for opening or currently being evaluated, analysis of any request for an exception to policy, discussion regarding the development status of gender-neutral standards, assessments of newly opened positions, identification of any limiting factors, and recommendations for additional openings. Using the quarterly progress updates and interviews with service officials, we also analyzed how some of the timeframes changed by comparing them with those originally set in the implementation plans.

To determine what positions and occupations had been opened to women since January 2013, we analyzed the congressional notifications that DOD had provided to Congress from January 2013 through March 2015 and discussed these data with officials from the services and OUSD(P&R). To determine what positions and occupations remain closed to women and the services’ and SOCOM’s timeframes for making decisions about whether to open them, we analyzed the services’ and SOCOM’s implementation plans and quarterly and biannual progress reports and interviewed officials from the services, SOCOM, and OUSD(P&R). In addition, we requested and obtained data from the services and SOCOM on the total number of positions and occupations closed to women as of March 2015, as well as the total number of positions and occupations in each service and in SOCOM. We assessed the reliability of these data by obtaining information on how the data were collected, managed, and used through interviews with and questionnaires to relevant officials and by reviewing supporting documentation. To corroborate these data, we cross-referenced them with documentation on closed positions and occupations provided by OUSD(P&R), as well as with similar data provided by the services and SOCOM in their progress reports. These data were also verified by officials from OUSD(P&R), the services, and SOCOM. Although we found some discrepancies in some of the data regarding the number of closed positions reported by the services, which officials explained were due in part to changes in force structure, we determined that the data were sufficiently reliable to report on the general number and percentage of positions and occupations that are closed to women in each of the services and in SOCOM.

To determine any steps that DOD and the services took to identify potential challenges and develop approaches to overcome them, we analyzed service and SOCOM implementation plans, quarterly reports, and studies and study documentation. We also interviewed officials at OUSD(P&R), Joint Staff, the Defense Advisory Committee on Women in the Services, the Sexual Assault Prevention and Response Office, and within each of the services and SOCOM, and discussed potential challenges they have identified and approaches to mitigating those challenges. In inquiring about challenges, we asked about challenges in general, as well as about specific issues that we had identified in the services’ implementation plans, reports by the Defense Advisory Committee on Women in the Services, and prior GAO work as potential areas of study.
The specific issues that we asked about were the Military Selective Service Act, women’s health, sexual harassment and assault, unit cohesion, facilities issues (e.g., berthing, privacy), promotion and retention, and equipment.

To determine the extent to which service efforts to validate gender-neutral occupational standards are consistent with statutory requirements and Joint Staff guidance, we identified requirements from statutes and Joint Staff guidance and compared these requirements against service plans for studies. To identify the requirements for validating gender-neutral occupational standards, we reviewed relevant laws as well as guidance issued by the Chairman of the Joint Chiefs of Staff. Specifically, to identify statutory requirements, we reviewed the National Defense Authorization Act for Fiscal Year 1994 and the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015. To identify Joint Staff guidance, we reviewed the Chairman’s January 2013 memorandum that laid out guiding principles for the services to follow in integrating women. From these laws and guidance, we identified five specific elements the services must follow in validating their gender-neutral occupational standards. Two elements are from statutory requirements: (1) ensure gender-neutral evaluation and (2) ensure standards reflect job tasks. Three elements are from Joint Staff guidance: (1) validate performance standards, (2) ensure eligibility reflects job tasks, and (3) integrate while preserving readiness, cohesion, and morale.

To determine if the services are following these requirements and guidance, we obtained plans for studies from each of the military services and SOCOM. These plans included descriptions of scope, methodology, and timeframes for completion. We then compared these plans against the requirements we identified to determine if the planned studies met the requirements for validating gender-neutral occupational standards. Two analysts independently reviewed and assessed the plans to determine whether they contain the two statutory elements provided by the National Defense Authorization Act for Fiscal Year 1994, as amended, and the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 and the three elements provided by the Chairman’s memorandum. The analysts then compared their results to identify any disagreements and reached agreement on all items through discussion. However, these studies are not yet completed; therefore, we could not assess the extent to which the completed studies will follow the planned steps and methodologies or report how the results of the studies will be implemented. We also interviewed DOD and service officials, particularly officials involved in conducting these studies, to discuss these requirements and studies.

To determine the extent to which DOD is tracking, monitoring, and providing oversight of the military services’ plans to complete the integration of women into direct combat positions by January 2016, we obtained and analyzed documentation and discussed with officials from OUSD(P&R) and Joint Staff the nature and level of their tracking and monitoring and their review of the military services’ and SOCOM’s efforts to integrate women into combat positions. Specifically, we assessed OUSD(P&R)’s and Joint Staff’s review of the military services’ and SOCOM’s implementation plans and quarterly and biannual progress reports.
We then compared these efforts to DOD guidance and internal control standards. We also discussed with OUSD(P&R) officials a study being performed by the RAND Corporation for OUSD(P&R) as part of its oversight of the services’ and SOCOM’s efforts to validate gender-neutral occupational standards, and we met with RAND officials to discuss their work on this study. We also obtained and analyzed documentation related to the Navy’s request for an exception to policy to keep positions closed on three classes of ships, and we discussed with OUSD(P&R), Joint Staff, and Navy officials the process and criteria used to review this request. In addition, we reviewed reports by the Defense Advisory Committee on Women in the Services and the Federal Advisory Committee on Gender-Integrated Training and Related Issues to identify changes in statutes and military guidance that have increased opportunities for women to serve in combat roles over the past several decades. We determined the changes that have occurred in DOD’s workforce and environment over the past several decades and assessed the extent to which these changes could affect the utility of the Military Selective Service Act in meeting the department’s needs.

We conducted this performance audit from September 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The purposes of the services’ and SOCOM’s studies include the following:
Conduct online surveys with additional Active and National Guard Brigade Combat Teams to provide an assessment of potential issues associated with gender integration in newly opened occupations and in currently closed occupations to be opened.
Review the integration of women into Marine Corps aviation occupations and past studies on the performance of female Marines in aviation and logistics.
Determine participants’ experiences regarding gender integration in newly integrated ground-combat units, particularly with respect to potential effects on readiness, morale, and unit cohesion.
Determine the potential impact of integrating women into Marine Corps military occupational specialties, with a particular focus on the infantry.
Create a systematic and sustained injury prevention and performance enhancement training program.
Examine if changing the gender component of small, elite teams would affect team dynamics in a way that would compromise the ability of the team to meet a mission objective.
Assess the range of potential obstacles to effective integration of women into Special Operations Forces, focusing on the unit and team level.
Assess how indigenous definitions of women’s roles could affect the response of local forces and communities to female Army Special Forces soldiers.
Identify impacts, evaluate psychological and social considerations, and review gender-neutral standards that may be impacted by opening all Army Special Operations Command occupations and positions to women.
Identify impacts, evaluate psychological and social considerations, and review gender-neutral standards that may be impacted by opening all Marine Corps Forces Special Operations Command occupations and positions to women.
Identify impacts, evaluate psychological and social considerations, and review gender-neutral standards that may be impacted by opening all Naval Special Warfare occupations and positions to women.
Identify impacts, evaluate psychological and social considerations, and review gender-neutral standards that may be impacted by opening all Air Force Special Operations Command occupations and positions to women.

In 2012, DOD approved an exception to the Direct Ground Combat Assignment Rule policy for the Army, enabling the Army to assign women to enlisted and officer positions at the battalion level in open occupations in nine Brigade Combat Teams.

In addition to the contact named above, Kimberly C. Seay (Assistant Director), Thomas Beall, Margaret A. Best, Renee S. Brown, Adam Hatton, Aaron D. Karty, Amie Lesser, Richard Powelson, Michael Silver, Alexander Welsh, and Michael Willems made major contributions to this report.

Related GAO Products
Military Personnel: DOD Has Taken Steps to Meet the Health Needs of Deployed Servicewomen, but Actions Are Needed to Enhance Care for Sexual Assault Victims. GAO-13-182. Washington, D.C.: January 29, 2013.
National Security: DOD Should Reevaluate Requirements for the Selective Service System. GAO-12-623. Washington, D.C.: June 7, 2012.
Gender Issues: Trends in the Occupational Distribution of Military Women. GAO/NSIAD-99-212. Washington, D.C.: September 14, 1999.
Gender Issues: Perceptions of Readiness in Selected Units. GAO/NSIAD-99-120. Washington, D.C.: May 13, 1999.
Gender Issues: Information to Assess Servicemembers’ Perceptions of Gender Inequities Is Incomplete. GAO/NSIAD-99-27. Washington, D.C.: November 18, 1998.
Gender Issues: Improved Guidance and Oversight Are Needed to Ensure Validity and Equity of Fitness Standards. GAO/NSIAD-99-9. Washington, D.C.: November 17, 1998.
Gender Issues: Information on DOD’s Assignment Policy and Direct Ground Combat Definition. GAO/NSIAD-99-7. Washington, D.C.: October 19, 1998.
Gender Issues: Changes Would Be Needed to Expand Selective Service Registration to Women. GAO/NSIAD-98-199. Washington, D.C.: June 30, 1998.
Gender Issues: Analysis of Methodologies in Reports to the Secretaries of Defense and the Army. GAO/NSIAD-98-125. Washington, D.C.: March 16, 1998.
Selective Service: Cost and Implications of Two Alternatives to the Present System. GAO/NSIAD-97-225. Washington, D.C.: September 10, 1997.
Gender Integration in Basic Training: The Services Are Using a Variety of Approaches. GAO/T-NSIAD-97-174. Washington, D.C.: June 5, 1997.
Physically Demanding Jobs: Services Have Little Data on Ability of Personnel to Perform. GAO/NSIAD-96-169. Washington, D.C.: July 9, 1996.
Basic Training: Services Are Using a Variety of Approaches to Gender Integration. GAO/NSIAD-96-153. Washington, D.C.: June 10, 1996.
Women in the Military: Deployment in the Persian Gulf War. GAO/NSIAD-93-93. Washington, D.C.: July 13, 1993.
Women in the Military: Air Force Revises Job Availability but Entry Screening Needs Review. GAO/NSIAD-91-199. Washington, D.C.: August 30, 1991.
Women in the Military: More Military Jobs Can Be Opened Under Current Statutes. GAO/NSIAD-88-222. Washington, D.C.: September 7, 1988.
Women in the Military: Impact of Proposed Legislation to Open More Combat Support Positions and Units to Women. GAO/NSIAD-88-197BR. Washington, D.C.: July 15, 1988.
Combat Exclusion Laws for Women in the Military. GAO/T-NSIAD-88-8. Washington, D.C.: November 19, 1987.
Since September 2001, more than 300,000 women have been deployed in Iraq and Afghanistan, where more than 800 women have been wounded and more than 130 have died. A 1994 rule prohibited women from being assigned to many direct ground-combat units, but on January 24, 2013, the Secretary of Defense and the Chairman of the Joint Chiefs of Staff rescinded the rule and directed the military services to open closed positions and occupations to women by January 1, 2016. Senate Report 113-176 included a provision for GAO to review the services’ progress in opening closed positions and occupations to women.

This report assesses (1) the status of service efforts to open positions and occupations to women, including steps to identify and mitigate potential challenges; (2) the extent to which the services’ efforts to validate gender-neutral occupational standards are consistent with statutory and Joint Staff requirements; and (3) the extent to which DOD is tracking, monitoring, and providing oversight of the services’ integration plans. GAO analyzed statutes, DOD guidance, and service reports and plans, and interviewed DOD officials.

The military services and U.S. Special Operations Command (SOCOM) have opened selected positions and occupations to women since January 2013 and are determining whether to open the remaining closed positions and occupations. The services and SOCOM also are conducting studies to identify and mitigate potential integration challenges in areas such as unit cohesion, women’s health, and facilities. As of May 2015, the Secretary of the Navy was the only military department Secretary to recommend an exception to policy, seeking to keep positions closed to women on three classes of ships that are scheduled to be decommissioned, due in part to high retrofit costs.

The services and SOCOM are working to address statutory and Joint Staff requirements for validating gender-neutral occupational standards. GAO identified five elements required for standards validation. GAO compared these elements to the services’ and SOCOM’s planned methodologies and determined that their study plans contained steps that, if carried out as planned, potentially address all five elements. However, the services’ and SOCOM’s efforts are still underway; therefore, GAO could not assess the extent to which the studies will follow the planned methodologies or report how the study results will be implemented.

The Department of Defense (DOD) has been tracking, monitoring, and providing oversight of the services’ and SOCOM’s integration efforts, but it does not have plans to monitor the services’ progress after January 2016 in integrating women into newly opened positions and occupations. While DOD requires the services and SOCOM to submit quarterly progress reports, this requirement ends in January 2016. Without ongoing monitoring of integration progress, it will be difficult for DOD to help the services overcome potential obstacles. Further, when opening positions to women, DOD must analyze the implications for how it meets certain resource needs. In 2012, GAO assessed the military necessity of the Selective Service System and examined alternatives to its structure. GAO recommended in 2012 that DOD establish a process of periodically reevaluating its requirements in light of changing threats, operating environments, and strategic guidance.
DOD has not taken action to do this but agreed that a thorough assessment of the issue was merited and should include a review of the statutes and policies surrounding the registration process and the potential to include the registration of women. GAO continues to believe that DOD should establish a process of periodically reevaluating DOD’s requirements for the Selective Service System.

GAO recommends that DOD develop plans to monitor integration progress after January 2016. DOD concurred with GAO’s recommendation. GAO previously recommended that DOD establish a process of periodically reevaluating DOD’s requirements for the Selective Service System; DOD has not taken action, but GAO continues to believe the recommendation is valid.
In 2008, the most recent year for which data were available, more than 153 million cattle, sheep, hogs, and other animals ultimately destined to provide meat for human consumption were slaughtered at about 800 slaughter plants throughout the United States that engage in interstate commerce. Under federal law, meat-processing facilities that engage in interstate commerce must have federal inspectors on site. FSIS classifies plants according to size and the number of employees: large plants have 500 or more employees; small plants have from 10 to 499 employees; and very small plants have fewer than 10 employees or annual sales of less than $2.5 million. Under HMSA, FSIS inspectors are to ensure that animals are humanely treated from the moment they arrive at a plant until they are slaughtered. FSIS deploys these inspectors from 15 district offices nationwide. Figure 1 shows the states and territories in each FSIS district.

After livestock arrive at a slaughter plant, plant employees monitor their movements as they are unloaded from trucks to holding pens and eventually led into the stunning chute. Plant employees typically restrain an animal in the chute and stun it by using one of several devices—carbon dioxide gas, an electrical current, a captive bolt gun, or a gunshot—that, as required by HMSA regulations, is rapid and effective in rendering the animal insensible. (See fig. 2, which shows stunning methods consistent with HMSA.) Under HMSA, animals must be rendered insensible—that is, unable to feel pain—on the first stun before being shackled, hoisted on the bleed rail, thrown, cast, or cut. According to the expert we consulted, animals on the bleed rail that exhibit either of the following signs are considered sensible and would therefore need to be restunned: lifting the head straight up and keeping it up (the righting reflex) or vocalizing. Once the animals are considered stunned, they are shackled and hoisted onto a processing line, where their throats are cut, and they are fully bled before processing continues. HMSA exempts only ritual slaughter, such as kosher and halal slaughter, from the requirement that animals be rendered insensible on the first blow. See appendix II for a more detailed description of the movement of livestock through the plant.

FSIS has issued a variety of regulations and directives instructing FSIS inspectors on how to enforce HMSA. Overall, the regulations emphasize the minimization of “excitement and discomfort” to the animals and require that they be effectively stunned before being slaughtered. In 2003, FSIS guidance on humane handling enforcement stated that inspectors were to determine whether a humane handling incident does, or will immediately lead to, an injured animal or inhumane treatment. The guidance also specified the types of actions inspectors should take when these situations occur. Also in 2003, FSIS began providing “humane interactive knowledge exchange” scenarios as an educational tool to enhance inspectors’ understanding of appropriate enforcement actions. These eight written scenarios, available on FSIS’s Web site, provide examples of inhumane incidents and suggest enforcement actions. In 2005, the agency issued additional guidance specifying egregious humane handling situations. This guidance defines egregious as any act that is cruel to animals or a condition that is ignored and leads to the harming of animals.
The guidance provided the following examples of egregious acts: making cuts on or skinning conscious animals; excessively beating or prodding ambulatory or nonambulatory disabled animals; driving animals off semitrailers over a drop-off without providing adequate unloading facilities, so that animals fall to the ground; running equipment over animals; stunning animals and then allowing them to regain consciousness; leaving disabled livestock exposed to adverse climate conditions while awaiting disposition; or otherwise intentionally causing unnecessary pain and suffering to animals. If inspectors determine that an egregious humane handling incident has occurred, they may suspend inspection at the plant immediately, effectively shutting down the plant’s entire operation, and determine corrective actions with plant management and the district office. In 2008, after the reported inhumane handling incident at the Westland/Hallmark plant in California, FSIS expanded its guidance to include two more examples of egregious actions for which inspectors may suspend a plant: (1) multiple failed stuns, especially in the absence of corrective actions, and (2) dismemberment of live animals.

According to FSIS guidance, when FSIS inspectors observe a violation of HMSA or its implementing regulations and determine that animals are being injured or treated inhumanely, they are to take both of the following enforcement actions, which may restrict a facility’s ability to operate:

Issue a noncompliance report. This report documents the humane handling violation and the actions needed to correct the deficiency in cases where the animal may be injured or harmed. Inspectors are also directed to notify plant management when issuing a noncompliance report.

Issue a regulatory control action. Inspectors place a regulatory control action, or reject tag, on a piece of equipment or an area of the plant that was involved in harming or inhumanely treating an animal. This tag is used to alert plant management to the need to quickly respond to violations that they can readily address. The tag prohibits the use of a particular piece of equipment or area of the facility until the equipment is made acceptable to the inspector.

When inspectors determine that an egregious humane handling incident has occurred, in addition to issuing a noncompliance report and a regulatory control action, FSIS may also take the following actions:

Suspend plant operations. An on-site FSIS supervisor—known as an inspector-in-charge—can initiate an action to suspend plant operations when an inspector observes egregious abuse of the animals. The inspector must document the facts that serve as the basis of the suspension action in a written memorandum of interview and promptly provide that information electronically to district officials. Ultimately, district officials assess the facts supporting the suspension, take any final action, and notify officials in headquarters.

Withdraw the plant’s grant of inspection. If the plant fails to respond to FSIS’s concerns about repeated and/or serious violations, the district offices may decide to withdraw all inspectors. Without FSIS inspectors on site, the plant’s products cannot enter interstate or foreign commerce. The FSIS Administrator may file a complaint to withdraw the plant’s grant of inspection, and if the grant is withdrawn, the plant must then reapply for and be awarded a grant of inspection before it may resume operations.
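To make the escalation ladder above concrete, the following is a minimal sketch in Python of the decision logic as we describe it in this report. The names, severity categories, and function are illustrative only; they are not FSIS terminology or software.

```python
from enum import Enum, auto

class Action(Enum):
    NONCOMPLIANCE_REPORT = auto()  # documents the violation and needed corrections
    REGULATORY_CONTROL = auto()    # reject tag on equipment or a plant area
    SUSPENSION = auto()            # halts the plant's entire operation
    WITHDRAW_INSPECTION = auto()   # plant cannot ship product in interstate commerce

def enforcement_options(inhumane_treatment: bool, egregious: bool,
                        repeated_serious_violations: bool) -> list[Action]:
    """Return the actions available under the guidance as described above."""
    actions: list[Action] = []
    if inhumane_treatment:
        # Both actions are required once animals are injured or treated inhumanely.
        actions += [Action.NONCOMPLIANCE_REPORT, Action.REGULATORY_CONTROL]
    if egregious:
        # Suspension is permitted ("may") but not mandatory.
        actions.append(Action.SUSPENSION)
    if repeated_serious_violations:
        # District offices may seek withdrawal of the grant of inspection.
        actions.append(Action.WITHDRAW_INSPECTION)
    return actions
```

The sketch makes visible the asymmetry at the heart of this report: only the first two actions are mandatory once inhumane treatment is found; suspension and withdrawal remain discretionary.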
FSIS employs inspectors at plants and in FSIS districts to enforce HMSA and to carry out its food safety inspections. In the plant, FSIS employs inspectors-in-charge, online and offline inspectors, and relief inspectors. Inspectors-in-charge are the chief inspectors in the plant and may or may not be veterinarians. These inspectors are responsible for reporting humane handling activities for each shift, as well as for carrying out food safety responsibilities and making enforcement decisions in consultation with district officials when necessary. Online inspectors are typically assigned specific duties on the slaughter line, such as inspecting carcasses and animal heads; however, they may also perform some humane handling inspection duties. Offline inspectors conduct a variety of inspection activities throughout the plant and may also perform some humane handling inspection activities. FSIS also employs permanent relief inspectors, who step in for plant inspectors who are absent for a period of time and may also observe humane handling. The plant inspectors and the inspectors-in-charge are supervised by frontline supervisors, who oversee multiple plants. Each plant has at least one FSIS veterinarian who is responsible for examining livestock prior to slaughter and performing humane handling activities. Some plants may require two veterinarians, depending on the volume of animals slaughtered at the plant and the number of operating shifts. Figure 3 provides an overview of FSIS personnel involved in the enforcement of HMSA.

Although FSIS does not require inspectors to observe the entire handling and slaughter process during a shift, it requires inspectors-in-charge to record the amount of time that the FSIS inspectors collectively devoted to observing humane handling during a shift. The inspectors-in-charge enter this information into a data tracking system known as the Humane Activities Tracking System. At the district level, the district veterinary medical specialist (DVMS) in each of FSIS’s 15 districts serves as the liaison between the district office and headquarters on all humane handling matters. These employees are directed to visit each plant within their district over a 12- to 18-month period and review the humane handling practices at each plant. DVMSs may also coordinate the verification of humane handling activities and educate plant inspectors on relevant humane handling information in directives, notices, and other information passed from headquarters through the district office to inspectors in the field.

Industry groups and animal welfare organizations have recently recommended actions to improve HMSA enforcement. In 2008 testimony as an expert witness, Dr. Temple Grandin proposed that FSIS guidance on humane handling be made clearer—especially in determining when humane handling incidents at slaughter plants should be considered egregious violations of HMSA. She has also suggested that FSIS adopt a numerical scoring system—which has been adopted by the American Meat Institute—to determine how well animals are being stunned and handled at the plants. The system has different standards for different species of animal and can be adjusted to fit plants that slaughter fewer animals. Overall, the system seeks to reduce the subjective nature of inspections by using objective measures to help slaughter plants improve their humane handling performance.
In addition, the Humane Society of the United States has proposed a variety of reforms to strengthen HMSA enforcement, including requiring FSIS inspectors to observe the entire humane handling and slaughter process during a shift.

According to our survey results and analysis of FSIS data, inspectors have not taken consistent actions to enforce HMSA once they have identified a violation. These inconsistencies may be due, in part, to weaknesses in FSIS’s guidance and training for key inspection staff. While FSIS expects its inspectors to use their professional judgment, based on the guidance, in deciding on enforcement actions, industry and others are using other tools to assist their efforts to improve humane handling performance. Furthermore, although FSIS has taken steps to correct data weaknesses in HMSA reporting that we noted in 2004, it has not used these data to analyze HMSA enforcement across districts and plants to identify inconsistent enforcement. For these reasons, FSIS cannot ensure that it is preventing the abuse of livestock at slaughter plants or that it is meeting its responsibility to fully enforce HMSA.

According to FSIS officials, inspectors are to use their judgment in deciding whether to suspend a plant’s operations or take the less stringent enforcement actions (that is, issue a noncompliance report and a regulatory control action) when a humane handling violation occurs. However, FSIS guidance is unclear on what constitutes excessive electrical prodding, such as the number of times an animal can be prodded before the inspector should consider the prodding to be excessive and therefore egregious. According to FSIS’s guidance, if the inspector determines that an action was egregious, the inspector may choose to suspend plant operations but is not required to do so. In interviews, U.S. meat industry representatives expressed concerns about the inconsistency of HMSA enforcement across districts. For example, according to American Meat Institute officials, the inconsistency in HMSA enforcement is the single most critical issue for the industry; furthermore, one official noted that a number of the differences in interpretation of HMSA compliance are related to determining whether or not an animal is sensible after stunning. In addition, the expert we consulted testified in April 2008 that FSIS inspectors need better training and clear directives to improve the consistency of HMSA enforcement.

Our survey results indicate differences in the enforcement actions that inspectors reported they would take when faced with a humane handling violation. In our survey, we asked inspectors their views on the electrical prodding of more than 50 out of 100 animals. Figure 4 shows the inspectors’ responses to questions concerning electrical prodding. Under FSIS’s guidance, inspectors are directed to issue a noncompliance report and take a regulatory control action in cases of excessive electrical prodding, but suspension is not required. However, the expert we consulted told us that she considers these cases to be egregious humane handling violations that should result in suspensions. In addition, according to an FSIS training scenario, electrical prods are never to be used on the anus, eyes, or other sensitive parts of the animal.
As figure 4 shows, 49 percent of the inspectors surveyed reported that they would either take a regulatory control action, such as placing a reject tag on a piece of equipment, or suspend a plant’s operations for the electrical prodding of most animals, while 29 percent reported that they would take none of these actions or did not know what action to take. Furthermore, 67 percent of the inspectors surveyed reported that they would either take a regulatory control action or suspend operations for electrical prodding in the rectal area, and 10 percent reported that they would take none of these actions or did not know what action to take. FSIS regulations prohibit electrical prodding that the inspector considers to be excessive. FSIS guidance also states that excessive beating or prodding of ambulatory or nonambulatory disabled animals is egregious abuse—and may therefore warrant suspension of plant operations.

From inspectors’ noncompliance reports, we identified several specific incidents in which inspectors did not either take a regulatory control action or suspend plant operations. For example:

In 2008, in the Denver district, the FSIS inspector reported observing a plant employee excessively using an electrical prod as his primary method to move cattle—using the prod approximately 55 times to move about 46 head of cattle into the stun box. Cattle vocalized at least 15 times, which the inspector believed indicated a high level of stress. The FSIS inspector stated that this incident constituted excessive use of the electrical prod. As stated in FSIS guidance, excessive use of an electrical prod is an egregious violation that calls for the issuance of both a noncompliance report and a regulatory control action and for which an inspector may suspend plant operations. In this instance, the inspector stated that he had issued a noncompliance report; he did not state that he took a regulatory control action, and he did not suspend operations at the plant, as the guidance allows. In the opinion of the expert we consulted, this was an egregious instance that should have resulted in a suspension.

In 2007, in the Minneapolis district, an FSIS inspector reported observing plant employees using electrical prods excessively to move hogs into the stunning chute. The animals became excited, jumping on top of one another and vocalizing excessively. From the noncompliance report, it is unclear what, if any, regulatory actions were taken. According to FSIS regulations, electrical prods are to be used as little as possible in order to minimize excitement and injury; any use of such implements that an inspector considers excessive is prohibited.

In 2008, in the Dallas district, the FSIS inspector reported that a plant employee used an electrical prod to repeatedly shock cows in the face and neck in an effort to turn them around in an overcrowded area. The inspector deemed the use of the electrical prod excessive, but the report does not indicate whether any regulatory control action was taken.

With regard to stunning, our survey results and review of noncompliance records also show inconsistent enforcement actions when humane handling violations occurred. As figure 5 shows, 23 percent of inspectors reported they would suspend operations, while 38 percent would issue a regulatory control action, for multiple unsuccessful captive bolt gun stuns.
Similarly, 17 percent reported they would suspend operations for multiple misplaced electrical stuns, and 37 percent would issue a regulatory control action. According to FSIS guidance, egregious abuses that could result in a plant suspension include stunning animals and allowing them to regain consciousness and making multiple attempts to stun an animal, especially in the absence of immediate corrective measures. However, it is unclear when a suspension is warranted, even if the acts are deemed to be egregious: FSIS’s guidance simply states that an inspector-in-charge may immediately suspend the plant if there is an egregious humane handling violation, but there is no clear directive to do so. In the opinion of the expert we consulted, if over 10 percent of the animals require a second shot, or if over 5 percent of pigs experience an improperly placed electrical stun, plant operations should be suspended. FSIS agreed that these incidents are troubling, and possibly egregious, but did not comment further. Figure 5 shows our survey results on stunning.

We also identified several incidents in FSIS’s noncompliance reports in which inspectors did not suspend plant operations or take a regulatory control action. For example:

In 2009, in the Raleigh district, a plant employee stunned a bull twice in the head with a captive bolt gun, but the bull remained sensible. Instead of restunning the animal with the captive bolt gun, the employee drove a steel instrument used to sharpen knives into the open hole in the bull’s head in an attempt to make the animal insensible. The bull rose to its feet and vocalized in apparent pain until it was eventually rendered insensible with a bullet to the head. FSIS regulations do not recognize this steel instrument as an acceptable stunning method. The inspector placed a reject tag on the stun box and cited the incident as egregious in the noncompliance report but did not suspend operations. In the opinion of the expert we consulted, this incident was an example of an egregious HMSA violation that should have resulted in a suspension.

In 2008, in the Denver district, the inspector reported that the first attempt to stun a bull with a captive bolt stunner appeared to misfire, resulting in smoke and the smell of powder but no response from the bull. A second stunning attempt appeared to render the bull unconscious in the stun box, but it was followed by a third stunning attempt while the bull was still in the stun box. The employee then allowed the bull to roll out into the pit for shackling. The bull appeared unconscious but was still breathing rhythmically, indicating that the animal was still sensible. The employee then entered the pit, stunned the bull again, and started conversing with another employee. The bull once again started breathing rhythmically while being shackled, a sign that it still had not been rendered insensible to pain as the law requires. In response, the DVMS asked the employee to stun the bull again, and this stun rendered the bull unconscious and no longer breathing rhythmically. According to the report, the plant received a noncompliance report, but no regulatory control action was taken, as called for by guidance. In the opinion of our expert consultant, a regulatory control action should have been taken in this case because of the multiple stuns that left the animal breathing rhythmically.
We also identified several other types of humane handling violations for which inspectors took inconsistent enforcement actions. For example, according to FSIS’s regulations, animals are not to be moved from one area to another faster than a normal walking speed, with minimum excitement and discomfort; a faster speed could result in animals being driven over each other. Furthermore, animals in a holding pen are to have access to water and, if held longer than 24 hours, access to food. According to the expert we consulted, deliberately driving animals over the top of others and failing to provide water for animals held over a weekend are egregious humane handling violations that, in her opinion, should result in plant suspensions. However, as figure 6 shows, although most inspectors would take an enforcement action, including a regulatory control action, for these violations, only 40 percent of inspectors surveyed would suspend plant operations for driving animals over each other, and only 55 percent would suspend plant operations for failing to provide water over a weekend.

The lack of consistency in enforcement actions is highlighted by inspectors’ responses to our question about when they would suspend plant operations. According to our survey results, less than one-third of the inspectors-in-charge at the very small and small plants reported that they would be likely to suspend plant operations for multiple incorrect placements of electrical stunners or for the electrical prodding of most animals. Inspectors-in-charge at large plants had more stringent views on enforcement actions than those at very small plants. For example, inspectors-in-charge at large plants more frequently reported suspension as the enforcement action that should be taken compared with inspectors-in-charge at very small plants. Figure 7 illustrates three humane handling scenarios in which significant differences were observed between large and very small plants: inspectors-in-charge at large plants were more likely than those at very small plants to report that they would suspend plant operations for multiple incorrect electrical stuns, driving animals over the top of others, and electrically prodding most animals.

We found similar indications of inconsistent enforcement across districts. According to our analysis of FSIS data, from calendar years 2005 through 2007, 10 of FSIS’s 15 districts—responsible for overseeing 44 percent of all animals slaughtered nationwide—suspended 35 plants for HMSA violations. The remaining 5 districts—responsible for overseeing 56 percent of all livestock slaughtered nationwide—did not suspend any plants. For example, the Des Moines and Chicago districts, which oversee the first- and second-highest volumes of livestock slaughtered nationwide, respectively, were among the 5 districts that had never issued a suspension until February 2008, according to our analysis. Before 2008, these five districts issued noncompliance reports, sometimes with regulatory control actions, such as a reject tag on a piece of equipment, rather than suspending an entire plant’s operations. For example, in 2007, in the Lawrence district, a hog was observed walking around the stunning chute grunting and bleeding from the mouth and forehead. The animal had been stunned improperly, and plant personnel stated that both stun guns were not working and were being repaired.
Because the plant did not have an operable stun device, the animal suffered for at least 10 minutes while the plant repaired the gun. The FSIS inspector applied a reject tag to the stunning box, and stunning operations in the area were halted until the plant had taken corrective actions, but the record did not state the amount of time that stunning was stopped. According to FSIS’s guidance, however, stunning animals and then allowing them to regain consciousness is considered egregious.

Suspensions increased overall following the February 2008 Westland/Hallmark incident in California. Across all districts, more than three-quarters of all suspensions in calendar years 2007 and 2008 were for stun-related violations. In the 10 districts that suspended operations in calendar years 2005 and 2006, over 40 percent of those suspensions were for stunning violations. (See app. III for detailed information on the number of HMSA enforcement actions over the period we reviewed.) Furthermore, following the incident, FSIS directed inspectors to increase the amount of time they devoted to humane handling by 50 to 100 percent for March through May 2008, and it found that, when the amount of time spent on humane handling increased, the number of noncompliance reports increased as well.

The Westland/Hallmark incident highlighted the problems that can occur when inspection staff inconsistently apply their discretion in determining which enforcement actions to take for humane handling violations. According to the USDA Inspector General’s 2008 report that followed the incident, between December 2004 and February 2008, FSIS inspectors did not write any noncompliance reports or suspend operations for humane handling violations at the Westland/Hallmark plant. Nevertheless, FSIS personnel acknowledged that at least two incidents of humane handling violations had occurred at the plant during this period, both of which involved active abuse of animals. Instead of taking an enforcement action, the inspectors verbally instructed plant personnel to discontinue the action or practice in question. The report also stated that Westland/Hallmark had an unusual lack of noncompliance reports and that inspectors did not believe they should write a noncompliance report if an observed violation was immediately resolved.

Finally, our analysis of FSIS enforcement data from calendar year 2005 through August 2009 shows that suspensions were not consistently used to enforce HMSA. Figure 8 shows the total number of suspensions over the period and reveals that suspensions spiked from a low of 9 in calendar year 2005 to a high of 98 in 2008—a nearly 11-fold increase—and that, as of August 2009, FSIS had suspended operations at 50 plants. Based on our review of the suspension records, it appears that this spike followed the February 2008 Westland/Hallmark incident. More than three-quarters of these suspensions resulted from failure to render at least one animal insensible on the first stun. From calendar year 2005 through 2008, the number of noncompliance reports issued for humane handling decreased overall, even as the number of animals slaughtered increased from about 128 million in 2004 to about 153 million in 2008. While we cannot determine from FSIS data and inspection reports the extent to which HMSA violations were overlooked, we attempted to determine whether a much higher rate of enforcement actions was taken on the days that DVMSs conducted their audits for humane handling.
However, according to FSIS officials, the records of DVMS audit visits are incomplete, and we were therefore unable to conduct a complete analysis. As a result, we could not fully determine how often DVMSs conducted humane handling audit visits or whether there is a higher rate of enforcement actions on the days that DVMSs conducted their audits. Furthermore, our survey found that 85 to 95 percent of inspectors-in-charge who had taken some type of enforcement action reported that their immediate supervisor, the DVMS, and other district management personnel were moderately or very supportive of their actions.

We found that incomplete guidance and inadequate training may contribute to the inconsistent enforcement of HMSA. Specifically, according to our survey results, inspectors at the plants we surveyed would like more guidance and training in seven key areas, as figure 9 shows. Furthermore, an estimated 457 inspectors-in-charge, or those at more than half the plants surveyed, reported that additional FSIS guidance or training is needed on whether a specific incident of electrical prodding requires an enforcement action. In addition, of the 80 inspectors who provided detailed responses to our survey, 15 noted the need for additional guidance, including clarification of what actions constitute egregious actions, and 25 identified a need for additional training in several key areas.

With respect to guidance, in 2004 we recommended that FSIS establish additional clear, specific, and consistent criteria for district offices to use when considering whether to take enforcement actions because of repeat violations. FSIS agreed with this recommendation and delegated to the districts the responsibility for determining how many repeat violations should result in a suspension. However, incidents such as those at the Bushway Packing plant in Vermont suggest that this delegation was not successful, and to date, FSIS has not issued additional guidance. Operations at this Vermont plant were suspended three times, in May, June, and July 2009, for egregious humane handling violations. Two of the suspensions were for dragging nonambulatory conscious veal calves that were about 1 week old. According to a document describing the third incident, an employee threw a calf from the second tier of a truck to the first so that the calf landed on its head and side. FSIS has not issued any guidance to the district offices on how many suspensions should result in a request for a withdrawal of a grant of inspection. If specific guidance had been available on when to request such a withdrawal, the district office might have decided to request one before the October 2009 incident at the plant. Had FSIS ultimately withdrawn the grant, the plant would have been required to reapply for, and be awarded, a grant of inspection before it could resume operations.

Regarding training, FSIS relies primarily on on-the-job training by DVMSs—who are directed to visit each plant within their district over a 12- to 18-month period. In addition, supervisory veterinarians and inspectors-in-charge provide on-the-job training. FSIS officials we spoke with said that the on-the-job training needs to be integrated into a formal training program and that efforts are under way to do so. FSIS also provides some humane handling training electronically.
For example, in February 2009, all inspectors assigned to slaughter plants were required to complete a mandatory 1-hour basic humane handling course online, which the agency can track centrally. FSIS officials also stated that, since 2005, incoming inspectors have been required to complete some humane handling training during orientation. According to FSIS officials we spoke with, the agency has asked the districts to begin entering data on the completion of other humane handling courses so that this information can also be tracked centrally. Our survey results suggest, however, that even inspectors-in-charge who had completed the mandatory humane handling training in February 2009 may not have been sufficiently trained. For example, an estimated 449, or 57 percent, of the inspectors-in-charge at the plants we surveyed from May through July 2009 reported incorrect answers on at least one of six possible signs of sensibility. Specifically, an estimated 133, or 18 percent, of the inspectors-in-charge failed to identify rhythmic breathing as a sign of sensibility. In addition, in 2004, we had reported that inspectors did not have the knowledge they needed to take enforcement actions when appropriate. At that time, most of the deputy district managers, and about one-half of the DVMSs, noted that an overall lack of knowledge among inspectors about how they should respond to an observed noncompliance had been a problem in enforcing HMSA.

Several outside observers have also commented on the need for better FSIS training. Specifically:

In November 2008, USDA’s Office of Inspector General found that FSIS does not have a formal, structured developmental program and system in place to ensure that all of its inspection and supervisory staff receive both formal and on-the-job training to demonstrate that they possess the competencies essential for FSIS’s mission-critical functions. The Inspector General recommended a structured training and development program that includes continuing education to provide the organizational control needed to demonstrate the competency of the inspection workforce. The Inspector General also stated that the workforce needs to be certified annually.

In 2009, the National Academies’ Institute of Medicine recommended testing and improved training, with special emphasis on the quality and consistency of noncompliance reports for food safety issues. The institute noted that the decision to issue a noncompliance report is subjective and that inspectors’ experience levels and training differ; supervisory review by inspectors-in-charge may likewise be variable or subject to bias and, therefore, unreliable.

In 2009, representatives of the three major industry associations—the American Meat Institute, the American Association of Meat Processors, and the National Meat Association—told us that more training on humane handling is needed for FSIS inspectors. Specifically, the American Meat Institute identified insensibility as a critical issue in enforcement and noted that additional training on the signs of insensibility, such as blinking and the righting reflex, would be helpful.

In 2009, the Humane Society of the United States recommended that FSIS inspectors receive adequate in-person, on-the-ground training so they can properly assess the conditions and treatment of animals.

FSIS officials stated that the agency launched a voluntary HMSA training program for plant employees at small slaughter plants in 2009.
These plants represent the highest humane handling risk, according to FSIS officials, because plant management may not have sufficient resources to fully train plant employees on HMSA practices.

In recent years, the meat industry has adopted numerical scoring and video surveillance to improve plants' overall humane handling performance. According to FSIS officials, the agency does not require the use of such objective measures or scoring to aid judgment for enforcement purposes because situations are highly variable, and inspectors and higher-level officials are to use their judgment in conjunction with FSIS guidance. However, in December 2009, FSIS provided DVMSs with guidance on what it characterized as an objective system to facilitate determinations of the problems that plants in their districts need to address. Several of the DVMSs we interviewed acknowledged that they have been using a form of numerical scoring on their own to assist their efforts in evaluating HMSA enforcement at the plants.

The numerical scoring system was developed in 1996 by Dr. Grandin to determine how well animals were being stunned and handled at the plants. The system has different standards for different species of animal and can be adjusted to fit plants that slaughter fewer animals. The system seeks to reduce the subjective nature of inspections and uses scoring to help identify areas in need of improvement. For example, in a large plant, if more than 5 out of 100 animals were not rendered insensible on the first stun, the plant would fail the evaluation. Other standards include the percentage rates for slips and falls and the number of animals moved by an electrical prod. Once a plant is aware of its weaknesses, it can consider its options to improve its humane handling performance, such as repairing equipment and floors to provide better footing for the animals and targeting employee training in those specific areas.

The numerical scoring system has been adopted by industry and animal welfare organizations, as well as one federal agency. At the federal level, according to agency officials, USDA's Agricultural Marketing Service uses this system to rate slaughter plants to determine whether to approve or deny them to provide meat to the National School Lunch Program. In addition, the American Meat Institute and independent audit firms employed by restaurant chains, such as Burger King and McDonald's, have adopted this numerical scoring system to evaluate humane handling at their associated slaughter plants. According to industry experts, a publicized humane handling incident at these plants could damage the chains' business interests. Recently, the Canadian Food Inspection Agency proposed adoption of numerical scoring for federally inspected plants in Canada.

FSIS officials have stated that while the numerical scoring system may be useful in helping plants determine their humane handling performance, it should not be used to assess compliance with HMSA. Because the numerical scoring system allows for a certain percentage of stunning failures, using it would be inconsistent with the HMSA requirement that all animals be rendered insensible on the first blow. However, as we noted earlier, this requirement has not been met consistently by slaughter plants because of human error, equipment failures, and animal movement, leaving FSIS to exercise its discretion in determining which violations require enforcement action.
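To make the scoring logic concrete, the following short Python sketch illustrates how a Grandin-style audit of first-stun effectiveness might be tallied against a plant-size threshold. The 5 percent failure threshold for a large plant is taken from the example above; the function name and all other details are illustrative assumptions, not FSIS or American Meat Institute specifications.

    # Illustrative sketch of a Grandin-style numerical scoring audit.
    # The 5-percent first-stun failure threshold for a large plant comes
    # from the example in the text; everything else is hypothetical.

    def score_stunning(observations, max_failure_rate=0.05):
        """Return (passed, failure_rate) for first-stun effectiveness.

        observations: list of booleans, True if the animal was rendered
        insensible on the first stun, False otherwise.
        """
        if not observations:
            raise ValueError("no animals observed")
        failures = sum(1 for ok in observations if not ok)
        failure_rate = failures / len(observations)
        return failure_rate <= max_failure_rate, failure_rate

    # Example: 6 failed first stuns out of 100 animals observed exceeds
    # the threshold, so the plant fails the evaluation, mirroring the
    # "more than 5 out of 100" example above.
    observed = [True] * 94 + [False] * 6
    passed, rate = score_stunning(observed)
    print(f"failure rate {rate:.0%}; audit {'passed' if passed else 'failed'}")

A scheme of this kind trades some nuance for repeatability: the same observations always yield the same pass/fail result, which is what distinguishes it from the case-by-case judgment FSIS currently relies on.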
Video surveillance is another tool being increasingly used by slaughter plants. Specifically, slaughter plants can hire specialized video technology companies to record plant operations and audit plant performance through remote video surveillance, using the American Meat Institute numerical scoring system to assess humane handling performance at the plant. These video technology companies can also provide slaughter plant management with continuous feedback and customized progress reports documenting humane handling performance at their plants. According to the testimony of one video surveillance company, this technology helps plant management provide positive reinforcement to the workers who are performing well and helps identify workers who may need further training.

In November 2008, the Office of Inspector General recommended that FSIS determine whether FSIS-controlled, in-plant video monitoring would be beneficial in preventing and detecting animal abuses. However, FSIS officials responded that FSIS-controlled video cameras would not provide the definitive data needed to support enforcement of humane handling requirements, as compared with the direct, ongoing, and random verification of humane handling practices at the plants. According to the Humane Society of the United States, while video surveillance might serve as a supplemental tool, it does not negate the need for real-time inspectors' observations.

According to our survey results, between 52 and 66 percent of inspectors-in-charge at large plants reported that video surveillance would be moderately or very useful in each of the five plant areas. Figure 10 illustrates our survey results on the usefulness of video surveillance for all plants. FSIS officials recently told us that they are exploring potential uses of video surveillance, but the agency had not released any official policy change as of November 2009. In addition, of the 96 inspectors who provided written comments on the usefulness of video surveillance in our survey, the most frequent response was that video surveillance would facilitate more inspections in different plant locations and provide a true picture of animal handling when plant staff do not know that the inspector is watching. Because video surveillance can provide continuous footage of ongoing activities in the plant, it may provide evidence regarding alleged violations when inspectors do not directly observe humane handling. For example, according to 39 percent of inspectors-in-charge at large plants, plant staff improved their handling behavior upon the inspectors' arrival. Furthermore, 25 percent of inspectors-in-charge at the large plants in our survey reported that plant staff often, or always, alert each other about inspectors' movements between areas, by radio or whistle, for example.

Although FSIS collects humane handling data, we found that it is not fully analyzing and using these data to help ensure more consistent HMSA enforcement. For example, when we compared the amount of time devoted to humane handling activities at plants of similar size and species to determine whether there were any inconsistencies among districts, we found substantial differences for large plants that slaughter market swine. Specifically, of the six slaughter plants that slaughter between 700,000 and 900,000 market swine, the average time that a plant devoted to humane handling ranged from 1.8 to 9.7 hours per shift in 2008.
For the nine plants that slaughter between 2 and 3 million market swine, the average time devoted to humane handling ranged from 2.7 to 5.2 hours per shift in 2008.

In January 2004, we also reported that FSIS was not adequately analyzing the narrative found in noncompliance reports. As of November 2009, FSIS headquarters officials told us that they had not begun an effort to analyze the narratives in noncompliance reports. Instead, they told us, they rely on district officials to monitor whether plant inspectors have taken consistent enforcement action for each incident. Headquarters officials also stated that they review only the percentage of humane handling activities that are recorded as noncompliant in an FSIS database, known as the Performance-Based Inspection System. However, without analyzing the narrative, FSIS cannot readily determine the reasons for the noncompliance reports—for example, whether these reports were issued for one or two failed stuns, which is not uncommon, rather than three or four failed stuns, which might be considered an egregious violation. Thus, FSIS cannot easily analyze noncompliance reports across the districts to identify trends or patterns in plant violations or potential enforcement inconsistencies across districts.

Also in 2004, we reported that FSIS was not tracking humane handling activities. In response, FSIS created the Humane Activities Tracking System, a database that inspectors use to record the amount of time they devote to humane handling activities in each plant. Inspectors are directed to record the total amount of time devoted to humane handling activities for each plant shift in 15-minute increments. According to our survey results, inspectors have differing views on the accuracy of the amount of time recorded in the tracking system. Specifically, 19 percent reported that the time recorded in this system was slightly or not at all accurate, whereas 45 percent reported that the time was very accurate, and 36 percent reported that the time was moderately accurate. Furthermore, of the 93 inspectors who provided written responses on the reasons for the tracking database's inaccuracies, 56 pointed out that breaking out activities into 15-minute increments limited their ability to record their actual time spent, and 29 stated that humane handling activities are concurrent with other inspection activities. In addition, 14 responses noted that supervisors or district offices had placed either a minimum or a maximum on the amount of time that could be charged to humane handling. Also, several of the DVMSs we interviewed reported that the Humane Activities Tracking System does not readily produce the types of reports that are needed to oversee and manage humane handling activities in their districts. For example, they reported that the system lacked the capability to readily produce comparative analyses of similar plants to help identify trends or anomalies across districts.

FSIS began analyzing data across districts from the Humane Activities Tracking System in 2008—4 years after it developed the system. Also in 2008, FSIS established the Data Analysis Integration Group in headquarters, with staff in the regional field offices to support district offices' data needs.
The group began reporting quarterly on HMSA enforcement, including the amount of time inspectors have devoted to HMSA, the number of plants suspended, and the number of noncompliance reports issued in 2009, although FSIS has not analyzed the narrative in the noncompliance reports.

FSIS cannot fully identify trends in its inspection resources—specifically, funding and staffing—for HMSA enforcement, in part because it cannot track humane handling inspection funds separately from the inspection funds spent on food safety activities. Furthermore, FSIS does not have a current workforce planning strategy to guide its efforts to allocate staff to inspection activities, including humane handling. According to FSIS officials, funds for humane handling come primarily from two sources: (1) FSIS's general inspection account and (2) the account used to support the Humane Activities Tracking System. The general inspection account supports all FSIS inspection activities, both food safety and other activities, including humane handling enforcement. Because the same inspectors may carry out these tasks concurrently, FSIS cannot track humane handling funds separately, according to FSIS officials. According to FSIS officials, for the most part, inspectors are to devote 80 percent of their time to food safety inspection activities and 20 percent of their time to humane handling inspection and other activities. However, our analysis of resources shows that this is not the case. As table 1 shows, we estimated that the percentage of funds dedicated to HMSA enforcement has been above 1 percent of FSIS's total annual inspection appropriation, although it rose slightly in 2008, the year in which suspensions spiked following the 2008 Westland/Hallmark incident in California. While FSIS does not track humane handling inspection activities separately, FSIS's budget office estimates the funds needed to carry out these activities. Using FSIS's budget estimates for HMSA enforcement for fiscal years 2005 through 2008, we estimated the percentage of FSIS's total annual appropriation for its federal food safety inspection account that would have gone to HMSA enforcement. In contrast to FSIS's inability to track humane handling in its general inspection fund, FSIS officials noted, the DVMSs—whose primary responsibility is humane handling activities—have a special activity code that enables FSIS to track their portion of expenses, including salaries and travel; however, these expenses represent only a small portion of the total amount FSIS spends on humane handling inspection activities.

Although FSIS does not track funds spent on humane handling inspection activities separately from other inspection activities, it does track the funds specifically dedicated to supporting the Humane Activities Tracking System. For fiscal years 2005 through 2009, Congress designated a total of nearly $13 million specifically for the Humane Activities Tracking System, and FSIS has spent roughly that amount on the system, according to our review of FSIS budget data. For fiscal years 2005 and 2006, FSIS was required to spend the funding designated for the Humane Activities Tracking System within 2 years of the appropriation. However, beginning with fiscal year 2008, Congress folded the funding for the Humane Activities Tracking System into a larger FSIS information technology initiative, and the funding is available to FSIS until it is expended.
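The percentage estimate described above is simple arithmetic: FSIS's budget office estimate for HMSA enforcement divided by the total annual inspection appropriation. The short Python sketch below illustrates the computation; all dollar figures are hypothetical placeholders, not the actual amounts, which appear in table 1.

    # Hypothetical illustration of estimating the share of FSIS's annual
    # inspection appropriation devoted to HMSA enforcement. All dollar
    # amounts are placeholders, not figures from this report.

    budget_estimates = {       # budget office estimates for HMSA enforcement
        2005: 12_000_000,
        2006: 12_500_000,
        2007: 13_000_000,
        2008: 15_000_000,      # rises in the year suspensions spiked
    }
    appropriations = {         # total annual inspection appropriations
        2005: 970_000_000,
        2006: 990_000_000,
        2007: 1_010_000_000,
        2008: 1_060_000_000,
    }

    for year, estimate in budget_estimates.items():
        share = estimate / appropriations[year] * 100
        print(f"FY{year}: {share:.2f} percent of the inspection appropriation")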
As of November 2009, FSIS had not completed integrating the Humane Activities Tracking System into the information technology initiative, and FSIS officials could not provide an estimate of when the agency expected to do so.

Although FSIS cannot directly account for the funding designated for humane handling activities, Congress in recent years has required FSIS to devote a minimum number of full-time equivalent (FTE) staff to humane handling. Accordingly, FSIS estimates the total number of FTEs devoted to humane handling and reports this information to Congress every year. FSIS develops this estimate using Humane Activities Tracking System data on time spent on humane handling inspection activities and average inspector and veterinarian salaries. Table 2 shows that FSIS has reported exceeding Congress's minimum FTE requirements for humane handling enforcement, according to FSIS's calculation.

For fiscal year 2010, FSIS officials told us, they planned to use $2 million of their inspection funds to enhance oversight of humane handling enforcement by hiring 24 staff, including both public health veterinarians and inspectors. FSIS officials planned to place these additional staff strategically at locations where they are most needed to support humane handling enforcement in addition to their other food safety responsibilities. FSIS officials stated that the agency determined staffing needs on the basis of such factors as the highest number of animals condemned on postmortem inspection, the number of animals inspected and passed for human consumption, and the amount of time spent conducting humane handling inspection activities. In addition, FSIS officials stated that the agency intends to establish a headquarters-based humane handling coordinator position. This coordinator will be primarily responsible for consistently overseeing humane handling activities.

While FSIS has increased its hiring, it has not done so in the context of an updated strategic workforce plan. Such a plan would help FSIS align its workforce with its mission and ensure that the agency has the right people in the right place performing the right work to achieve the agency's goals. In February 2009, we reported that the FSIS veterinarian workforce had decreased by nearly 10 percent since fiscal year 2003 and that the agency had not been fully staffed over the past decade. We reported that, as of fiscal year 2008, FSIS had a 15 percent shortage of veterinarians and that the majority of these veterinarians work at slaughter plants. The FSIS 2007 strategic workforce plan—the most recent available—identifies specific actions to help the agency address some of the gaps in recruiting and retaining these mission-critical occupations over time. However, it does not address specific workforce needs for HMSA enforcement activities. FSIS officials stated that workforce planning occurs at the district level and is determined using regulations that govern the number of inspectors required at each slaughter plant. According to district officials, they have discretion in deciding where to deploy relief inspectors. Therefore, they can deploy these inspectors at plants that they believe may require more HMSA oversight. However, more than one-third of the inspectors who provided written comments in our survey noted the need for additional staff or the lack of time to perform humane handling activities.
Furthermore, inspectors at 80 percent of large plants stated that covering for others' responsibilities because of leave or vacancies has reduced the time spent on humane handling activities in those plants. While FSIS officials may need flexibility at the district level to allocate inspection resources, without an updated strategic workforce plan, the agency cannot effectively determine inspection needs across districts and adjust the inspection workforce to reflect changes in the industry and in FSIS resources. Although the strategic workforce plan indicates that the agency performs this assessment annually, FSIS officials acknowledged that the agency has not updated its strategic workforce plan since 2007. We recommended in January 2004 that FSIS periodically reassess whether the level of inspection resources is sufficient to effectively enforce HMSA. As of November 2009, FSIS officials told us that they were in the process of developing a workforce strategy but could not provide an estimated completion date.

Our body of work on results-oriented management calls for organizations to identify clearly defined goals that are aligned to available resources, develop time frames for achieving these goals, and develop performance metrics for measuring progress in meeting their goals. We have recommended that all agencies adopt strategies that include these key elements. By implementing results-oriented management principles, agencies demonstrate their efforts to resolve long-standing management problems that undermine program efficiency and effectiveness, provide greater accountability for results, and enhance congressional decision making by providing more objective information on program performance.

Although FSIS has strategic, operational, and performance plans for its inspection activities, these plans do not specifically address HMSA enforcement. That is, they do not clearly outline the agency's goals for enforcing HMSA, identify expected resource needs, specify time frames, or lay out performance metrics. Specifically, FSIS Strategic Plan FY 2008 through FY 2013 provides an overview of the agency's major strategic goals and the means to achieve those goals. However, this plan does not clearly articulate or list goals related to HMSA enforcement. Instead, the plan generally addresses agency goals, such as improving data collection and analysis, maintaining information technology infrastructure to support agency programs, and enhancing inspection and enforcement systems overall to protect public health. FSIS Office of Field Operations officials agreed that the plan does not specifically address humane handling, but they explained that the operational plans and policy performance plans contain the details concerning humane handling performance. However, as we indicate below, we did not find that these two plans provide a comprehensive strategy for HMSA enforcement:

The Office of Field Operations' Operational Plan identifies specific FSIS projects or initiatives and aligns them with the appropriate strategic goal identified in the FSIS Strategic Plan for FY 2008 through FY 2013. It also specifies the estimated dates for completion and recent information on the status of each project or initiative. According to our analysis of the July 2009 version of the operational plan, the most recent version available, humane handling activities fall under FSIS's first strategic goal—enhance inspection and enforcement systems and operations to protect public health.
While the plan identifies tasks related to humane handling inspection activities, it does not identify any humane handling program goals linked to these tasks or explain how these tasks are to be completed. For example, one of the plan's listed tasks is conducting humane handling information outreach, but the plan neither indicates how this task aligns with HMSA enforcement-related goals nor specifies the resources needed. The plan also does not set priorities for proposed activities or identify milestones that could be used to measure progress or make improvements. Additionally, the document does not match the activities with the resources needed to accomplish them. According to FSIS officials, the Office of Field Operations' operational plan is an evolving document that is continually updated throughout the course of the year.

The Office of Policy and Program Development Strategic Plan Fiscal Years 2008-2013 identifies policy goals that support the overall FSIS Strategic Plan. However, this plan does not clearly articulate or list goals related to HMSA enforcement.

Furthermore, FSIS does not have a set of performance measures for assessing the overall performance of humane handling enforcement across the districts. For example, FSIS is unable to determine whether the districts have improved their ability to enforce humane handling or are weak in their enforcement. Although FSIS officials stated that the agency collects information such as the number of noncompliance reports, the number of egregious humane handling violations, and the number of humane handling activities performed on a routine basis by the DVMSs, there is no indication of how these activities demonstrate improved enforcement of HMSA. Collecting and analyzing this type of information could be useful in identifying gaps or anomalies in performance and then developing a strategy to address them.

It is difficult to know whether the reported incidents of egregious animal handling at the slaughter plants in California and Vermont are isolated cases or indicative of a more widespread problem. Either way, it is evident from our survey results and our analysis of HMSA enforcement data that inspectors did not consistently identify and take enforcement action for humane handling violations during the period we reviewed. Furthermore, our survey results suggest that inspectors are not consistently applying their discretion as to which actions to take when egregious humane handling incidents occur, or when they are repeated, in part because the guidance is unclear. That is, the guidance states that inspectors-in-charge "may" suspend plant operations. Consequently, plants cited for the same type of humane handling incident may be subject to different enforcement actions. In January 2004, we recommended that FSIS establish additional clear, specific, and consistent criteria for enforcement actions to take when faced with repeat violations. FSIS responded by delegating this responsibility to the districts. However, incidents such as those at the Vermont plant suggest that this delegation has not been effective. While FSIS has stated that inspectors require discretion in enforcement, that discretion needs to be informed by an agency policy that ensures a consistent level of enforcement within plants and across districts. Without consistent enforcement actions, FSIS does not clearly signal its commitment to fully enforce HMSA.
In addition, to improve plants' humane handling performance, the Agricultural Marketing Service, DVMSs, and others have adopted objective industry tools, such as numerical scoring, to help identify weaknesses. However, inspectors-in-charge, who are responsible for assessing daily HMSA performance at the plants, are not directed to use such scoring tools. Effective oversight of HMSA enforcement also requires FSIS to use available data to effectively manage the program, including allocating resources. FSIS has only recently begun to do so. Until 2009, FSIS did not routinely track and evaluate HMSA enforcement data by geographic location, species, plant size, and history of compliance across districts. Although these analyses will be useful, FSIS has yet to analyze the narratives of humane handling incidents found in noncompliance reports, which would also help the agency identify weaknesses and trends in enforcement and develop appropriate strategies. Furthermore, we reiterate our January 2004 recommendation, which FSIS has not yet acted on, to periodically reassess whether its estimates still accurately reflect the resources necessary to effectively enforce the act. Finally, because FSIS does not have a comprehensive strategy for enforcing HMSA that aligns the agency's available resources with its mission and goals, and that identifies time frames for achieving these goals and performance metrics for meeting them, it is not well positioned to improve its ability to enforce HMSA.

We are making the following four recommendations to the Secretary of Agriculture to strengthen the agency's oversight of humane handling and slaughter methods at federally inspected facilities. To ensure that FSIS strengthens its enforcement of the Humane Methods of Slaughter Act of 1978, as amended, we recommend that the Secretary of Agriculture direct the Administrator of FSIS to take the following three actions:

establish clear and specific criteria for when inspectors-in-charge should suspend plant operations for an egregious HMSA violation and when they should take enforcement actions because of repeat violations;

identify some type of objective tool, such as a numerical scoring mechanism, and instruct all inspectors-in-charge at plants to use this measure to assist them in evaluating the plants' HMSA performance and determining what, if any, enforcement actions are warranted; and

strengthen the analysis of humane handling data by analyzing the narrative in noncompliance reports to identify areas that need improvement.

To ensure that FSIS can demonstrate how efficiently and effectively it is enforcing HMSA, we recommend that the Secretary of Agriculture direct the Administrator of FSIS to develop an integrated strategy that clearly defines goals, identifies resources needed, and establishes time frames and performance metrics specifically for enforcing HMSA.

We provided USDA with a draft of this report for review and comment. USDA did not state whether it agreed or disagreed with our findings and recommendations. However, it stated that it plans to use both our findings and recommendations to help improve efforts to ensure that establishments comply with HMSA and humane handling regulations. USDA also recognized the need to improve inspectors' ability to identify trends in humane handling violations and to work with academia, industry, and others to identify practices that will achieve more consistent HMSA enforcement.
USDA commented that the report contained some misstatements of fact that present a false picture of FSIS's humane handling verification and enforcement program and policies. We believe that we have fairly described FSIS policy and guidance on HMSA enforcement. In response to updated information that FSIS provided, we made appropriate revisions to clarify certain points. For example, we revised our report by deleting the portion of our analysis of suspensions that occurred on the days that DVMSs conducted humane handling audits because, on the basis of the new information provided, we believe that FSIS records of DVMS audit visits are incomplete.

USDA also questioned whether the results of our survey of FSIS inspectors provide evidence of systemic inconsistencies in enforcement. We believe they do, and we would encourage USDA to consider the views of the inspectors at the plants who are responsible for daily HMSA enforcement. Our survey results are based on strict adherence to GAO standards and methodology to ensure the most accurate results possible. Furthermore, our efforts were fully coordinated with FSIS before we distributed the survey. Specifically, we vetted all of the questions with FSIS management in advance to ensure that they would elicit responses revealing whether or not inspectors-in-charge understand how to fully enforce HMSA. In addition, we conducted numerous pretests of the survey with inspectors to ensure that we would receive the most accurate responses possible. We also coordinated with several humane handling experts who serve as FSIS consultants on training and enforcement issues to ensure that our questions would elicit the most accurate responses. USDA also provided technical comments, which we have incorporated into this report as appropriate. USDA's written comments and our responses are presented in appendix IV.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

This report examines (1) the U.S. Department of Agriculture Food Safety and Inspection Service's (FSIS) efforts to enforce the Humane Methods of Slaughter Act of 1978, as amended (HMSA); (2) the extent to which FSIS tracks recent trends in FSIS inspection resources for enforcing HMSA; and (3) FSIS's efforts to develop a strategy to guide HMSA enforcement. To evaluate FSIS's efforts to enforce HMSA, we interviewed officials and collected documents from FSIS's Office of Field Operations; Office of Policy and Program Development; Office of Program Evaluation, Enforcement and Review; and the 15 district offices. We examined a nonprobability sample of FSIS noncompliance reports to provide illustrative examples of humane handling violations. In doing so, we searched for the words "prod" and "stun" in 533 noncompliance reports for 2007 and 589 noncompliance reports for 2008.
Of these 1,122 reports, 272 included the word "stun" or "prod" in reference to a violation. We then selected several of the reports that described violations appearing to be egregious and provided these reports to the expert we consulted for her assessment. This expert determined that the violations described in some of these reports were not sufficiently clear or detailed to determine whether they represented egregious violations, while others were clearly egregious in her judgment. We also reviewed FSIS suspension data, data from the Humane Activities Tracking System, and district veterinary medical specialist reports in all 15 of FSIS's district offices for fiscal years 2005 through 2009. To assess the reliability of these data, we examined them for obvious errors in completeness and accuracy, reviewed existing documentation about the systems that produced the data, and questioned knowledgeable officials about the data and systems. We determined that the data were sufficiently reliable for the purposes of our review, with any limitations noted in the text. We also reviewed the HMSA enforcement reports produced by FSIS's Data Analysis Integration Group, as well as meeting minutes from the monthly district veterinary medical conferences. To understand FSIS policy and guidance on humane slaughter enforcement, we reviewed relevant regulations and FSIS instructions.

From May 2009 through July 2009, we also surveyed inspectors-in-charge—those responsible for reporting on humane handling enforcement in the plants—from a random sample of 257 livestock slaughter plants that was stratified by plant size: very small, small, and large. We adopted FSIS's definitions of small, very small, and large plants. We obtained an overall survey response rate of 93 percent. Table 3 shows the population and sample size distribution of slaughter plants by large, small, and very small plant size. Each of the inspectors-in-charge had a nonzero probability of being included, and that probability could be computed for any inspector-in-charge. Each inspector-in-charge was subsequently weighted in the analysis to account statistically for all the members of the population, including those who were not selected. We analyzed all responses, including the written responses that we received from the survey, by conducting a content analysis and categorizing the responses accordingly. The results of our survey are presented in a special publication titled Humane Methods of Slaughter Act: USDA Inspectors' Views on Enforcement, which can be viewed at GAO-10-244SP.

We met with key officials from FSIS's Office of Field Operations who are responsible for implementing HMSA at the headquarters level. To understand district officials' perspectives on HMSA enforcement, we conducted semistructured interviews with each of FSIS's 15 district veterinary medical specialists (DVMSs), 15 district managers, and 15 resource management analysts. We also performed a content analysis on all semistructured interviews to determine the districts' perspectives on training, guidance, and resources available for humane handling enforcement. To understand the perspectives of animal welfare groups and the meat industry, we met with representatives from the Humane Society of the United States, the Animal Welfare Institute, the American Meat Institute, the National Meat Association, and the American Association of Meat Processors. We reviewed these organizations' proposed reforms for HMSA enforcement.
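As a simplified illustration of the design-based weighting just described, the following Python sketch computes base weights as the inverse of each stratum's selection probability and applies them to produce a population estimate. The population and sample counts are hypothetical placeholders, not the actual figures shown in table 3; only the total of 257 sampled plants matches the text.

    # Simplified illustration of stratified survey weighting: each sampled
    # inspector-in-charge carries a weight equal to the inverse of the
    # stratum's selection probability. Counts are hypothetical placeholders,
    # except that the sample sizes total the 257 plants noted in the text.

    strata = {
        # stratum: (plants in population, plants sampled)
        "large":      (150, 60),
        "small":      (300, 90),
        "very small": (400, 107),
    }

    weights = {
        name: population / sampled     # inverse selection probability
        for name, (population, sampled) in strata.items()
    }

    # Each respondent's answer "stands in" for `weight` plants. For example,
    # estimate how many plants' inspectors would report a given answer.
    reporting_yes = {"large": 20, "small": 30, "very small": 40}
    estimate = sum(weights[s] * n for s, n in reporting_yes.items())
    print(f"estimated plants whose inspectors report 'yes': {estimate:.0f}")

Weighting of this kind is what allows responses from a sample of plants to support estimates, such as "an estimated 457 inspectors-in-charge," that describe the full population of federally inspected plants.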
We also attended the 2009 American Meat Institute Humane Handling Conference in Kansas City, Missouri. To gain a better understanding of how the industry evaluates HMSA performance, we attended the Professional Animal Auditor Certification Organization training for meat plants in Denison, Iowa, in November 2008 and visited pork and beef slaughter plants that use a numerical scoring system. We also consulted Dr. Temple Grandin, a world-renowned expert on animal welfare who has served as a consultant to industry and FSIS, written extensively on modern methods of livestock handling, and designed slaughter facilities that have helped improve animal welfare in the United States and in other countries. Dr. Grandin provided her expert opinion on select humane handling incidents that we identified as possible HMSA violations. In addition to Dr. Grandin, we also spoke with animal welfare and food safety consultants to understand key principles of humane handling techniques and enforcement. We also met with representatives of the U.S. Department of Agriculture's Agricultural Marketing Service to understand how the agency uses numerical scoring to evaluate humane handling at the plants that provide meat to the National School Lunch Program. To understand FSIS training efforts, we attended an FSIS training seminar for small and very small plants held in Dallas, Texas, in February 2009, and met with FSIS officials at the agency's Center for Learning in Washington, D.C., as well as with FSIS consultants who provide training in HMSA enforcement.

To identify the extent to which FSIS tracks recent trends in inspection resources for enforcing HMSA, we reviewed FSIS funding and staffing data for each district. We also conducted semistructured interviews with resource management analysts in each of FSIS's 15 district offices and interviewed key officials in the Resource Management and Planning Office within the Office of Field Operations. We performed a content analysis on all semistructured interviews to determine each district's perspective on inspection resources available for humane handling enforcement. To understand how FSIS reports its annual full-time equivalent staff for humane handling to Congress, we collected funding and other relevant data and met with key officials in FSIS's Office of Field Operations, Office of Management, and Office of the General Counsel, as well as the U.S. Department of Agriculture's Office of Budget and Program Analysis.

To assess FSIS's efforts to develop a strategy to enforce HMSA, we reviewed relevant FSIS strategies, including the FSIS Strategic Plan FY 2008 through FY 2013 and the FSIS 2007 Strategic Workforce Plan. We also reviewed the July 2009 version of the Office of Field Operations' Operational Plan and the Office of Policy and Program Development Strategic Plan Fiscal Years 2008-2013. Furthermore, we reviewed humane handling performance data from the Office of Policy and Program Development. We met with representatives of the FSIS Office of Management on human capital issues and officials from the Office of Personnel Management in Washington, D.C. To identify the key elements of a strategic plan, we reviewed the Government Performance and Results Act of 1993, as well as past GAO reports.

We conducted this performance audit from October 2008 to February 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Figure 11 illustrates the areas in a typical, mid-sized plant from which inspectors can observe HMSA compliance, although inspectors are not always present in all areas. Figure 12 provides an overview of the percentage of plant suspensions for HMSA enforcement that occurred in each district for calendar year 2008. The percentages were determined on the basis of the total number of plants in each district and the number of reported suspensions. As the figure illustrates, the Jackson district had the highest percentage of suspensions.

The following are GAO's comments on the U.S. Department of Agriculture's letter dated January 22, 2010.

1. Our report acknowledges FSIS's efforts to increase its humane handling enforcement efforts since the events at Westland/Hallmark. However, FSIS did not provide source material for some of the data in its comments, making it difficult to determine the completeness and reliability of the data provided. Therefore, we could not include in the report the data that FSIS provides in its comments.

2. We believe our report provides an accurate picture of FSIS's humane handling enforcement activities. However, we have modified text in response to FSIS's technical comments as appropriate or have explained why we disagree with FSIS's comments, as noted below.

3. We revised the report to reflect the agency's comments by deleting the portion of our analysis in our draft report that related to suspensions that occurred on the days that DVMSs conducted humane handling audits. The report now states that the records of DVMS audit visits are incomplete and that we were unable to conduct a complete analysis. As a result, we could not fully determine how often DVMSs conducted humane handling audit visits or whether there is a higher rate of enforcement actions on the days that DVMSs conducted their audits for humane handling. Specifically, our original analysis of the DVMS visits was based on data that FSIS provided to us during the course of our review. Based on the information originally provided to us by FSIS during our audit, these data met all of GAO's data reliability standards. In January 2010, after receiving a draft copy of this report for comment, FSIS provided us with revised suspension data and informed us that the original data it had provided were incomplete. However, after reviewing the January 2010 data, we believe the revised data also contain incomplete information, and we are therefore unable to corroborate the DVMS humane handling audit visit data.

4. We modified the report to clarify that the FSIS Administrator may file a complaint to withdraw a grant of federal inspection.

5. We modified the report to clarify the difference between a withdrawal of inspectors and a withdrawal of the grant of inspection. We added that only the FSIS Administrator may file a complaint to withdraw a grant of federal inspection; however, the district office can still request such a withdrawal. In 2004, we recommended that FSIS establish additional clear, specific, and consistent criteria for district offices to use when considering whether to take enforcement actions because of repeat violations.
We continue to believe that more specific guidance would be valuable to better address situations such as the one at the Bushway Packing plant in Vermont. It is also important to note that inspectors need to be trained to identify what actions may warrant such a request to ensure that FSIS is fully enforcing HMSA.

6. Although we did not state that numerical scoring is not regulatory in nature, we did state that using it to measure compliance would be inconsistent with the HMSA requirement that animals be rendered insensible to pain on the first blow. However, we believe that FSIS should identify some type of objective tool, such as a numerical scoring mechanism, and instruct all inspectors-in-charge at plants to use this measure to assist them in evaluating their plants' HMSA performance and determining what, if any, enforcement actions are necessary in the agency's exercise of its enforcement discretion.

7. We acknowledge in the report FSIS's plans to strengthen its analysis of humane handling data later this year. Although FSIS officials informed us of plans to implement the Public Health Information System, we found that those plans have experienced delays, and the system has yet to be implemented. For example, the Public Health Information System was originally scheduled to be fully functional in the fall of 2009; we now understand that the expected date has shifted to the end of 2010. Without the availability of this system, we analyzed the humane handling data that FSIS made available to us during the course of our review.

8. FSIS questioned whether our survey results provide evidence of systemic inconsistencies in enforcement. Our survey results are based on strict adherence to GAO standards and methodology to ensure the most accurate results possible, as summarized in appendix I of this report. From May 2009 through July 2009, we surveyed inspectors-in-charge—those responsible for reporting on humane handling enforcement in the plants—from a random sample of 257 livestock slaughter plants that was stratified by size: very small, small, and large. We obtained an overall survey response rate of 93 percent.

9. Concerning FSIS's comment on two of our survey questions, our survey results showed that 29 percent of the inspectors reported that they would not take any enforcement action or did not know what enforcement action to take for electrical prodding of most animals. Ten percent of the inspectors reported that they would take no enforcement action or did not know what action to take for electrical prodding in the rectal area. These figures suggest that FSIS may not be fully enforcing HMSA. While FSIS states that HMSA enforcement requires that inspectors make qualitative judgments since each livestock slaughter operation is unique, we found that humane handling experts in academia and industry firmly believe that such judgments need to be based on some type of objective standards, regardless of the size, construction, layout, and staffing at the plants. We appreciate FSIS's statement that it plans to examine the GAO survey results as it continues to improve its enforcement training and policies, and we urge FSIS to fully use the information in the survey results to identify practices that may achieve more consistent enforcement of HMSA.
10. We modified the report to clarify that HMSA exempts ritual slaughter from the requirement we discuss in the sentences immediately preceding the text in that section of the report—that an animal be rendered insensible to pain on the first blow—not from the general HMSA requirements.

11. Our report is correct as stated. FSIS refers to FSIS Directive 6900.2, Revision 1, section VI(A) but does not refer to section VI(B), which states that if an inspector determines that "a noncompliance with humane slaughter and handling requirements has occurred and animals are being injured or treated inhumanely," the inspector is to take two specific actions: (1) document the noncompliance on a noncompliance record and (2) take a regulatory control action. FSIS's misapplication of the directive may further illustrate the lack of clarity in FSIS policy on humane handling enforcement, which may contribute to the lack of a clear understanding at the inspector level.

12. Nearly three-quarters of the inspectors-in-charge responding to our survey reported that they were not veterinarians. While 100 percent of the inspectors-in-charge at the large plants that we surveyed were veterinarians, 88 percent of those at very small plants in our representative survey were not veterinarians, and 57 percent of those at small plants were not veterinarians. In addition, we modified the text to clarify the responsibility of FSIS veterinarians prior to slaughter.

13. We modified figure 3 to show that the patrol veterinarian position applies only to some small and very small plants.

14. On page 31 of this report, we state that "FSIS began analyzing data across districts from the Humane Activities Tracking System in 2008—4 years after it developed the system." We also recognize that the Data Analysis Integration Group began "reporting quarterly on HMSA enforcement, including the amount of time inspectors have devoted to HMSA, the number of plants suspended, and the number of noncompliance reports issued in 2009." In reviewing these reports, however, we found no analysis indicating that FSIS used these data to evaluate HMSA enforcement across the districts and plants to identify inconsistent enforcement. Also, FSIS officials acknowledged in our final meeting in November 2009 that the agency has never conducted any analysis of the noncompliance reports to determine patterns or trends in HMSA enforcement. Furthermore, although FSIS provided us with the monthly minutes of its DVMS conference calls from March through September 2009, these minutes did not identify any FSIS analysis of HMSA enforcement across the districts and possible inconsistent patterns. FSIS did not grant our request to attend the monthly DVMS conference calls in order to better understand the nature of the DVMS discussions and attempt to determine whether such analysis was under way.

15. We modified the text to indicate that there is "no clear directive to do so in guidance." Although regulations and policy documents describe when suspensions may take place, the agency has offered no clear directive as to when they should take place.

16. We changed the text to state "six possible signs of sensibility" to clarify, as noted in footnote 17 (now footnote 15), that the list of signs included two that, alone, do not generally indicate sensibility. In addition, we rechecked the coding used in our analysis to ensure that the calculations were correct. We found no discrepancies or errors. Therefore, these results support our finding that inspectors-in-charge may not have been sufficiently trained.
17. The National Academies' Institute of Medicine study found weaknesses in the noncompliance reports, and as we stated, the institute recommended testing and improved training with special emphasis on the quality and consistency of noncompliance reports for food safety issues. Because FSIS's inspection personnel are responsible for completing noncompliance reports for both food safety and humane handling violations, improving training on the quality and consistency of those reports would be useful in supporting FSIS humane handling compliance efforts.

18. Our analysis of similarly sized plants with similar slaughter volumes revealed substantial differences in the amount of time devoted to humane handling in different districts. This information might help FSIS officials manage resources and training to improve performance.

19. We disagree. We conducted this analysis in an effort to gain some perspective on the percentage of FSIS's annual inspection appropriation devoted to humane handling and estimated that it has been above 1 percent of FSIS's total annual inspection appropriation. FSIS officials informed us that inspectors should devote 80 percent of their time to food safety and 20 percent to humane handling inspection and other activities. Because FSIS cannot track humane handling funds separately, the agency was unable to provide the amount of funds that it devotes to humane handling activities. To provide context for the reader, we estimated the percentage of the total annual inspection appropriations dedicated to HMSA enforcement. We modified the text to expand the definition of the FSIS inspection fund to include other activities, such as livestock slaughter, poultry slaughter, processing inspection, egg inspection, import inspection, in-commerce compliance, district office activities, and food safety enforcement activities. However, this clarification does not change the calculation.

20. We disagree. While the OIG report states that "events that occurred at Hallmark were not a systemic failure of the inspection processes/system as designed by FSIS," it is important to note that its scope was based on observations at 10 slaughter facilities for cull (older and weaker) cows. Nevertheless, the OIG report presented 25 recommendations to strengthen FSIS activities, and FSIS accepted all of these recommendations. Specifically, OIG recommended that FSIS "reassess the inhumane handling risks associated with cull slaughter establishments and determine if more frequent or in-depth reviews need to be conducted." The report also recommended "that a structured training and development program, with a continuing education component, be developed for both its inspection and management resources." Furthermore, our survey results and analysis of HMSA enforcement data—showing that inspectors did not consistently identify and take enforcement action for humane handling violations during the period we reviewed—indicate a more widespread problem. Therefore, we continue to believe that it is difficult to know whether these incidents are isolated, and the extent of such incidents is difficult to determine because FSIS does not evaluate the narrative in noncompliance reports.

In addition to the individual named above, other key contributors to this report were Thomas M. Cook, Assistant Director; Nanette J. Barton; Michele E. Lockhart; Beverly A. Peterson; Carol Herrnstadt Shulman; and Tyra J. Thompson. Important contributions were also made by Kevin S. Bray, Michele C.
Fejfar, Justin Fisher, Carol Henn, Kirsten Lauber, and Ying Long.
Concerns about the humane handling and slaughter of livestock have grown; for example, a 2009 video showed employees at a Vermont slaughter plant skinning and decapitating conscious 1-week-old veal calves. The Humane Methods of Slaughter Act of 1978, as amended (HMSA), prohibits the inhumane treatment of livestock in connection with slaughter and requires that animals be rendered insensible to pain before being slaughtered. The U.S. Department of Agriculture's (USDA) Food Safety and Inspection Service (FSIS) is responsible for enforcing HMSA. GAO was asked to (1) evaluate FSIS's efforts to enforce HMSA, (2) identify the extent to which FSIS tracks recent trends in resources for HMSA enforcement, and (3) evaluate FSIS's efforts to develop a strategy to guide HMSA enforcement. Among other things, GAO received survey responses from inspectors at 235 plants and examined a sample of FSIS noncompliance reports and suspension data for fiscal years 2005 through 2009.

GAO's survey results and analysis of FSIS data suggest that inspectors have not taken consistent actions to enforce HMSA. Survey results indicate differences in the enforcement actions that inspectors would take when faced with a humane handling violation, such as when an animal was not rendered insensible through an acceptable stunning procedure (for example, forcefully striking the animal on the forehead with a bolt gun or properly placing electrical shocks). Specifically, 23 percent of inspectors reported they would suspend operations for multiple unsuccessful stuns with a captive bolt gun, whereas 27 percent reported that they would submit a noncompliance report. GAO's review of noncompliance reports also identified incidents in which inspectors did not suspend plant operations or take regulatory actions when they appeared warranted. The lack of consistency in enforcement may be due in part to the lack of clarity in current FSIS guidance and inadequate training. The guidance does not clearly indicate when certain enforcement actions should be taken for an egregious act—one that is cruel to animals or a condition that is ignored and leads to the harming of animals. A noted humane handling expert has stated that FSIS inspectors need clear directives to improve the consistency of HMSA enforcement. According to GAO's survey, FSIS's training may be insufficient. For example, inspectors at half of the plants did not correctly answer questions about basic signs of sensibility. Some private sector companies use additional tools to assess humane handling and improve performance. FSIS cannot fully identify trends in its inspection funding and staffing for HMSA, in part because it cannot track HMSA inspection funds separately from the inspection funds spent on food safety activities. FSIS also does not have a current workforce planning strategy for allocating limited staff to inspection activities, including HMSA enforcement. FSIS has strategic, operational, and performance plans for its inspection activities, but these plans do not clearly outline goals, needed resources, time frames, or performance metrics, and FSIS does not have a comprehensive strategy to guide HMSA enforcement.
Decisions made in setting requirements very early in a ship's development have an enormous impact on total ownership costs. Total ownership costs include the costs to research, develop, acquire, own, operate, maintain, and dispose of weapon and support systems; the costs of other equipment and real property; the costs to recruit, retain, separate, and otherwise support military and civilian personnel; and all other costs of DOD's business operations. Navy analyses show that by the second acquisition milestone (which assesses whether a system is ready to advance to the system development and demonstration phase), roughly 85 percent of a ship's total ownership cost has been "locked in" by design, production quantity, and schedule decisions, while less than 10 percent of its total costs has actually been expended. (See fig. 1.) Figure 1 depicts the relative apportionment of research and development, procurement, and operating and support costs over the typical life cycle of a ship program (the complete life cycle of a ship, from concept development through disposal, typically ranges from 40 to 60 years). Research and development funds are spent at program initiation and generally comprise only a small fraction of a new ship's total ownership costs. Then, in the next acquisition phase, procurement funds, comprising about 30 percent of total ownership costs, are spent to acquire the new ship. The vast majority of total ownership costs, about 65 percent, consists of operating and support costs incurred over the life of the ship. Personnel costs are the largest contributor to operating and support costs—approximately 50 percent.

Recognizing that fiscal constraints pose a long-term challenge, DOD policy states that the total ownership costs of new military systems should be identified and that DOD officials should treat cost as a military requirement during the acquisition process. This approach, referred to as treating cost as an independent variable, requires program managers to consider cost-performance trade-offs in setting program goals. During the acquisition process, program managers are held accountable for making progress toward meeting established goals and requirements at checkpoints, or milestones, over a program's life cycle. (See app. III for a discussion of the DOD acquisition process.) These goals and requirements are contained in several key documents. The first to be generated is a mission need statement that describes a warfighting deficiency, or opportunity to provide new capabilities, in broad operational terms and identifies constraints, such as crewing, personnel, and training, that may affect satisfying the need. These capabilities and constraints are examined during the initial phase of the program in a second key document, a study called the analysis of alternatives. This study assesses the operational effectiveness and estimated costs of alternative systems to meet the mission need. The analysis assesses the pros and cons of each alternative and their sensitivity to possible changes in key assumptions. The analysis should consider personnel as both a life-cycle cost and a design driver. Systems engineering best practices dictate that the analysis of alternatives be supported by a front-end analysis and trade-off studies so that better and more informed decisions can be made. Using the results of the analysis of alternatives, program objectives are formalized in an operational requirements document.
The operational requirements document, the third of these key documents, specifies those capabilities or characteristics (known as key performance parameters) that are so significant that failure to meet them can be cause for the system to be canceled or restructured. In establishing key performance parameters, DOD officials specify both a threshold and an objective value. For performance, the threshold is the minimum acceptable value that, in the user's judgment, is necessary to satisfy the need. For schedule and cost, the threshold is the maximum allowable value. The objective value is the value desired by the user and the value the program manager works with the contractor(s) to obtain. During our review, DOD was revising its acquisition guidance. On October 30, 2002, the Deputy Secretary of Defense canceled three key DOD documents governing the defense acquisition process and issued interim guidance in a memorandum. DOD officials expect to issue new acquisition guidance in the near future. The Deputy Secretary's interim guidance retains the basic acquisition system structure and milestones, emphasizes evolutionary acquisition, modifies the requirements documents, and makes several other changes. For example, the mission need statement and the operational requirements document are replaced by three new documents: (1) the initial capability document replaces the mission need statement at milestone A, (2) the capability development document replaces the operational requirements document at milestone B, and (3) the capability production document replaces the operational requirements document at milestone C. (See app. III for a discussion of the acquisition process and milestones.) Human systems integration is a systems engineering approach to optimize the use of people. Optimized crewing for ships refers to the minimum crew size consistent with the ship's mission, affordability, risks, and human performance and safety requirements. When initiated from the outset of a new ship acquisition (during concept exploration and prior to establishing key performance parameters) and continued through ship design, human systems integration has the potential to reduce workload, leading to smaller, optimized crews; reduced operating and support costs; and improved operational performance. According to human systems integration experts, for Navy ship acquisitions, human systems integration may begin with a top-down requirements analysis that examines the ship's functions and mission requirements and determines whether human or machine performance is required for each task. By reevaluating which functions humans should perform and which can be performed by technology, human systems integration minimizes personnel requirements while maximizing gains from technological applications. A human systems integration approach also ensures that a person's workload and other concerns, such as personnel and training requirements, safety, and health hazards, are considered throughout the acquisition process. In a recent memorandum, the Assistant Secretary of the Navy for Manpower and Reserve Affairs stated, "failure to incorporate HSI [human systems integration] approaches can only lead to increasing manpower costs in the future that will threaten the ability of the Department to sustain the transformation, readiness and investment priorities we have established." Human systems integration has been used successfully in military and commercial settings.
MANPRINT, the Army's human systems integration program, reports that the Comanche helicopter program, when fielded, will avoid $3.29 billion in operating and support costs ($2.67 billion of it from personnel reductions) as a result of the application of human systems integration. Human systems integration has also been used in airplane cockpit design, aircraft maintenance, and the design of rear-center automobile brake lights. Additionally, foreign navies' efforts, such as those to develop the British Type 23 and Dutch M-class frigates, achieved a 30 to 40 percent reduction in crew size relative to the previous generation of ships by employing a human systems integration approach. DOD's acquisition policy for using human systems integration is general in nature but requires program managers to develop a human systems integration approach early in the acquisition process to minimize total ownership costs. The Navy's acquisition guidance requires that human systems integration costs and impacts be adequately considered along with other engineering and logistics elements beginning at program initiation, but the guidance does not provide specific procedures and metrics. Despite the potential of human systems integration to optimize crew size and reduce total ownership costs, the Navy's use of human systems integration and of goals to reduce crew size varied considerably across the four new ship acquisition programs we examined. Only the DD(X) destroyer program used human systems integration extensively to optimize crewing during the concept and technology development phase of the acquisition. In doing so, the program developed a comprehensive plan that describes the human systems integration objectives, strategy, and scope, and it mandated the approach's use by means of key program documents. The T-AKE cargo ship program was required to apply human systems integration principles to the ship's design but not to the ship's primary mission of intership underway replenishment. In contrast, the JCC(X) command ship and LHA(R) amphibious assault ship programs had not emphasized human systems integration early in the acquisition process or developed a comprehensive human systems integration approach. The Navy's crew size reduction goals for the four ships range from an aggressive goal of about 60 to 70 percent on the DD(X) destroyer to the lack of any formal reduction goal on the JCC(X) command ship and the LHA(R) amphibious assault ship. The inconsistent use of human systems integration to optimize ship crews and the lack of formal crew size reduction goals for three of the four programs we examined represent a missed opportunity to achieve potentially significant savings in total ownership costs. From the inception of the program through the selection of a design agent in 2002, the DD(X) program has had a significant crew size reduction goal and has used human systems integration to identify potential ways to achieve this goal. Requirements for using human systems integration and crew size goals were included in the key acquisition documents to which program managers are held accountable. The program began human systems integration activities in the first acquisition phase, concept and technology development, by inviting industry to develop conceptual designs to meet these goals and produce a human systems integration plan. Subsequently, the Navy restructured the program in November 2001 and is reevaluating the ship's operational requirements, including crew size.
However, the Navy's contract with the design agent continues to specify a significant crew size reduction, calling for a crew of between 125 and 175. These revised crew size requirements still represent a greater than 50 percent reduction compared with the legacy ship the DD(X) is replacing. From the earliest stages of the program and continuing through award of the design agent contract, the program maintained a focus on optimizing crew size. For example:

- The 1993 mission need statement directed that "the ship must be automated to a sufficient degree to realize significant manpower reductions." The document also required a human systems integration-type analysis to recommend options to exploit technology to reduce crewing, personnel, and training requirements and directed that trade-offs to reduce these requirements be favored during design and development.

- The 1998 cost and operational effectiveness analysis (currently known as the analysis of alternatives) included an analysis of the ship crew and personnel requirements for the various alternatives that ultimately influenced the Navy's decision to initially establish an aggressive crew size goal of 95 and to identify human systems integration requirements to be included in the operational requirements document. This goal represents a greater than 70 percent reduction in crew size from that of the Arleigh Burke-class destroyers developed in the 1980s.

- In 1997, the DD(X) operational requirements document specified a crew size goal of between 95 and 150 as a key performance parameter. It also required that human systems integration be used to minimize life-cycle costs and maximize performance effectiveness, reliability, readiness, and safety of the ship and crew.

- In 1997, the program also established a ship crewing/human systems integration integrated process team whose charter requires a top-down functional analysis, the analytical centerpiece of the Navy's human systems integration approach, in the early phases to obtain a major reduction in personnel.

- In 1998, the Under Secretary of Defense for Acquisition and Technology continued to hold DD(X) destroyer program managers accountable for achieving an aggressive crew size reduction when he required validation that the DD(X) crew size will meet the key performance parameter threshold before ship construction begins.

- The Phase 1 solicitation issued in 1998 for trade studies and analyses and the development of two competitive system concept designs required that both contractors provide a human systems integration plan.

- The design agent contract awarded in 2002 requires the contractor to develop and demonstrate a human systems integration engineering effort that addresses the crewing, personnel, training, human performance, sailor survivability, and quality of life aspects of the DD(X) design. It also relaxed the original crew size goal, stating that crewing requirements shall not exceed 175.

To achieve the proposed reductions, the DD(X) program plans to employ human-centered design and reasoning systems, advances in ship cleaning and preservation, a new maintenance strategy, and remote support from shore-based facilities for certain administrative and personnel services. For example, cleaning requirements are expected to be reduced by a ship design that capitalizes on commercial shipping practices such as cornerless spaces and maintenance-free deck coverings.
The ship will also rely on an integrated bridge system that provides computer-based navigation, planning and monitoring, automated radar plotting, and automated ship control. DD(X) program officials stated that their experience in using the human systems integration engineering approach, establishing an aggressive crew size reduction goal early in the acquisition process, and including this goal as a key performance parameter in the operational requirements document has been critical in maintaining a focus on reducing crew size. Moreover, these practices led the program to examine innovative approaches from the beginning and made it possible to hold program managers accountable during program reviews. Program officials anticipate that the emphasis on reducing crew size will help to minimize DD(X) operating and support and total ownership costs once the ship is built and enters the fleet. For illustrative purposes, we calculated that the Navy could avoid personnel-related costs of about $600 million per ship over a 35-year service life if it achieves a crew of 150 sailors rather than requiring the 365 sailors needed to operate its legacy ship, the Arleigh Burke-class destroyer. This could potentially save more than $18 billion for a class of 32 ships (both amounts are in fiscal year 2002 dollars). See appendix V for a comparison of crew functions and workload on the DDG 51 Arleigh Burke-class destroyer and those proposed for the DD(X). DD(X) program officials also stated that, even with sustained early emphasis on crew size reduction and the use of human systems integration for crew optimization, achieving such an aggressive crew size goal remains a significant technological challenge because the program is relying on a number of immature labor-saving technologies, such as those required to conduct damage control and run the ship's computers. Program officials stated that informal goals, or goals established later in the acquisition process, would not have been nearly as effective in getting the program to focus on achieving significant personnel reductions. However, in recognition of the technological challenge of achieving the crew size goal and several other technological challenges, the Navy restructured the DD(X) program in November 2001 to better manage the program's risk. As part of the restructuring, it adopted an acquisition strategy consisting of multiple capability increments, or "flights." The newly restructured program relaxed the crew size goal to between 125 and 175, which still represents a greater than 50 percent reduction below legacy ship levels, for the first of three planned DD(X) flights. While briefings prepared by Navy officials retain the original crew size goals for the third DD(X) flight, it is unclear whether these goals will be kept as key performance parameters in the operational requirements document currently under revision. In developing the T-AKE cargo ship, which is in procurement and is expected to become operational in 2005, the Navy used elements of human systems integration to streamline intraship cargo handling and to refine the requirements for civilian mariners and active-duty personnel. However, human systems integration was not applied to the process of intership underway replenishment, the transfer of cargo between ships while at sea.
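The illustrative DD(X) cost-avoidance figures above can be reproduced with a simple model, sketched below. The roughly $80,000 fully burdened annual cost per sailor is an assumption back-derived from the report's own totals, not a figure taken from the Navy's Cost of a Sailor study, so the sketch should be read as a consistency check rather than an official calculation.

```python
# Minimal sketch of the crew-size cost-avoidance arithmetic used for the
# DD(X) illustration above. COST_PER_SAILOR_YEAR is an assumed fully
# burdened annual personnel cost in fiscal year 2002 dollars, chosen to be
# consistent with the report's totals; it is not an official Navy figure.

COST_PER_SAILOR_YEAR = 80_000  # assumed, FY 2002 dollars

def personnel_cost_avoidance(legacy_crew: int, new_crew: int,
                             service_life_years: int,
                             ships_in_class: int = 1) -> int:
    """Personnel-related costs avoided by operating with a smaller crew."""
    sailors_avoided = legacy_crew - new_crew
    return sailors_avoided * service_life_years * COST_PER_SAILOR_YEAR * ships_in_class

# A DD(X) crew of 150 versus the 365 sailors on an Arleigh Burke-class
# destroyer, over a 35-year service life:
print(f"per ship: ${personnel_cost_avoidance(365, 150, 35) / 1e6:,.0f} million")
print(f"32 ships: ${personnel_cost_avoidance(365, 150, 35, 32) / 1e9:,.1f} billion")
# Output: roughly $600 million per ship and more than $18 billion for the
# class, matching the report's figures.
```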
In addition to excluding underway replenishment, early acquisition documents for the T-AKE cargo ship program did not establish specific goals for reducing crew size, although they required the use of civilian mariners or U.S. merchant mariners instead of active-duty Navy personnel and mandated the examination of cargo handling innovations to reduce crew workload. Use of merchant mariners or Military Sealift Command personnel generally results in a smaller crew because these organizations employ more experienced seamen, have reduced watchstanding requirements, and use a different maintenance and training philosophy. The T-AKE will be operated by the Military Sealift Command, and its projected crew will be between 5 and 20 percent smaller than the crews of the command's legacy ships and about 60 percent smaller than the crews of the legacy ships previously operated exclusively by Navy sailors. The following examples illustrate the strengths and limitations of the program's use of human systems integration early in the acquisition process:

- The 1992 mission need statement lacked a direct reference to human systems integration, although it did indicate that the ship's size would be the result of various trade-offs, including cost and crew size, and it required that the ship's design incorporate modern propulsion, auxiliary, and cargo handling systems to minimize operating and maintenance personnel requirements.

- The 2001 operational requirements document stated that "human engineering principles and design standards shall be applied to the design of all compartments, spaces, systems, individual equipment, workstations and facilities in which there is a human interface." However, this document also required the T-AKE cargo ship to use U.S. Navy standard underway replenishment equipment because of the need to interface with other U.S. Navy and allied ships, the lack of any equivalent commercial system, and the costs to redesign existing Navy equipment and maintain nonstandard equipment.

As a result, human systems integration was not applied to one of the main drivers of crew size: the number of crewmembers required to perform connected replenishment at each replenishment station. Program officials indicated that, because intership underway replenishment involves the interface between the T-AKE cargo ship and all other ship classes requiring replenishment at sea, redesign of the Navy's process of underway replenishment was not within their purview and, therefore, was not addressed in the program's human systems integration analyses. Instead, the program's focus was to ensure that the T-AKE cargo ship's design met the current requirements for performing underway replenishment and had the flexibility for future equipment modification. To address underway replenishment across ship platforms, the Navy in 2000 established a naval operational logistics integrated product team whose mission is to establish policy and doctrine for future operational systems and ensure the integration of operational logistics systems across ships. Since reexamining intership underway replenishment was beyond the scope of the ship program, program personnel said they focused on identifying ways to reduce crew workload. In the first acquisition phase, four contractors prepared trade studies on the integration of cargo handling functions on the ship. In the second acquisition phase, one of the contractors, National Steel and Shipbuilding Company, was awarded the contract to design and construct the ship.
Ultimately, labor-saving innovations such as item scanners; an automated, rather than paper-based, warehouse management inventory system; and safer, easier-to-operate elevator doors were adopted. Although the T-AKE cargo ship is expected to require fewer personnel than its legacy ships, early acquisition documents did not establish a specific crew size goal as a key performance parameter and thus did not hold the program manager accountable for specific reductions. Rather, the operational requirements document required that the T-AKE be crewed largely by U.S. merchant mariners or Military Sealift Command civilian mariners. The Navy currently estimates that the T-AKE will be crewed by 172 individuals: 123 civilian mariners, 13 active-duty sailors in the military department who perform cargo management/inventory functions, and 36 active-duty sailors in the aviation detachment who perform intership cargo transfer using a helicopter (vertical replenishment). The T-AKE cargo ship's projected crew size of 172 personnel will be somewhat smaller than that of its Military Sealift Command legacy ships, the T-AE 26 Kilauea-class ammunition ships and the T-AFS 1/8 Mars-class and Sirius-class combat stores ships, which have crews of 182 to 215 personnel and also use civilian mariners. The T-AKE's crew size is significantly smaller than when these legacy ships were crewed by active-duty personnel. When crewed entirely by active Navy personnel, these ships had crews of 435 and 508 sailors, respectively. Despite the smaller crew size, the T-AKE will have a greater carrying capacity for dry and refrigerated cargo than its legacy ships. Each T-AKE ship will be able to carry at least 63 percent of the combined cargo capacity of a T-AFS 1 and a T-AE 26. Although the ship program did not perform the top-down analyses recommended by human systems integration experts to optimize crewing, it did use elements of the approach to finalize staffing requirements. To finalize the requirement for civilian mariners, program personnel performed a functional analysis (which identified ship functions and their crew size requirements) and ultimately determined that the initial crew size estimate developed by the Navy could be reduced by 12, resulting in a final requirement for 123 civilian mariners. The size of the military department is based on an analysis that projects workload and personnel requirements for every ship function during the most labor-intensive operational scenarios and then allocates the workload and personnel requirements to the minimum number of billets and skill levels. The recently canceled JCC(X) command ship program made very limited use of human systems integration to optimize crew size and planned to wait until preliminary design in the next acquisition phase to begin human systems integration activities. The program also did not hold program managers accountable for reducing crew size below that of the legacy command ships. The following are examples:

- The mission need statement did not require the use of human systems integration. Instead, the document required that the ship "be automated wherever practical to reduce workload and manpower requirements" and directed that operation by Military Sealift Command personnel, rather than Navy personnel, be considered for selected functions. However, the document stated that "changes to manpower requirements are not expected."
- The analysis of alternatives examined crew sizes ranging from 60 percent smaller to 50 percent larger than those of current command ships, as well as the use of civilian mariners to perform JCC(X) crew functions in order to reduce crew size. The analysis found that using a mix of military and civilian personnel rather than all military personnel would reduce personnel costs by nearly a third, saving $2.3 billion for four ships over a 40-year service life. However, the analysis did not include a full human systems integration assessment of each design alternative.

- At the time of its cancellation, the program had not received approval of its operational requirements document, which would have established key performance parameters.

Program officials stated that although achieving crew size reduction was not included in key program documents, they expected to achieve some crew size reductions on the JCC(X) when compared with existing command ships through the use of modern, more reliable equipment, for example, diesel propulsion instead of steam propulsion. Yet, despite the program's informal interest in reducing the size of the crew needed to operate the ship, the analysis of alternatives did not use human systems integration to examine one of the main drivers of crew size: the size of the embarked command staff. The total crew size of the JCC(X) equals the sum of the embarked joint command staff and the crew needed to operate the ship and perform basic ship functions. Navy analyses show that the crew size needed to operate the ship depends upon the joint command staff size and the mission equipment that is to be maintained by the crew. Yet, all of the Navy analyses examined joint command staff alternatives, ranging from 500 to 1,500 staff, that were larger than the fleet commander's staff of 285 to 449 currently embarked on existing command ships. None of the analyses used human systems integration to determine the optimal size of the joint command staff. A limited-scope manning study performed for the program acknowledged these limitations:

"The HSI team was not part of a larger JCC(X) System Engineering effort, as would be expected in a full-up proposal or system development activity. The HSI team also did not have contact with potential JCC(X) users or with Navy/Joint HSI Team members, as would be expected and desired in a normal system acquisition environment. This was due to the unique nature of a very limited scope manning study with very limited funds."

The study also urged the program to adopt a human systems integration approach, stating that "a human-centered design approach, implemented at the front-end and as part of an integrated system engineering process, will yield an optimal crew size." The study further stated that the same human systems integration tools could be effectively used to optimize the size of the embarked command staff. JCC(X) command ship program officials stated that the program planned to employ human systems integration to optimize crew size in the next acquisition phase by contracting with industry to perform a functional analysis. However, according to Navy officials, the program was canceled before these efforts began, in part because of the unacceptably high crew size estimated for the program. The LHA(R) program has not yet developed a comprehensive human systems integration strategy to outline the program's human systems integration objectives and guide its efforts.
In addition, officials told us that very little human systems integration work was done early in the acquisition process because the program plans to begin human systems integration activities during preliminary design in the next acquisition phase, called system development and demonstration. Also, early acquisition documents for the LHA(R) amphibious assault ship program did not establish formal goals to reduce the number of personnel required to operate the ship. The following are examples:

- The mission need statement required the use of human systems integration to optimize manning. However, it also stated that no changes to Navy personnel requirements were expected. Currently, the program plans only not to exceed the crew size of the older ships that perform similar missions. These legacy LHA 1-class ships have a crew of about 1,230 to operate the ship and can embark about 1,700 Marines.

- The analysis of alternatives stated that for the LHA(R) to achieve major reductions in personnel, significant new technology would be required, along with research and development funds to integrate this technology into the LHA(R) design and changes in culture (organization and procedures) to adapt the reduced crew size practices of the commercial sector to the naval environment.

- At the time of our review, the operational requirements document for the LHA(R) had not been developed.

The Navy's plans for the LHA(R) are not in concert with the Chief of Naval Operations' desire for major reductions in the personnel levels for all new shipbuilding programs. In August 2002, the Chief of Naval Operations commented on the size of the LHA 1 (the legacy ship that the LHA(R) is replacing), saying, "I don't want any more ships like that. The more low technology systems that are on it, the more people we will need. And we will need more crewmembers for support services. It [the LHA-1's replacement] will be built from the keel up to support the type of striking capability that you need in your aviation arm. It is going to be a totally different ship." Program officials offered two major reasons for not conducting human systems integration early in the acquisition process: (1) they believed it was not appropriate to start human systems integration during the very early phases of the acquisition program (i.e., in concept and technology development) and (2) the program lacked funding to conduct human systems integration activities in the first acquisition phase. Program officials plan to conduct human systems integration efforts during the system development and demonstration acquisition phase, when the program begins preliminary design efforts. Some of these efforts, scheduled to begin in February 2003, are to include a top-down requirements analysis and a total ship manpower assessment. In contrast to the opinions of LHA(R) program officials, the Navy's human systems integration experts stated that human systems integration is a critical part of planning and design in the early stages of acquisition, including the concept and technology development phase. In addition, experience with the DD(X) program shows that the potential personnel-related cost savings resulting from the application of human systems integration early in a program can be significant. Moreover, experts stated that every program, regardless of its funding levels or its reliance on legacy systems, can benefit from a comprehensive human systems integration approach, especially programs developing crew-intensive platforms such as the LHA(R).
The program managers and the human systems integration experts we spoke to identified four factors that inhibit the Navy's ability to consistently implement human systems integration across programs. These factors are (1) neither DOD nor Navy acquisition policies establish specific requirements for using human systems integration, such as its timing and whether the approach should be addressed in the key acquisition documents; (2) funding challenges often result in decisions to defer human systems integration activities and to use legacy subsystems when acquiring new ships to save near-term costs instead of investing in research and development to reduce costs over the long term; (3) DOD and Navy oversight of human systems integration activities is limited, and the Naval Sea Systems Command's role in certifying that ships delivered to the fleet have optimum crew sizes is unclear; and (4) the Navy lacks an effective process to change its long-standing culture and the extensive network of policies and procedures that have institutionalized current manning practices. As a result, some programs we examined set goals not to exceed the crew size of 30-year-old ships, waited until preliminary design in the second acquisition phase to begin human systems integration efforts, and excluded primary and secondary ship functions from rigorous analysis. In recognition of these impediments, the Navy has taken steps to resolve some of these issues. Recent DOD and Navy acquisition guidance gives program managers latitude about the timing and extent of human systems integration activities and about whether the approach should be addressed in key acquisition documents. DOD guidance on the role of human systems integration in acquisition is contained in two documents, the Defense Acquisition memorandum and the Interim Defense Acquisition Guidebook, issued by the Deputy Secretary of Defense, both dated October 30, 2002. Compliance with the Defense Acquisition memorandum is mandatory; compliance with the Interim Defense Acquisition Guidebook is discretionary. Both documents state that program managers will develop a human systems integration strategy early in the acquisition process to minimize total ownership cost. Neither document, however, specifies how early in the process these efforts should begin or requires that human systems integration analyses be performed on the various alternatives considered in the formal analysis of alternatives. The Navy's main acquisition instruction requires that human systems integration costs and impacts be adequately considered along with other engineering and logistics elements beginning at program initiation but does not provide specific procedures. The Navy's section of the acquisition deskbook provides more detailed guidance on human systems integration (such as providing a format for the human systems integration plan and discussing the contents of a human systems integration program). However, because these sources provide only broad guidelines or are discretionary, program managers can decide when, how, and to what extent they will use human systems integration in their acquisition programs. The Navy also has developed other guidance on using human systems integration, but its use is also discretionary. For example, human systems integration experts developed a guide for the Office of the Chief of Naval Operations, which states that a human systems integration assessment and trade-off of design alternatives should be conducted during the first acquisition phase.
The Surface Warfare Program Manager's Guide to Human Systems Integration also states that human systems integration cost, schedule, and design risk areas for each alternative concept should be identified and evaluated. The guidance also recommends that human systems integration assessments be conducted at each milestone decision review. Because of the wording of DOD guidance and the discretionary nature of some Navy guidance, new ship program managers vary in when they use human systems integration during ship development. For example, the DD(X) program specified the use of the approach in its mission need statement, and its analysis of alternatives further specified human systems integration requirements to be included in the operational requirements document. In contrast, the program managers for both the JCC(X) command ship and the LHA(R) amphibious assault ship told us that they planned to begin their human systems integration efforts during preliminary design, after the design alternative has been selected in the next acquisition phase, system development and demonstration. Neither program conducted human systems integration analyses of the alternative designs during the analysis of alternatives. As a result, program officials lacked information on how the alternatives compared with respect to proposed crew size and on how crew size would affect total ownership costs. Both JCC(X) and LHA(R) program officials cited the challenges of funding a new acquisition program as a barrier to using human systems integration to optimize crew size and thereby reduce total ownership cost. These challenges affect whether programs conduct crew-optimizing human systems integration activities in the earliest phases of acquisition and whether a program will choose to invest in labor-saving technologies. JCC(X) program officials told us that achieving personnel reductions and using human systems integration to optimize crew size could increase acquisition costs. The Navy's human systems integration experts stated that program managers have long been given incentives to hold down acquisition costs without considering how such choices may affect operating and support costs, such as personnel-related costs, over the life of the ship. According to the Navy's human systems integration experts, labor-saving technology may add to the acquisition cost of a ship but may also reduce the operating and support costs incurred over the ship's service life. Whether to use technology or sailors to perform a function should be determined by a systematic analysis of costs and capabilities performed as part of the human systems integration functional analysis, an effort not undertaken by the JCC(X) command ship program. Similarly, at the time the LHA(R) program was initiated in 2001, the Navy decided not to invest in human systems integration activities and research and development on new labor-saving technologies for the ship. The program plans to capitalize, where appropriate, on systems already in development for other ships, such as the DD(X) destroyer and the CVN(X) aircraft carrier, but it has not yet identified any labor-saving technologies or processes that might be adapted from these programs. Program officials said the program was not resourced to develop new technologies, having received only $20 million in research and development funds from program initiation through fiscal year 2002.
However, the up-front savings of not investing in research and development and human systems integration activities must be weighed against the higher operating and support costs incurred over the life of the ship and the forgone capability and quality-of-life improvements that can accompany new technology and human-centered design. For illustrative purposes, we calculated that a nominal 25 percent reduction in a 1,245-person crew could provide a personnel cost avoidance of nearly $1 billion over the service life of a ship, or nearly $4 billion for a 4-ship class. In addition, DD(X) destroyer program officials were uncertain about the extent to which programs now in development outside the DD(X) destroyer family of ships will be able to leverage its new technology, citing the costs associated with adapting technology to new platforms that perform different missions. Rather, DD(X) program officials told us that it is imperative for new ship programs to use human systems integration to inform such decisions. Several offices within DOD and the Navy have an advisory role regarding the implementation of human systems integration, although they lack the authority to require that it be used to optimize crew size and that it be addressed in specific acquisition documents or at each acquisition milestone. The Office of the Secretary of Defense (Personnel and Readiness) and the Chief of Naval Operations' Acquisition and Human Systems Integration Requirements Branch (Acquisition Division) both review new program acquisition documents and provide guidance on human systems integration policy. Additionally, the Office of the Secretary of Defense (Personnel and Readiness) assists in the development of human systems integration policy and addresses policy issues at meetings of defense acquisition executives. The Office of the Assistant Secretary of the Navy (Research, Development, and Acquisition) Chief Engineer uses human systems integration in its "system of systems" examination of capability above the individual ship level to ensure that systems can function together across various ships to perform the mission. In recognition of the need for an organization within the ship community to "lead the effort to institutionalize human systems integration…," the Navy, in October 2002, created the Human Systems Integration Directorate within the Naval Sea Systems Command, whose missions include establishing human systems integration policy and standards for the Naval Sea Systems Command; ensuring the implementation of human systems integration policy, procedures, and best practices; assisting program offices in developing and sustaining human systems integration plans; and certifying that ships and systems delivered to the fleet optimize ship crewing, personnel, and training and promote personnel safety, survivability, and quality of service. Because of its role as the certifying authority for human systems integration within the Naval Sea Systems Command, the directorate may have more authority than the previously mentioned organizations to ensure that human systems integration is implemented. However, the memorandum establishing the directorate and the instruction specifying its functions do not specify how certification will be accomplished, the acquisition stage at which it will be required, or the consequences of noncompliance.
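The nominal 25 percent calculation above follows the same arithmetic as the DD(X) sketch earlier. Reusing that sketch's personnel_cost_avoidance helper and its assumed $80,000 fully burdened annual cost per sailor, and further assuming a 40-year service life (the report does not state the life used for this illustration), yields figures consistent with the report's:

```python
# Reuses personnel_cost_avoidance() and COST_PER_SAILOR_YEAR from the
# earlier DD(X) sketch. The 40-year service life is an assumption; the
# report does not state the service life behind this illustration.

legacy_crew = 1_245
reduced_crew = round(legacy_crew * 0.75)  # a nominal 25 percent reduction

print(f"per ship: ${personnel_cost_avoidance(legacy_crew, reduced_crew, 40) / 1e9:.2f} billion")
print(f"4 ships:  ${personnel_cost_avoidance(legacy_crew, reduced_crew, 40, 4) / 1e9:.2f} billion")
# Output: about $1.0 billion per ship and about $4.0 billion for the class,
# matching the "nearly $1 billion" and "nearly $4 billion" figures above.
```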
Navy acquisition officials also identified the layers of Navy policies, procedures, and instructions that affect ship crew levels, as well as cultural resistance to novel concepts, as impediments to optimizing ship crews. They told us that even when human systems integration is used in the early stages of an acquisition program to identify ways to reduce crew size, it is difficult to achieve a consensus among the numerous stakeholders within the Navy to change long-standing policies and practices so that labor-saving approaches or technologies can be implemented. To facilitate this process, the DD(X) destroyer program established a forum to evaluate policy barriers to proposed innovations and facilitate needed changes. However, this effort was limited to selected ships. Other programs, such as the LHA(R) amphibious assault ship and the JCC(X) command ship, had not established a similar forum to resolve the policy barriers to optimizing crewing on these ships. As a result, the Navy currently lacks an ongoing process to facilitate the examination of outmoded policies and procedures that may impede optimized crewing in all new ship acquisition programs. Navy officials explained that changing policies and procedures is a complex and time-consuming task because the current way of doing business has been incorporated in instructions at all levels in the Navy, ranging from the Secretary of the Navy to the commanders of the Atlantic and Pacific Fleets, and across a number of areas, such as recruiting, retention, training, quality of life, and the environment. In addition, new ways of doing business, such as those envisioned for the DD(X) destroyer, will affect and require modifications to Navy doctrine, tactics, and operational requirements. Furthermore, proposed changes must be evaluated for compliance with governing statutes in such areas as compensation, occupational safety and health, and aviation. As such, any change involves numerous stakeholders who must be consulted and grant approval. For example, DD(X) officials told us that it took about 18 months to coordinate with numerous stakeholders to change applicable policies to reduce the number of crewmembers required during flight operations from 48 to 15. Moreover, officials told us that this change is just the beginning, since the DD(X) destroyer program has identified numerous Navy policies and procedures across a wide spectrum of topics that need to be changed in order to adopt the innovations proposed by industry to meet the DD(X)'s cost and capability requirements. Officials with the other programs we examined also viewed Navy policies as a barrier to optimized crewing. JCC(X) command ship program officials reported that current Navy policy and practice would have been a barrier to implementing potential crew size reductions had this program gone forward. Two examples cited by program officials are bridge watchstanding and main propulsion machinery monitoring. At present, Navy practice for bridge watch requires approximately 11 personnel, in contrast to commercial practice, which requires 1 person on watch and 1 on standby. Similarly, Navy practice for machinery monitoring requires personnel in the machinery space at all times to ensure that power is available. This contrasts with commercial practice, which permits putting machinery on automatic and using sensors with alarms routed to a watchstander's stateroom during certain hours.
Officials stated that implementing these commercial practices would have required evaluating their appropriateness for a Navy operating environment and, if they were approved, modifying existing policies and procedures. Furthermore, the LHA(R) analysis of alternatives concluded that significant changes in organization and procedures are crucial to achieving a substantial reduction in crew size. Cultural change is a particular challenge for the LHA(R) program because the amphibious mission is complex and both Navy and Marine organizations would be involved in developing and implementing changes. Navy officials stated that current funding practices, in which personnel costs are funded from centralized accounts and not out of the operating fleets' budgets, do not foster an awareness of the true cost of having sailors on board ships and encourage viewing sailors as a "free resource." Additionally, because traditional, time-tested methods and crewing levels have proven successful in the past, officials told us that Navy commanders have little incentive to assume the risks associated with adopting new ways of accomplishing shipboard tasks with fewer crewmembers, especially when they lack awareness of and accountability for personnel costs. Because of the magnitude of the changes needed to reduce and optimize crewing on the DD(X) destroyer, the program established an effort to identify and resolve policy barriers to implementing labor-saving approaches that conflict with current policy, statutes, or practice. This effort includes (1) reaching out to Navywide personnel development and training organizations and to the Atlantic and Pacific Fleet commanders and (2) establishing the DD(X) Policy Clearinghouse, a Web-based tool to facilitate collaboration with multiple stakeholders and resolve policy impediments to implementing innovations planned for the DD(X) destroyer. The DD(X) clearinghouse was recently transferred to the Naval Sea Systems Command's Human Systems Integration Directorate. However, there are currently no requirements for this forum to address the policy barriers to optimized crewing encountered in all new ship acquisitions. Given the Navy's recapitalization challenges, efforts to control personnel costs and minimize total ownership costs are becoming increasingly important. Applying human systems integration principles to optimize crew size has the potential to result in a host of cost and operational benefits, including saving billions of dollars by reducing total ownership costs and increasing operational performance and ship maintainability. The experience to date in the DD(X) destroyer program shows that requiring human systems integration from the earliest stages of a program (during concept and technology development) and using the results to establish a crew size reduction goal as a key performance parameter are effective strategies for holding program managers accountable during program reviews for making significant progress toward reducing crew size. The DD(X) experience also shows that even when these practices are followed, a program will still face challenges in achieving these goals and encounter pressures to relax them as the system design progresses, thereby supporting human systems integration experts' view that human systems integration plans and activities should receive continued review and focus throughout the acquisition process.
In contrast, programs such as the JCC(X) and LHA(R) that do not use human systems integration early and do not hold program managers accountable during program reviews for crew size reduction are less likely to achieve meaningful reductions in crew size. Unless the Navy more consistently applies human systems integration early in the acquisition process and establishes meaningful goals for crew size reduction, the Navy may miss opportunities to lower total ownership costs for new ships, which are largely determined by decisions made early in the acquisition process. The Navy's varied approach to applying human systems integration has occurred partly because Navy guidance allows program managers considerable discretion in determining the extent to which they apply human systems integration principles in developing new systems. In the absence of clear requirements that human systems integration will be a key feature of all future acquisition programs, efforts to optimize crew size will continue to vary because of the competing pressures placed on program managers, and the Navy is likely to continue to miss opportunities to reduce personnel requirements for future ships. As a result, the Navy's funding challenges may be exacerbated, and it may not be able to build or support the number of ships it believes are necessary to support the new defense strategy. Although the Navy's recent effort to establish a focal point for human systems integration policy within the Naval Sea Systems Command is a positive step, the success of this office will depend on its authority to influence acquisition programs in their initial stages. Because the instruction establishing this office does not clearly explain the process the office will use to certify that ships delivered to the fleet will have optimized crews, there is a risk that the office may not have sufficient leverage to influence new programs in their early stages and that this may result in missed opportunities to reduce crew size and achieve long-term cost savings. Even when the Navy uses a disciplined human systems integration process early in an acquisition program to identify ways to optimize crew size, implementation of new technologies and procedures is often hindered by the Navy's culture and traditions, which are institutionalized in a wide array of policies and procedures affecting personnel levels, maintenance requirements, and training. In recognition of these barriers, the DD(X) program and the operational logistics community have established processes to address the barriers for their particular ship or community. However, not all new ship acquisition programs have developed or have access to such a forum to facilitate removing barriers to optimized manning and to ensure that costly, outdated policies and procedures are systematically reexamined as new innovations are developed. To ensure that the nation's multibillion-dollar investment in Navy ships maximizes military capability and sailor performance at the lowest feasible total ownership cost, we recommend that the Secretary of the Navy develop and implement mandatory policies on human systems integration requirements, standards, and milestones.
Specifically, for each new system the Navy plans to acquire, the Secretary of the Navy should require that

- a human systems integration assessment be performed as concepts for the system are developed and alternative concepts are evaluated;

- human systems integration analyses, including trade-off studies of design alternatives, be used to establish an optimized crew size goal that will become a key performance parameter in the program's requirements document; and

- human systems integration assessments be updated prior to all subsequent milestones.

To strengthen the Naval Sea Systems Command's role in promoting the use of human systems integration for new ship systems, we recommend that the Secretary of the Navy require the command to clarify the Human Systems Integration Directorate's role in and process for certifying that ships and systems delivered to the fleet optimize ship crewing. To facilitate the review of possibly outdated policies and procedures as new labor-saving innovations are identified through human systems integration efforts, we recommend that the Secretary of the Navy require that the Naval Sea Systems Command's Human Systems Integration Directorate establish a process to evaluate or revise existing policies and procedures that may impede innovation in all new ship acquisitions. In commenting on a draft of this report, DOD agreed with our recommendations and indicated that actions were underway or planned to implement them. DOD stated that actions taken in response to our recommendations would enhance ongoing human systems integration initiatives; ensure more consistent application of human systems integration processes across all ship acquisition programs; and lead to optimized ship crews, increased system performance, and reduced life-cycle costs. The Navy intends to implement our recommendation that it require ship programs to use human systems integration to establish crew size goals and help achieve them, in part, by developing a new program called SEAPRINT (Systems Engineering, Acquisition and PeRsonnel INTegration), modeled after the Army's MANPRINT program that we cite in this report. The Navy's SEAPRINT program will develop Navywide policy that identifies, mandates, and establishes accountability for human systems integration analyses. This policy will mandate that human systems integration be addressed in a specific plan before the acquisition's earliest milestone; in the initial capabilities document (formerly the mission need statement); in the capabilities development document (formerly the operational requirements document); and in assessments performed as part of concept exploration and development and updated prior to all subsequent milestones. DOD also stated that it endorses a manpower-related key performance parameter for all new ship acquisition programs. In response to our recommendation that the Navy clearly define human systems integration certification standards for new ships, DOD stated that the Navy is developing technical human systems integration criteria and metrics that will be used for measuring and certifying that ships and ship systems meet human systems integration standards.
With regard to our recommendation that the Navy formally establish a process to examine and facilitate the adoption of labor-saving technologies and best practices across Navy systems, DOD stated that the Navy has established a new human systems integration clearinghouse, implemented a pilot study using the clearinghouse, and involved stakeholders from across the Navy. DOD also provided technical comments, which we incorporated where appropriate. DOD's comments are included in appendix VI of this report. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Navy; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-4402 or e-mail me at [email protected]. Key staff members who contributed to this report were Roderick Rodgers, Jacquelyn Randolph, Suzanne Wren, Mary Jo LaCasse, Charles Perdue, and Jane Hunt. To assess the Navy's use of human systems integration principles to optimize crews and its goals to reduce crew size on the four new ship programs we were asked to review, we obtained and analyzed key acquisition documents, such as mission need statements, analyses of alternatives, and operational requirements documents, as well as human systems integration plans and analyses. We also interviewed Naval Sea Systems Command and Military Sealift Command officials who are responsible for the DD(X), T-AKE, JCC(X), and LHA(R) programs to discuss the use of human systems integration and crew size goals. We obtained current ship crewing documents from the Navy's Manpower Analysis Center and the Military Sealift Command and compared the crew size goals for the four ship programs we reviewed with the crew sizes of older ships that perform similar missions. We also obtained crew sizing and workload data on the Arleigh Burke-class destroyer program from the Naval Sea Systems Command to compare with the contractor's crew size estimate for the DD(X). To understand the extent to which the T-AKE's primary mission of underway replenishment affects crew size, we interviewed (1) experts from the Underway Replenishment Department at the Naval Surface Warfare Center (Port Hueneme Division) and the National Steel and Shipbuilding Company (which designed and will build the T-AKE) and (2) a subject matter expert on Navy underway replenishment. To gain an understanding of operational logistics and cargo storage and warehousing, we interviewed officials from the Chief of Naval Operations (Strategic Mobility/Combat Logistics) and the St. Onge Company (a subcontractor for the T-AKE ship program) and visited the Defense Distribution Depot Susquehanna, Pennsylvania, one of the Department of Defense's (DOD) largest and most automated distribution centers. To obtain information on the Navy's methods of calculating total ownership costs, we interviewed officials from the Naval Center for Cost Analysis and the Center for Naval Analyses. To calculate the ship crewing cost avoidance potential for the DD(X) and LHA(R) programs, we used data from the Navy's Cost of a Sailor study to capture comprehensive personnel costs and converted the data to fiscal year 2002 dollars.
To evaluate factors that may impede the Navy's use of human systems integration principles, we obtained and analyzed DOD, Joint Staff, and Navy systems acquisition directives, instructions, and guidance (e.g., the Internet-based Defense Acquisition Deskbook and the Program Management Community of Practice). We reviewed the interim defense acquisition guidance as it pertains to the acquisition process, human systems integration, and total ownership cost. We did not assess the ship programs' compliance with the several prior versions of DOD and Navy acquisition guidance, but we did evaluate the extent to which human systems integration was applied and whether crew size goals were established. We also obtained and reviewed numerous articles on military and civilian applications of human systems integration. To obtain information on the formulation and oversight of human systems integration policy and guidance, we met with officials from the offices of the Secretary of Defense; the Assistant Secretary of the Navy for Research, Development, and Acquisition; the Assistant Secretary of the Navy, Chief Engineer; and the Chief of Naval Operations (Acquisition and Human Systems Integration Requirements Branch). To obtain additional information on the benefits of human systems integration and best practices, we interviewed subject matter experts with the Naval Sea Systems Command's Human Systems Integration Directorate; the DD(X) Program Office; the Army's Office of the Deputy Chief of Staff for Personnel, Manpower and Personnel Integration (MANPRINT) Directorate; Carlow International Incorporated; and the Office of Naval Research's Human Systems Science and Technology Department, and we attended the American Society of Naval Engineers Conference on Human Systems Integration. To gain insight on labor-saving technologies and the changes to policies and procedures required to implement these innovations, we met with officials from the Naval Sea Systems Command's SMARTSHIP Program Office; toured, and met with officials at, the Office of Naval Research's Afloat Lab in Annapolis, Maryland; and met with officials responsible for the DD(X) Policy Clearinghouse and with the Naval Sea Systems Command's Human Systems Integration Directorate. We discussed the funding for human systems integration with the Naval Sea Systems Command program managers for the four ship programs we reviewed. We conducted our review from June 2002 through April 2003 in accordance with generally accepted government auditing standards. In 1995, the Navy established the 21st Century Surface Combatant program to develop the next generation of surface combatants that would replace retiring destroyers and frigates on a timely basis. In November 2001, the Navy restructured this program from one intended to develop a single ship class of 32 ships into its current form, known as the DD(X). The new program aims to develop and acquire three new classes of surface combatants: the DD(X) as the centerpiece, a cruiser called the CG(X), and a smaller littoral combat ship. The first DD(X) destroyer is to be procured in fiscal year 2005 and enter service in fiscal year 2011. The initial DD(X) is viewed as a "test bed" for the host of new technologies under development. The Navy plans to employ a spiral acquisition strategy for the ship class in which new technology will be phased in over three distinct ship flights.
Plans call for the DD(X) destroyer to have a number of new features and technologies, including an advanced electric-drive/integrated power system for propelling the ship that could become the basis for applying electric-drive technology more widely throughout the fleet; labor-saving technologies that may permit the ship to be operated with a crew of 125 to 175 people instead of the more than 350 needed to operate current Arleigh Burke-class (DDG-51) destroyers; a new hull design for reduced detectability; two new 155-mm Advanced Gun Systems for supporting Marine forces ashore; and 128 vertical-launch tubes for Tomahawk cruise missiles and other weapons. The Navy is now reevaluating many of the ship's operational requirements and cost estimates (which were determined and approved under the earlier DD-21 program) and may make substantial changes to the originally envisioned capabilities, including relaxing the crew size and detectability goals, changing the type of gun and amount of munitions carried, and reducing the number of vertical launch tubes. Previously, the Navy projected the unit procurement cost for the DD-21 destroyer to be not more than $750 million in fiscal year 1996 dollars (the equivalent of about $795 million in fiscal year 2001 dollars)—somewhat less than the $950 million unit procurement cost of today's Arleigh Burke-class destroyers. The DD-21 was also envisioned to have an operating and support cost of not more than $6,000 per hour—about one-third less than that of the Arleigh Burke class, in large part because of the smaller crew planned for the future destroyer. In April 2002, the Navy selected Northrop Grumman Ship Systems as the design agent for the DD(X), and the program entered detailed design.

The T-AKE cargo ship is the new combat logistics force ship to be operated by the Military Sealift Command. The ship's primary mission is to shuttle food, ammunition, repair parts, supplies, and limited quantities of fuel to station ships and combatants. The new ship will replace T-AE 26 Kilauea-class ammunition ships and T-AFS 1/8 Mars-class and Sirius-class combat stores ships in the Military Sealift Command. The ship's secondary mission is to operate with an oiler (T-AO 187 Kaiser-class) to provide logistics support to a carrier battle group. In this capacity, the T-AKE will replace AOE 1 Sacramento-class ships. The ship program initiated development in 1995 and began procurement in October 2001. The Navy has purchased 3 of the 12 planned ships for a total of almost $1 billion, with delivery expected in fiscal years 2005 and 2006. Current plans are to purchase the 4th through 12th ships between fiscal years 2003 and 2007 for delivery between fiscal years 2006 and 2010. Once all are purchased and delivered, T-AKE cargo ships will represent 41 percent of the recapitalized combat logistics force fleet (at full operating status).

Military Sealift Command officials mentioned several factors—mission requirements and personnel policies—that explain why, in comparison to the Navy, they are able to operate combat logistics force ships with smaller crews. Logistics ships in the Military Sealift Command have fewer missions and therefore can operate with smaller crews. For example, unlike Navy ships, Military Sealift Command logistics ships do not carry weapons, and therefore their crews do not require weapon operators.
Military Sealift Command ships also incorporate several other crew reduction practices, including an unattended engine room, minimal bridge watch through use of integrated bridge system technology, self-service laundry facilities, and food service initiatives. Command officials also said that their personnel policies make civilian mariners more experienced than their Navy counterparts. Specifically, because there are no personnel policies requiring job rotation or mandating that individuals leave the service if they are not promoted ("up or out"), civilian mariners are likely to have been in their current jobs longer than active-duty Navy personnel. The Military Sealift Command's operating policies also enable it to operate cargo ships with smaller crews than the Navy does. For example, command officials said that their policy requires 9 crewmembers per underway replenishment station, whereas the Navy requires 20 per station. The Military Sealift Command also does not assign a safety officer to each underway replenishment station as the Navy does.

In November 1999, the Navy established the Joint Command and Control (Experimental), or JCC(X), program to replace the Navy's four aging command ships built in the late 1960s and early 1970s. In addition, the JCC(X) was intended to provide an afloat platform for performing joint command and control functions, such as those performed by a joint force commander, without the need to obtain permission from host countries to establish a land-based headquarters operation. By November 2001, the Navy had received the Office of the Secretary of Defense's endorsement for an afloat command capability and completed its formal analysis of alternatives. This analysis showed that the assigned Navy crew (the ship's operators) would account for roughly half the life-cycle cost of a JCC(X). It also showed that a mix of Navy sailors and civilian mariners would be capable of performing the crew functions at two-thirds of the personnel cost, saving about $2 billion for four ships over a 40-year service life. The analysis further estimated that a newly designed ship sized for an embarked command staff of about 800 (these people are in addition to the ship's crew) would cost about $1 billion for a lead ship in fiscal year 2006 and $850 million for a follow-on ship if three were built. Subsequent to this analysis, the Navy's draft 2004 budget plan eliminated funding for the JCC(X) and instead directed another ship program, the Maritime Prepositioning Force (Future), to study developing joint command and control modules or variants.

In 2001, the Navy established the Amphibious Assault Ship, General Purpose (Replacement), or LHA(R), program to replace its five aging LHA 1 Tarawa-class amphibious assault ships. These ships are primarily designed to move large quantities of Marines, their equipment, and supplies onto any shore during hostilities. The first LHA ship will be replaced by a Wasp-class amphibious assault ship, the LHD 8, in approximately fiscal year 2007, and the remaining ships will be replaced by a modified version of the LHD 8 no later than fiscal year 2024. The modified variant will be made longer and wider to accommodate the larger and heavier aircraft the Marines are developing, the MV-22 Osprey and the Joint Strike Fighter.
The Navy estimates the cost for the first ship to be about $3 billion, with the three successor ships costing about $2.1 billion each. The ship's annual operating and support cost is estimated to be about $111 million. The LHA(R) program is currently in the first acquisition phase, called concept and technology development.

Although its regulatory structure is undergoing change, the Department of Defense's (DOD) complex process to deliver a new ship class to the fleet occurs in three steps. First, the Navy's requirements community establishes requirements for a new system. Second, the Navy's acquisition organizations and contractors design and produce the ship. Finally, after building the ship, the warfighter assumes responsibility for operating and maintaining it. DOD's policy is to acquire weapons systems using a disciplined systems engineering process designed to optimize total system performance and minimize total ownership costs. The regulation, requirements, and design aspects of the acquisition process are discussed below.

Weapons systems acquisition is governed by a complex regulatory structure ranging from public laws to nonmandatory policies, practices, and guidance. Until recently, three major DOD regulatory documents guided the management of defense acquisition: DOD Directive 5000.1, "The Defense Acquisition System;" DOD Instruction 5000.2, "The Operation of the Defense Acquisition System;" and DOD Regulation 5000.2-R, "Mandatory Procedures for Major Defense Acquisition Programs (MDAPs) and Major Automated Information Systems (MAIS) Acquisition Programs." On October 30, 2002, the Deputy Secretary of Defense canceled all three documents and issued interim guidance by memorandum. On an interim basis, DOD 5000.2-R was reissued as the Interim Defense Acquisition Guidebook, to be used for best practices, lessons learned, and expectations, but its guidance is not mandatory. Additional supporting, discretionary best practices, lessons learned, and expectations are posted on DOD's internet Web site, the DOD 5000 Series Resource Center. The interim DOD guidance retains the basic acquisition system structure (i.e., no new phases), emphasizes evolutionary acquisition, modifies the requirements generation documents, and makes several other changes. Policies and procedures for developing and approving requirements for new systems are also under revision.

DOD's acquisition process, as outlined in its interim guidance issued October 30, 2002, provides an ordered structure of tasks and activities to bring a program to the next major checkpoint. These checkpoints, called milestones, are the points at which a recommendation is made and approval sought regarding starting or continuing an acquisition program into one of three phases: concept and technology development, system development and demonstration, and production and deployment (see fig. 2). The phases are intended to provide a logical means of progressively translating broadly stated mission needs into well-defined system-specific requirements and ultimately into effective systems. A fourth phase, operations and support, follows the system acquisition. This phase represents the ownership period of the system, when a unit, in this case a ship, is fielded and operated by sailors for a period of 30 to 50 years. A program's progress toward established program goals, or key performance parameters, is assessed at milestones.
The concept and technology development phase has two major efforts: concept exploration and technology development. This phase begins with a milestone A decision to enter concept and technology development. Entrance into this phase depends upon a validated and approved initial capability document. Concept exploration typically consists of competitive, parallel, short-term concept studies guided by the initial capability document (mission need statement). The focus of these studies is to refine and evaluate the feasibility of alternative solutions to the initial concept and to provide a basis for assessing the relative merits of these solutions. Analyses of alternatives are used to facilitate comparisons. A project may enter technology development when a solution for the needed capability has been identified. This effort is intended to reduce technology risk and to determine the appropriate set of technologies. A project exits technology development when an affordable increment of militarily useful capability has been identified, the technology for that increment has been demonstrated in a relevant environment, and a system can be developed for production within a short time frame (normally less than 5 years). During technology development, the user is required to prepare the capability development document to support subsequent program initiation. An affordability determination is made in the process of addressing cost as a military requirement and is included in the capability development document, using life-cycle cost or, if available, total ownership cost.

The purpose of the system development and demonstration phase is to develop a system. This phase has two major efforts: system integration and system demonstration. The entrance point is milestone B, which is also the initiation of an acquisition program. The system integration effort is intended to integrate subsystems and reduce system-level risk. The system can enter system integration when the program manager has a technical solution for the system but has not yet integrated the subsystems into a complete system. The critical design review during system development and demonstration provides an opportunity for mid-phase assessment of design maturity. The system demonstration effort is intended to demonstrate the ability of the system to operate in a useful way consistent with the validated key performance parameters. The program can enter system demonstration when the program manager has demonstrated the system with prototypes. This work effort ends when a system demonstrates its capabilities in its intended environment using engineering development models or integrated commercial items (in addition to several other criteria).

The purpose of the production and deployment phase is to achieve an operational capability that satisfies mission needs. The decision to commit DOD to low-rate initial production takes place at milestone C. Continuation into full-rate production results from a successful full-rate production decision review. During this effort, units attain initial operational capability.

Operations and support has two major efforts: sustainment and disposal. The objectives of this phase are the execution of a support program that meets operational support performance requirements and the sustainment of systems in the most cost-effective manner over the life cycle of the system. When the system has reached the end of its useful life, it must be disposed of in an appropriate manner.
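For readers tracking the phase structure, the milestones and entry criteria described above can be summarized schematically. The sketch below is a simplified representation of the interim guidance as described in this report, not official DOD data.

```python
# Schematic summary of the acquisition phases and milestones described above.
# A simplified representation for orientation only; not official DOD data.

PHASES = [
    {"milestone": "A", "phase": "Concept and technology development",
     "entry": "validated and approved initial capability document"},
    {"milestone": "B", "phase": "System development and demonstration",
     "entry": "program initiation; capability development document prepared"},
    {"milestone": "C", "phase": "Production and deployment",
     "entry": "commitment to low-rate initial production"},
    {"milestone": None, "phase": "Operations and support",
     "entry": "system fielded; sustainment and eventual disposal"},
]

for p in PHASES:
    gate = f"Milestone {p['milestone']}" if p["milestone"] else "Post-acquisition"
    print(f"{gate}: {p['phase']} -- entry: {p['entry']}")
```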
Trade studies are required to support decisions throughout the systems engineering process. During a requirements analysis, requirements are balanced against other requirements or constraints, including cost. Requirements analysis trade studies examine and analyze alternative performance and functional requirements to resolve conflicts and satisfy customer needs. As part of the design competition for the DD(X) destroyer, the competing contractors conducted trade studies and analyses on their system concept designs and the related systems requirements. Table 1 highlights some of the 23 trade studies conducted by the winning design agent, Northrop Grumman Ingalls Shipyard and Raytheon.

Plans for the DD(X) destroyer envision significant reductions, compared to previous destroyer ships, in the number of crewmembers required to man watches, provide support functions, and perform special evolutions. For example, DD(X) plans call for 20 watchstations requiring 60 billets, a significant reduction from the DDG 51 destroyer, which has 61 watchstations requiring 163 billets. Similarly, DD(X) ship crew sizing studies project that 833 hours will be required per week for own unit support functions such as administration, messing, and supply, while the DDG 51 requires 5,500 hours for the same functions. To achieve these proposed reductions, the DD(X) plans to employ a new operational crewing concept, human-centered design and reasoning systems, advances in ship cleaning and preservation, a new maintenance strategy, an automated damage control system, and "reach back" technologies and distance support. Officials emphasized that the DD(X) plans will continue to evolve as the program matures. In addition, changes to the DD(X) destroyer's operational requirements, which are currently being reevaluated, will likely further affect these estimates.

The approach to operational crewing on the DD(X) destroyer will differ markedly from that employed on legacy ships. The older ship classes tend to have legacy systems and watchstations that are "stovepiped," meaning that they maintain separate stations and databases for such things as sensors, weapon systems, and logistics, which are not linked together and which require people to be specially trained on these systems. This results in an inflexible work environment in which commanders are unable to level workload across watchstanders because the watchstanders are trained in separate disciplines. It requires extra people, with little increase in capability. The DD(X) concept is to have watchstanders trained functionally across warfare areas who can be flexibly employed as the situation demands. This approach results in a more compact, flexible watch team, which requires fewer augmentations and is designed to respond flexibly to a variety of tactical situations. Underpinning this concept is a strategy in which crewmembers will be highly trained across multiple warfare areas or maintenance tasks, advanced skills will apply across multiple disciplines, and specialized skills will be used only periodically.
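The scale of the proposed reductions is easier to see as percentages. The short sketch below computes them from the figures cited above and adds nothing beyond that arithmetic.

```python
# Percentage reductions implied by the DD(X) crewing figures cited above
# (DDG 51 baseline vs. DD(X) projection).

def pct_reduction(baseline: float, projected: float) -> float:
    """Percentage decrease from a baseline value to a projected value."""
    return 100.0 * (baseline - projected) / baseline

print(f"Watchstations: {pct_reduction(61, 20):.0f}% fewer")     # ~67%
print(f"Watch billets: {pct_reduction(163, 60):.0f}% fewer")    # ~63%
print(f"Support hours: {pct_reduction(5500, 833):.0f}% fewer")  # ~85%
```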
The DD(X) destroyer envisions reducing underway watchstanding through greater use of human-centered design and reasoning systems, such as integrated bridge system technologies—demonstrated on the CG 47 Ticonderoga-class "smart ship" and many commercial ships—that provide computer-based navigation, planning and monitoring, automated radar plotting, and automated ship control; the integrated command environment, which provides reduced combat information center crewing by using "multi-modal watchstation" type displays, the ability to monitor more than one watchstation at each console, and decision support systems that facilitate instantaneous situational awareness; computerized engineering control systems, which are extensively used in the commercial shipping industry, and machinery space designs that permit zero underway crewing by using remote monitors and sensors; and a flexible watch team-type organization.

The DD(X) destroyer plans to use advances in ship cleaning and preservation to free sailors from traditional maintenance and preservation duties and to privatize the preservation work that cannot be engineered away. Reliability-centered maintenance and condition-based maintenance concepts will be employed on the DD(X) instead of the traditional planned maintenance system currently used on DDG 51 destroyers. This change is expected to reduce noncorrective maintenance and significantly reduce the corrective maintenance induced by the planned maintenance system. In addition, routine maintenance on the DD(X) is projected to be reduced by increased equipment reliability and a strategy of replacing failed components on board instead of repairing them at sea. Lastly, cleaning is expected to be reduced by better ship design that capitalizes on commercial shipping industry best practices, such as cornerless spaces and maintenance-free deck coverings.

The DD(X) destroyer maintenance strategy focuses on allowing sailors to concentrate on war-fighting tasks and skills rather than on ship maintenance and preservation (i.e., "rust busting" skills). The DD(X) maintenance strategy envisions no organizational-level repair conducted on the ship; as such, many repair watches have been eliminated. Three key elements of the DD(X) maintenance strategy are reducing maintenance requirements through improved system reliability and redundancy and through labor-saving advances in corrosion control materials and technology; improving maintenance work efficiency by conducting condition-based maintenance instead of scheduled maintenance; and using reach back and remote monitoring support while deployed.

The DD(X) destroyer will employ extensive automated damage control systems, integrated with an optimally manned damage control organization, to quickly suppress and extinguish fires and control their spread. The DD(X) destroyer also plans to use "reach back" technologies and distance support to reduce crew workload. "Tele-systems" initiatives are being studied for ship crew reduction in the areas of medicine, personnel, pay, training, and maintenance. DD(X) also envisions real-time collaboration between the ship and shore and between ships. Ships would access expertise from the systems commands, industry, and other deployed ships on a year-round, around-the-clock basis. Table 2 compares the workload and crew composition for the DDG 51 Flight IIA and those proposed for the DD(X).
In addition to the daily shipboard routine of standing watches in the ship's various departments, designated crewmembers also have collateral duties to support special events, referred to as special evolutions. These evolutions involve activities such as underway replenishment of fuel, food, and ammunition transferred from either helicopters or other ships; flight operations; small boat operations; and anchoring. The number of people required and the estimated labor hours per week for these special evolutions are other indicators of ship workload. Table 3 compares the billets and labor hours required per week for selected special evolutions on the Arleigh Burke-class (DDG 51 Flight IIA) destroyer with those proposed for the DD(X).

The following is GAO's comment on the Department of Defense's letter dated May 12, 2003.

1. We disagree that the tone of our report implies a lack of interest or desire on the part of program managers to pursue manpower reductions. Rather, our report notes that a number of factors, including funding issues, create barriers that make it more difficult for program managers to pursue manpower reductions and develop robust human systems integration programs. Moreover, we agree that resourcing human systems integration and supporting analyses at the earliest stages of a program is a responsibility that does not wholly reside with the program manager but is shared by the Navy staff. As our report clearly points out, given the existing barriers and an absence of specific requirements to implement a comprehensive human systems integration approach, the JCC(X) and LHA(R) programs did not identify or request resources for performing human systems integration and related analyses to support the research and development required to pursue advanced technology that could have enabled workload and manpower reductions.
The cost of a ship's crew is the single largest cost incurred over the ship's life cycle. One way to lower personnel costs, and thus the cost of ownership, is to use people only when it is cost-effective—a determination made with a systems engineering approach called human systems integration. GAO was asked to evaluate the Navy's progress in optimizing the crew size of four ships being developed and acquired: the DD(X) destroyer, T-AKE cargo ship, JCC(X) command ship, and LHA(R) amphibious assault ship. GAO assessed (1) the Navy's use of human systems integration principles and goals for reducing crew size and (2) the factors that may impede the Navy's use of those principles.

The Navy's use of human systems integration principles and crew size reduction goals varied significantly for the four ships GAO reviewed. Only the DD(X) destroyer program emphasized human systems integration early in the acquisition process and established an aggressive goal to reduce crew size. The Navy's goal is to cut personnel on the DD(X) by about 70 percent from that of the previous destroyer class—a reduction GAO estimated could eventually save about $18 billion over the life of a 32-ship class. The goal was included in key program documents to which program managers are held accountable. Although the Navy did not set specific crew reduction goals for the T-AKE cargo ship, it made some use of human systems integration principles and expects to require a somewhat smaller crew than similar legacy ships. The two other ships—the recently cancelled JCC(X) command ship and the LHA(R) amphibious assault ship—did not establish human systems integration plans early in their acquisition programs and did not establish ambitious crew size reduction goals. Unless the Navy more consistently applies human systems integration early in the acquisition process and establishes meaningful goals for crew size reduction, it may miss opportunities to lower total ownership costs for new ships, which are determined by decisions made early in the acquisition process. For example, the Navy has not clearly defined the human systems integration certification standards for new ships.

Several factors may impede the Navy's consistent application of human systems integration principles and its use of innovations to optimize crew size: (1) DOD acquisition policies and discretionary Navy guidance that allow program managers latitude in optimizing crew size and using human systems integration, (2) funding challenges that encourage the use of legacy systems to save near-term costs and discourage research and investment in labor-saving technology that could reduce long-term costs, (3) unclear Navy organizational authority to require human systems integration's use in acquisition programs, and (4) the Navy's lack of cultural acceptance of new concepts to optimize crew size and its layers of personnel policies that require consensus from numerous stakeholders to revise.
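The roughly $18 billion life-cycle savings estimate is consistent with a simple back-of-envelope calculation. The sketch below is illustrative only: the per-sailor annual cost and service life shown are assumed values, not the inputs GAO derived from the Navy's Cost of a Sailor study.

```python
# Back-of-envelope check of the ~$18 billion life-cycle savings estimate.
# All parameter values are illustrative assumptions, not the actual
# Cost of a Sailor study inputs.

baseline_crew = 350       # approximate crew of the previous destroyer class
reduction = 0.70          # DD(X) goal: ~70 percent crew reduction
ships = 32                # originally planned class size
service_life_years = 35   # assumed ship service life
cost_per_sailor = 65_000  # assumed annual cost per sailor, FY 2002 dollars

sailors_saved = baseline_crew * reduction            # ~245 per ship
savings = sailors_saved * ships * service_life_years * cost_per_sailor
print(f"~${savings / 1e9:.1f} billion")              # ~$17.8 billion
```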
The military services annually determine their current and future munition procurement requirements in accordance with the Defense Planning Guidance. Historically, the Defense Planning Guidance has directed the military services to arm a given force structure to win two nearly simultaneous major theaters of war. In recent years, the Department of Defense has engaged in a number of military operations that vary in size and circumstance from a major theater of war; consequently, the current National Military Strategy and the Defense Planning Guidance call for the services to prepare for a number of small-scale contingency operations in addition to the two major theaters of war. The conditions under which small-scale operations are fought may differ from conditions in a major theater war, which may increase the services' requirements for highly technical precision munitions designed to limit loss of life and expensive military assets. The increased use of precision munitions in recent conflicts reduced inventories and raised questions about whether adequate attention had been paid to the impact of small-scale contingencies on the ability of U.S. forces to respond and sustain operations for the two major theaters of war. Of the approximately $4.2 billion of munitions the services are planning to procure in fiscal year 2001, 46 percent (or $1.9 billion) will be used to procure precision munitions designed to reduce the number of conventional munitions needed to defeat enemy targets while at the same time limiting loss of expensive weapons systems and life. By fiscal year 2005, the services are planning to increase their procurement of precision guided munitions by about 5 percent.

In 1994, to generate consistent munition requirements Department-wide and to ensure that the military services have both an adequate supply and the appropriate types of munitions to address changing mission needs, the Department of Defense standardized the process by which the services determine their munition requirements. In 1997, the Department of Defense issued Instruction 3000.4, which sets forth policies, roles and responsibilities, time frames, and procedures to guide the services as they develop their munition requirements. This instruction is referred to as the Capabilities-Based Munitions Requirements process and is the responsibility of the Under Secretary of Defense for Acquisition, Technology and Logistics. The instruction describes a multiphased analytical process that begins when the Under Secretary of Defense for Policy develops, in consultation with the Chairman of the Joint Chiefs of Staff, the military services, and the warfighting Commanders in Chief, policy on munition requirements for the Defense Planning Guidance. The Defense Intelligence Agency uses the Defense Planning Guidance and its accompanying warfighting scenarios, as well as other intelligence information, to develop a threat assessment. This assessment contains estimates and facts about the potential threats that the United States and allied forces could expect to meet in each of the two major theaters of war scenarios. The warfighting Commanders in Chief responsible for the major theaters of war scenarios, in coordination with the Joint Chiefs of Staff, use the threat assessment to allocate each service a share of the identified targets by phases of the war.
Next, the services develop their combat requirements using battle simulation models and scenarios to determine the number and mix of munitions needed to meet the Commanders in Chief's objectives separately for each major theater of war scenario. To develop these requirements, the services draw upon and integrate data and assumptions from the Defense Planning Guidance requirements, warfighting scenarios, and target allocations, as well as estimates of repair and return rates for enemy targets and projected assessments of damage to enemy targets and installations. Other munition requirements include munitions (1) needed for forces not committed to support combat operations, (2) to provide a post-major theater of war combat capability, and (3) to train the force, support service programs, and meet peacetime requirements. These requirements, in addition to the combat requirement, comprise the services' total munitions requirement. The total munitions requirement is then balanced against projected inventory and affordability to determine how many of each munition the services will procure within their specified funding limits, and it is used to develop the services' Program Objectives Memorandum and Presidential budget submission.

Despite Department efforts to standardize the process and generate consistent requirements, many questions have been raised about the accuracy and reliability of the requirements determination process. Between the Department of Defense Inspector General and our agency, 20 reports have been issued stating that systemic problems—such as questionable and inconsistently applied data, inconsistency of processes among and between services, and unclear guidance—have inflated the services' requirements for certain categories of munitions. A list of these reports is included in appendix II. The Department acknowledged these weaknesses and recognized that inflated requirements can negatively affect munitions planning, programming, and budget decisions, as well as assessments of the size and composition of the industrial production base. As a result, the Defense Planning Guidance for fiscal years 2000-2005, dated April 1998, directed that a Capabilities-Based Munitions Requirements working group develop recommendations to improve the accuracy of the process. In October 1998, the group recommended several corrective actions to address weaknesses identified by both the Inspector General and our agency: a coordinated threat assessment, revised repair rates for damaged targets and revised target damage assessments, a modified target allocation process, and revised risk assessments.

Based on the recommendations of the Capabilities-Based Munitions Requirements working group, the Department has improved several key components of the requirements determination process. Process improvements include Department-wide coordination of the threat assessment; updated projections of the amount of time it takes a potential enemy to repair damaged targets and return them to the battlefield, along with updated target damage assessments; modifications to the target allocation process; and a risk assessment that includes the impact of small-scale contingency operations. The Department expects these improvements to correct weaknesses in the process that can result in over- or understated munition requirements. The Defense Intelligence Agency develops an annual threat assessment that identifies potential threats that the United States and allied forces could expect to meet in each of the two major theaters of war scenarios.
The Capabilities-Based Munitions Requirements instruction directs that the Commanders in Chief and the Joint Chiefs of Staff use the threat assessment to allocate targets to each of the services. The Department identified weaknesses in this area and has taken steps to strengthen the assessment. Defense Intelligence Agency officials stated that in the past, the services could, based on input from their own intelligence sources or direction from the warfighting Commanders in Chief, develop an independent threat analysis that could result in the services planning to destroy the same targets and, consequently, overstating munitions requirements. To resolve this issue, the working group directed that the Defense Intelligence Agency fully coordinate the threat assessment with the services and throughout the Defense intelligence community. In accordance with this directive, the Defense Intelligence Agency coordinated the most recent threat assessment, which describes the threat for the fiscal year 2002-2007 planning cycle. By adopting a coordinated threat assessment, the Department expects to be better able to ensure that the services' munition requirements will be more accurate.

Repair rates project the amount of time it takes a potential enemy to repair a damaged target and return it to the battlefield; they help determine the number of attacks needed to destroy a target, which directly influences munition quantities. Because the services use these rates as input to their warfighting simulation models to determine their munition requirements, the rates should be current and reflect a country's existing repair capability. In response to a Department of Defense Inspector General review of this process, the Department has taken steps to improve the quality of its data on projected repair rates. A Department of Defense Inspector General audit of service requirements for specific categories of munitions reported that the services used repair rates that overstated the requirement for these munitions. According to an official from the Joint Staff, the services were using Cold War-era repair rates for countries that could repair damaged property and return it to the battle more quickly than the countries used in today's war planning scenarios. To address this issue, in December 1999, the Defense Intelligence Agency updated and standardized the repair rates the services use in their battle simulation models, and the Department expects these actions to address the issue of overstated requirements.

Battle damage assessments have become more critical to munitions requirement planning with the increased use of precision guided munitions and changes in warfighting. Previously, munitions were fired from a range that allowed a visual damage assessment, but precision guided munitions are often fired miles from the target, which eliminates the ability to visually assess whether the target has been damaged or destroyed. Knowing in advance the probability that a specific munition will destroy a target is necessary to accurately determine the number and mix of munitions that will be required. To improve battle damage assessments, the Defense Intelligence Agency developed battle damage assessment factors that measure (1) whether a target was hit, (2) the extent of the damage, and (3) whether the objective was met. These factors are more predictive if the munition has a guidance system that provides damage information to the launch site.
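The arithmetic linking kill probability, battle damage assessment, and repair rates to munition quantities can be illustrated with a deliberately simplified calculation. All values below are assumptions chosen for illustration; the services' battle simulation models are far more elaborate than this sketch.

```python
# Simplified illustration of how single-shot kill probability and enemy
# repair rates drive munition quantities. Assumed values throughout; not
# the services' actual battle simulation models.

p_kill = 0.6    # assumed probability that one munition destroys its target
targets = 400   # assumed targets allocated to a service
repaired = 80   # assumed targets repaired and returned to the battlefield

total_engagements = targets + repaired       # 480 targets must be struck
rounds_per_kill = 1.0 / p_kill               # expected munitions per target
required = round(total_engagements * rounds_per_kill)
print(f"{required} munitions required under these assumptions")  # 800
```

Under these assumed values, a higher kill probability (better effectiveness data) or a lower repair rate (fewer targets returning to battle) directly reduces the computed requirement, which is why updating both inputs reduced the potential for overstatement.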
According to a Navy official, using the newly developed battle damage assessment factors for the fiscal year 2002-2007 requirements planning cycle significantly reduced the requirement for certain categories of naval munitions. According to an official from the Joint Staff, these assessments have also reduced the potential for overstated munition requirements for the services' air components.

Allocating targets to the services is one of the most critical steps in the requirements determination process, as it defines the services' role in the war fight and determines the number and type of munitions for which the services need to plan. In accordance with the Capabilities-Based Munitions Requirements instruction, the warfighting Commanders in Chief are required to allocate targets to the services for their areas of responsibility. This is an area in which the services have had difficulty reaching agreement, but the Joint Chiefs of Staff have provided direction to strengthen the process. In response to a Department of Defense Inspector General audit critical of the Central Command's allocation process, a 1999 pilot project was initiated that transferred the U.S. Central Command's target allocation role to the Joint Chiefs of Staff, who, in coordination with the services, developed a methodology to allocate targets. According to officials at the Joint Staff and the Central Command, the methodology was intended to better align the Commanders in Chief's near-term objectives (which generally cover a 2-year period) and the services' long-term planning horizon (which is generally 6 years). Another benefit of the pilot was that the Joint Staff could validate the services' munition requirements by matching requirements to target allocations.

The Army, the Navy, and a warfighting Commander in Chief objected to the pilot's results and criticized the methodology used to allocate the targets because it allocated significantly more targets to the Air Force and fewer targets to the Army. Army officials objected that the methodology did not adequately address land warfare, which is significantly different from air warfare. The Navy did not concur with the results, citing the lack of recognition for the advanced capabilities of future munitions. U.S. Central Command officials disagreed with the results, stating that a change in methodology should not in and of itself cause the allocation to shift. In July 2000, citing substantial concerns about the pilot, the Under Secretary of Defense for Acquisition and Technology suspended the target allocation for fiscal year 2000 and directed that the services use the same allocations applied to the fiscal year 2002-2007 Program Objectives Memorandum. In August 2000, the Joint Chiefs of Staff structurally changed the threat allocation process to address the services' and the warfighting Commander in Chief's objections. The warfighting Commanders in Chief will now prepare a near-term target allocation using a methodology developed by the Joint Chiefs. Each warfighting Commander in Chief will develop two allocations—one for strike (air services) forces and one for engagement (land troops) forces—for his area of responsibility. The first will allocate specific targets to strike forces under the assumption that the air services can eliminate the majority of enemy targets.
The second allocation will assume that less than perfect conditions exist (such as bad weather), which will limit the air services' ability to destroy their assigned targets and require that the engagement force complete the mission. The Commanders in Chief will not assign specific targets to the engagement forces, but they will estimate the size of the expected remaining enemy land force. The Army and the Marines will then be expected to arm themselves to defeat those enemy forces. The Joint Staff will use the Commanders in Chief's near-term threat distribution and extrapolate that information to the last year of the Program Objectives Memorandum for the purpose of the services' munitions requirement planning. The Department expects that these modifications will correct over- or understated requirements and bridge the gap between the warfighting Commanders in Chief's near-term interests and objectives and the services' longer planning horizon.

Until recently, the Department lacked an assessment of the impact of small-scale contingencies on munition requirements, and uncertainties existed regarding the impact on the services' abilities to meet the requirements of the two major theaters of war. However, the Department has taken action to better address this issue. In October 1999, the Joint Requirements Oversight Council directed that the Joint Staff coordinate an assessment of the risk associated with current and projected munition inventories available for two major theaters of war and inventories depleted by a challenging sequence of small-scale contingency operations. According to an official from the Joint Chiefs of Staff, the increased use of precision guided munitions during the contingency operation in Kosovo prompted several Department studies that addressed whether the military services have sufficient munitions to fulfill the two major theaters of war requirement. However, the initial studies focused on the difference between the services' two theaters of war requirement and the actual number of munitions procured; they did not demonstrate the impact of shortfalls of specific munitions on the services' ability to respond to two major theaters of war. The assessment, completed in April 2000, which focused on inventories of precision guided munitions, concluded that small-scale contingencies would have a negligible impact on the Commanders in Chief's ability to meet the two major theaters of war requirement. An official from the Joint Staff stated that the study's conclusion was based on the assumption that in a major theater war, precision guided munitions might be used during the early phases of the war for critical targets and then other, less accurate munitions could be substituted. However, according to an Air Force official, the assessment did show that small-scale contingency operations negatively affect inventories of some precision munitions, which may limit the Commanders in Chief's flexibility in conducting two major theater wars. Department officials added that the assessment should give the services the information they need to plan for inventories of specific munitions that would be affected more than others during contingency operations. The Department is incorporating the actions taken to improve the process into a revised Capabilities-Based Munitions Requirements instruction that it expects to issue in spring 2001 and to use in determining the services' fiscal year 2004-2009 requirements.
Notwithstanding the corrective actions the Department has taken or has underway to improve the process, other key components have either not been completed or not been decided upon. The Department has not completed a database listing detailed target characteristics for large enemy installations based on warfighting scenarios and has not developed new munitions effectiveness data to address deficiencies the services and the Commanders in Chief have identified. Completion dates for these tasks have been exceeded or not established. Additionally, the Department has not determined whether to create more detailed warfighting scenarios in the Defense Planning Guidance or to rate scenarios in terms of their probability. Such action could increase the reliability of the requirements determination process and ensure consistency in the services' analyses in support of their requirements. The Department is in the process of incorporating the completed actions into a revised Capabilities-Based Munitions Requirements instruction to be issued in spring 2001 and used by the services to determine their fiscal year 2004-2009 munitions procurement requirements. However, the Department has no clear plan of action for resolving these issues or a time frame for their completion. Until the remaining tasks are completed and incorporated into the process, questions are likely to remain regarding the accuracy of the munition requirements process as well as the Department's ability to identify the munitions most appropriate to defeat potential threats.

According to Department officials, the Department lacks a common picture of the number and types of targets on large enemy installations as identified in the warfighting scenarios, and as a result, the services have been identifying targets on enemy installations differently. According to an official from the Joint Staff, the Department has been concerned that this lack of common target characteristics could over- or understate requirements for certain munition categories. To resolve this issue, the Joint Chiefs instructed the Defense Intelligence Agency, in coordination with the warfighting Commanders in Chief, to develop target templates that would provide a common picture of the types of potential targets on enemy installations. According to Defense Intelligence Agency officials, the services and the Commanders in Chief could also use this information to attack these targets with munitions that would minimize damage to an installation, reduce reconstruction costs after a conflict, and allow U.S. forces to use the installation if needed. An official from the Joint Staff stated that while the Defense Intelligence Agency was to complete the target templates by August 31, 2000, it has yet to do so, and a specific completion date has not been established.

How effective a munition is against a target helps predict the number of munitions necessary to defeat it. According to an official at the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, funding to maintain the manual containing this information has historically been limited. The Department recognizes that munitions effectiveness data is a critical component of requirements planning and that outdated information could over- or understate munition requirements. To address this shortfall, the Department provided $34 million in fiscal year 2001 to update and publish munitions effectiveness data for use by the services in their battle simulation models.
At the time of our review, the Department did not know when this project would be completed.

The Defense Planning Guidance contains an appendix of warfighting scenarios that detail conditions that may exist during the conduct of the two major theaters of war; these scenarios are developed with input from several sources, including the Defense Intelligence Agency, the Joint Staff, and the services. This appendix provides a common baseline from which the services determine their munition requirements. However, according to several Department officials, the warfighting scenarios in the Defense Planning Guidance need to include more detail. Specifically, these officials stated that information about the potential constraints under which the war will be fought and casualty and asset loss guidance can affect the types and numbers of munitions the services plan to procure. Some Department officials stated that the Defense Planning Guidance used to contain specifics on the conduct of the war fight; however, when the Department adopted the Capabilities-Based Munitions Requirements instruction, the detail was eliminated in favor of broader guidance. Conversely, other Department officials disagree with the need for increased guidance. According to an official from the Office of the Secretary of Defense, Requirements and Plans, additional guidance and specificity are not necessary because the services should use the scenarios in the Defense Planning Guidance to plan their force structure rather than their munition requirements. Some Air Force and Army officials agree, stating that the Defense Planning Guidance provides sufficient guidance for munition planning for the mandatory two major theaters of war scenarios. The chief of the Army Combat Support War Reserve Branch suggested that specific guidance would be necessary only if the Army were required to plan for small-scale contingencies with restrictions on the conduct of the war fight. However, according to some Department officials, while the Defense Planning Guidance provides the services a basis for their force structure, it is also an integral part of the requirements determination process. From this vantage point, these officials suggest that if small-scale contingency operations are becoming a part of the overall military strategy, then the Defense Planning Guidance should reflect this by incorporating more detailed guidance on the conduct of such operations. With additional guidance on the conduct of the war fight, such as limits on the loss of weapon systems and lives, the services would be better able to plan their munition requirements to ensure that the stated conditions are met.

In addition to lacking sufficient specificity on warfighting scenarios, the Defense Planning Guidance does not rank the scenarios by the probability of their occurrence. In 1998, we reported that the services were using the warfighting scenario that supported additional requirements for specific munitions. In addition, the requirement for a specific Army munition was inflated partly because the Army disregarded the Defense Planning Guidance scenarios and instead used two scenarios it had developed independently. Consequently, the requirement for the munition was tripled, and the Army's justification for the requirement was inconsistent with the Commanders in Chief's objectives and the Army's doctrine.
To ensure that the services plan for the most likely scenario in the Defense Planning Guidance and do not use unlikely events to support certain munitions, the Capabilities-Based Munitions Requirements working group requested that the Defense Intelligence Agency develop probability factors for the various warfighting scenarios. While the Defense Intelligence Agency has developed these factors, at the time of our review the Department was still debating whether to prioritize the scenarios.

The Department is working to ensure that the requirements determination process results in accurate numbers and types of munitions necessary to defeat threats as specified in the Defense Planning Guidance. While the Department has made progress and has identified specific areas still requiring attention, there is no clear plan with time frames for resolving key issues. Some of these issues have been only partially addressed and others are in the early stages of evolution. Specifically, target templates have not been completed and munitions effectiveness data has not been updated, nor have decisions been made regarding more detailed warfighting scenarios and the ranking of scenarios. Consequently, the reliability of the services' munitions requirements remains uncertain and could adversely affect munitions planning, programming, budgeting, and industrial production base decisions. Until these issues are resolved and a revised Capabilities-Based Munitions Requirements instruction is issued, the accuracy of the munitions requirements will remain uncertain.

To ensure that additional actions are taken to improve the munitions requirements determination process, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to take the lead in establishing a plan for resolving the outstanding issues. Such a plan should include time frames for resolving the outstanding issues, metrics for measuring progress, and milestones for implementing the proposed changes. Specific areas needing attention include completing the target templates; publishing the updated munitions effectiveness data; resolving the issues involving the level of detail to include in the Defense Planning Guidance and whether to attach probability data to the warfighting scenarios; incorporating all improvements to the munitions requirements process in a revised Capabilities-Based Munitions Requirements instruction; and establishing a time frame for reassessing munitions requirements once all improvements have been implemented.

The Director of Strategic and Tactical Systems in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics provided written comments on our report, which are included in appendix III. The Department concurred with the report and outlined actions underway addressing all aspects of the report's recommendations, such as resolving the issues involving the level of detail to include in the Defense Planning Guidance and whether to attach probability data to the warfighting scenarios, incorporating all improvements to the munitions requirements process in a revised Capabilities-Based Munitions Requirements instruction, and establishing a time frame for reassessing munitions requirements once all improvements have been made. The Department also provided technical comments, which we incorporated in the report as appropriate.

We are sending copies of this report to the appropriate congressional committees; the Honorable Donald H. Rumsfeld, Secretary of Defense; the Acting Secretary of the Army, Joseph W. Westphal; the Acting Secretary of the Air Force, Lawrence J. Delaney; the Acting Secretary of the Navy, Robert B. Pirie, Jr.; the Director, Office of Management and Budget, Mitchell E. Daniels, Jr.; and the Director, Defense Intelligence Agency, Vice Admiral Thomas R. Wilson. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV.

To assess the extent to which actions have been taken to improve the munition requirements determination process, we reviewed the Department's Instruction 3000.4, Capabilities-Based Munitions Requirements, to ascertain roles and oversight responsibilities and to identify required inputs into the process. We reviewed the Defense Planning Guidance for fiscal years 2000-2005 and the update for fiscal years 2001-2005 to determine what instruction the Department provided to guide the services as they determine their munition requirements. To identify factors that affect the accuracy of the requirements determination process, we reviewed 20 Department of Defense Inspector General and GAO reports relating to the Department's munitions requirements determination process. We also reviewed Joint Requirements Oversight Council memorandums to determine the focus of the Joint Staff's study on the impact of small-scale contingency operations on inventories of specific munitions. We met with service officials to determine how each service develops its munition requirements and obtained data on the assumptions and inputs that go into its simulation models. We also obtained information on how each service reviews the outcome of its munitions requirement process. In addition, we obtained information on the Commanders in Chief's Operating Plan, Integrated Priority List, and other planning data necessary to assist the services with their requirements planning. To address those areas needing additional action, we met with Department and service officials to obtain their views on how the unresolved issues could affect the accuracy of the requirements determination process. In addition, we obtained documentation pertaining to the areas still needing action. We met with senior officials and performed work at the Office of the Secretary of Defense, Washington, D.C.; the Joint Chiefs of Staff, Washington, D.C.; and the Defense Intelligence Agency, Bolling Air Force Base, Washington, D.C. We also interviewed senior officials from the Army Combat Support War Reserve Branch, Washington, D.C.; Navy Requirements Planning, Naval Air Acquisition Program, and Naval Surface Fire Support, Washington, D.C.; the Air Force Munitions Requirements Weapons Division, Crystal City, Virginia; U.S. Pacific Command, Honolulu, Hawaii; U.S. Central Command, MacDill Air Force Base, Tampa, Florida; and U.S. Forces Korea, Seoul, Korea. We performed our review from December 1999 through November 2000 in accordance with generally accepted government auditing standards.

Summary of the DOD Process for Developing Quantitative Munitions Requirements, Department of Defense Inspector General, Feb. 24, 2000.

Air Force Munitions Requirements, Department of Defense Inspector General, Sept. 3, 1999.

Defense Acquisitions: Reduced Threat Not Reflected in Antiarmor Weapon Acquisitions (GAO/NSIAD-99-105, July 22, 1999).

U.S. Special Operations Command Munitions Requirements, Department of Defense Inspector General, May 10, 1999.
Marine Corps Quantitative Munitions Requirements Process, Department of Defense Inspector General, Dec. 10, 1998.

Weapons Acquisitions: Guided Weapon Plans Need to be Reassessed (GAO/NSIAD-99-32, Dec. 9, 1998).

Navy Quantitative Requirements for Munitions, Department of Defense Inspector General, Dec. 3, 1998.

Army Quantitative Requirements for Munitions, Department of Defense Inspector General, June 26, 1998.

Management Oversight of the Capabilities-Based Munitions Requirements Process, Department of Defense Inspector General, June 22, 1998.

Threat Distributions for Requirements Planning at U.S. Central Command and U.S. Forces Korea, Department of Defense Inspector General, May 20, 1998.

Army's and Marine Corps' Quantitative Requirements for Blocks I and II Stinger Missiles, Department of Defense Inspector General, June 25, 1996.

U.S. Combat Air Power: Reassessing Plans to Modernize Interdiction Capabilities Could Save Billions, Department of Defense Inspector General, May 13, 1996.

Summary Report on the Audits of the Anti-Armor Weapon System and Associated Munitions, Department of Defense Inspector General, June 29, 1995.

Weapons Acquisition: Precision Guided Munitions in Inventory, Production, and Development (GAO/NSIAD-95-95, June 23, 1995).

Acquisition Objectives for Antisubmarine Munitions and Requirements for Shallow Water Oceanography, Department of Defense Inspector General, May 15, 1995.

Army's Processes for Determining Quantitative Requirements for Anti-Armor Systems and Munitions, Department of Defense Inspector General, March 29, 1995.

The Marine Corps' Process for Determining Quantitative Requirements for Anti-Armor Munitions for Ground Forces, Department of Defense Inspector General, Oct. 24, 1994.

The Navy's Process for Determining Quantitative Requirements for Anti-Armor Munitions, Department of Defense Inspector General, Oct. 11, 1994.

The Air Force's Process for Determining Quantitative Requirements for Anti-Armor Munitions, Department of Defense Inspector General, June 17, 1994.

Coordination of Quantitative Requirements for Anti-Armor Munitions, Department of Defense Inspector General, June 14, 1994.

In addition to those named above, Patricia Sari-Spear made key contributions to this report.
To determine the number and types of munitions needed, the military annually evaluates its munition requirements using a multiphase analytical process. The Department of Defense (DOD) is working to ensure that the requirements determination process yields accurate numbers and types of munitions needed to defeat threats specified in the Defense Planning Guidance. Although DOD has made progress and has identified specific areas still requiring attention, there is no clear plan with time frames for resolving key issues. Some of these efforts are only partially complete, and others are in the early stages of development. Specifically, target templates have not been completed and munitions effectiveness data have not been updated, nor have decisions been made on more detailed warfighting scenarios and the ranking of scenarios. Consequently, the reliability of the services' munitions requirements remains uncertain, which could affect munitions planning, programming, budgeting, and industrial production base decisions. Until these issues are resolved and a revised Capabilities-Based Munitions Requirements instruction is issued, the accuracy of the munitions requirements will remain uncertain.
DOD has substantially restructured the JSF program over the past 15 months, taking positive actions that should lead to more achievable and predictable outcomes. Restructuring has consequences—higher development costs, fewer aircraft in the near term, training delays, and extended times for testing and delivering capabilities to warfighters. Key restructuring changes include the following: The total system development cost estimate rose to $56.4 billion, and the schedule was extended to 2018. This represents a 26 percent increase in cost and a 5-year slip in schedule compared to the current approved program baseline established in 2007. Resources and time were added to development testing. Testing plans were made more robust by adding another development test aircraft and the use of several production aircraft; increasing the number of test flights by one-third; extending development testing to 2016; and reducing its overlap with initial operational testing. Near-term procurement quantities were reduced by 246 aircraft through 2016; the annual rate of increase in production was lowered; and the start of full-rate production moved to 2018, a 5-year slip from the current baseline. The military services were directed to reexamine their initial operational capability (IOC) requirements, the critical need dates by which the warfighter must have in place the first increment of operational forces available for combat. We expect that the Marine Corps' IOC will slip significantly from its current 2012 date and that the Air Force's and Navy's IOC dates will also slip from their current dates in 2016. To address technical problems and test deficiencies with the Marine Corps' STOVL variant, the department significantly scaled back its procurement quantities and directed a 2-year period for evaluating and engineering technical solutions to inform future decisions on this variant. DOD also "decoupled" STOVL testing from the other two variants so as not to delay them and to allow all three to proceed at their own speeds. The fiscal year 2012 defense budget reflects the financial effects of restructuring actions through 2016. Compared to estimates in the fiscal year 2010 future years defense program for the same 5-year period, the department increased development funding by $7.7 billion and decreased procurement funding by $8.4 billion, reflecting plans to buy fewer aircraft. Table 1 summarizes the revised funding requirements and annual quantities following the Secretary's reductions. Even after decreasing near-term quantities and lowering the annual rate of increase in production, JSF procurement still escalates significantly: annual funding levels more than double and quantities more than triple during this period. These numbers do not include the additional orders expected from the international partners. At the time of our review, DOD did not yet know the full impact of restructuring actions on future procurement funding requirements beyond this 5-year period. Cost analysts were still calculating the net effects of deferring the near-term procurement of 246 aircraft to future years and of lowering the annual rate of increase in procurement. After a Nunn-McCurdy breach of the critical cost growth threshold and DOD certification, the most recent milestone must be rescinded, the program must be restructured to address the cause of the breach, and the milestone decision authority must approve a new acquisition program baseline that reflects the certification.
The Secretary has not yet granted new milestone B approval for the JSF nor approved a new acquisition program baseline; officials expect to do so next month. We expect future funding requirements will be somewhat higher than currently projected. This could reduce the quantities considered affordable by the U.S. and allies, further driving up unit costs. Affordability—in terms of the investment costs to acquire the JSF, the continuing costs to operate and maintain it over the life cycle, and its impact on other defense programs—is a challenging issue. Including the funding added by the restructuring actions, system development cost estimates have increased 64 percent since program start (a rough arithmetic check of these growth figures follows the 2010 flight test discussion below). (Appendix III summarizes the increases in target prices and major cost drivers for the air system and primary engine development contracts.) Also, the estimated average unit procurement price for the JSF has about doubled since program start, and current forecasts indicate that life-cycle costs will be substantially higher than those of the legacy aircraft it replaces. Rising JSF costs erode buying power and may make it difficult for the U.S. and its allies to buy and sustain as many aircraft as planned. Going forward, the JSF will place unprecedented demands on funding in a period of more austere defense budgets, in which it will have to compete annually with other defense and nondefense priorities for the discretionary federal dollar. Figure 1 illustrates the substantial annual development and procurement funding requirements—almost $13 billion on average through program completion in 2035. This reflects the program's estimate at the time of the fiscal year 2012 budget submission. As discussed earlier, defense cost analysts are still computing the long-term procurement funding requirements reflecting the deferral of aircraft to future years. The JSF program established 12 clearly stated goals in testing, contracting, and manufacturing for completion in calendar year 2010. It had mixed success, achieving 6 goals and making varying degrees of progress on the other 6. For example, the program exceeded its goal for the number of development flight tests but did not deliver as many test and production aircraft as planned. Also, the program awarded its first fixed-price contract on its fourth lot of production aircraft but did not award the fixed-price engine contract in 2010 as planned. Table 2 summarizes JSF goals and accomplishments for 2010. Although still hampered by the late delivery of test aircraft to testing sites, the development flight test program significantly ramped up operations in 2010, accomplishing 3 times as many test flights as in the previous 3 years combined. The Air Force CTOL variant significantly exceeded the annual plan, while initial limited testing of the Navy's CV variant was judged satisfactory, below plan for the number and hours of flights but ahead on flight test points flown. The Marine Corps' STOVL, however, substantially underperformed in flight tests, experienced significant downtime for maintenance, and was challenged by several technical issues unique to this variant that could add to its weight and cost. The STOVL's problems were a major factor in the Secretary's decision to give the STOVL a 2-year period to solve engineering issues, assess impacts, and inform a future decision as to whether and how to proceed with this variant. Table 3 summarizes 2010 flight test results for each variant.
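As flagged above, the growth percentages cited in this statement imply approximate baselines that can be recovered with simple arithmetic. The figures below are an illustrative back-of-the-envelope check derived solely from the $56.4 billion estimate and the 26 percent and 64 percent growth figures cited in this statement; they are not separate cost estimates:

\[
\text{2007 baseline} \approx \frac{\$56.4\ \text{billion}}{1.26} \approx \$44.8\ \text{billion},
\qquad
\text{program-start estimate} \approx \frac{\$56.4\ \text{billion}}{1.64} \approx \$34.4\ \text{billion}
\]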
After completing 9 years of system development and 4 years of overlapping production activities, the JSF program has been slow to gain adequate knowledge to ensure that its design is stable and the manufacturing process is ready for greater levels of annual production. The JSF program still lags in achieving critical indicators of success expected from well-performing acquisition programs. Specifically, the program has not yet stabilized aircraft designs—engineering changes continue at higher than expected rates long after critical design reviews and well into procurement. Engineering drawings are still being released to the manufacturing floor, and more changes are expected as testing accelerates. Also, manufacturing cost increases and delays in delivering test and production aircraft indicate a need for substantial improvements in factory throughput and performance of the global supply chain. Engineering drawings released since design reviews and the number and rate of design changes exceed those planned at program outset and are not in line with best practices. Critical design reviews were completed on the three aircraft variants in 2006 and 2007 and the designs were declared mature, but the program continues to experience numerous changes. Since 2007, the program has produced 20,000 additional engineering drawings, a 50-percent increase in total drawings and about five times more than best practices suggest. In addition, changes to drawings have not yet decreased and leveled off as planned. Figure 2 tracks and compares monthly design changes and future forecasts against contractor plans made in 2007. The monthly rate in 2009 and 2010 was higher than expected, and the program now anticipates more changes over a longer period of time—about 10,000 more changes through January 2016. With most of development testing still ahead for the JSF, the risk and impact from required design changes are significant. In addition, emerging concerns about the STOVL lift fan and drive shaft, fatigue cracks in a ground test article, and stealth-related issues may drive additional and substantive design changes. Manufacturing and delivering test jets took much more time and money than planned. As in prior years, lingering manufacturing inefficiencies, including substantial out-of-station work and part shortages, continued to increase the labor needed to manufacture test aircraft. Although there have been improvements in these factors, final acceptance and delivery of test jets were still delayed. Total labor hours required to produce the test aircraft increased over time. The cumulative actual labor hours through 2010 to complete the 12 test aircraft exceeded the budgeted hours estimated in 2007 by more than 1.5 million hours, a 75 percent increase. Figure 3 depicts forecasted and actual labor hours for building test jets. DOD began procuring production jets in 2007 and has now ordered 58 aircraft on the first four low-rate initial production lots. The JSF program anticipated the delivery of 14 production aircraft through 2010, but none were delivered during that period. Delivery of the two production jets ordered in 2007 has been delayed several times since the contract was signed, and the first aircraft was delivered just this month. The prices on each of the first three cost-reimbursable production contracts have increased from the amounts negotiated at contract award, and the completion dates for delivering aircraft have been extended by more than 9 months on average.
We are encouraged by DOD’s award of a fixed-price incentive fee contract for lot 4 production and the prospects for the cost study to inform lot 5 negotiations, but we have not examined contract specifications. Accumulating a large backlog of jets on order but undelivered is not an efficient use of federal funds, tying up millions of dollars in obligations ahead of the ability of the manufacturing process to produce. The aircraft and engine manufacturers now have significantly more items in production flow compared to prior years and are making efforts to implement restructuring actions and recommendations from expert defense teams assembled to evaluate and improve production and supply operations. Eight of 20 key recommendations from the independent manufacturing review team have been implemented as of September 2010. Until improvements are fully implemented and demonstrated, the restructuring actions to reduce near term procurement quantities and establish a more achievable ramp rate are appropriate and will provide more time to fully mature manufacturing and supply processes and catch up with aircraft backlogs. Improving factory throughput and controlling costs—driving down labor and material costs and delivering on time— are essential for efficient manufacturing and timely delivery to the warfighter at the increased production rates planned for the future. Since the first flight in December 2006, only about 4 percent of JSF capabilities have been completely verified by flight tests, lab results, or both. The pace of flight testing accelerated significantly in 2010, but overall progress is still much below plans forecasted several years ago. Furthermore, only a small portion of the extensive network of ground test labs and simulation models are fully accredited to ensure the fidelity of results. Software development—essential for achieving about 80 percent of the JSF functionality—is significantly behind schedule as it enters its most challenging phase. Development flight testing was much more active in 2010 than prior years and had some notable successes, but cumulatively still lagged behind previous expectations. The continuing effects from late delivery of test aircraft and an inability to achieve the planned flying rates per aircraft substantially reduced the amount and pace of testing planned previously. Consequently, even though the flight test program accelerated its pace last year, the total number of flights accomplished during the first 4 years of the test program significantly lagged expectations when the program’s 2007 baseline was established. Figure 4 shows that the cumulative number of flights accomplished by the end of 2010 was only about one-fifth the numbers forecast by this time in the 2007 test plan. By the end of 2010, about 10 percent of more than 50,000 planned flight test points had been completed. The majority of the points were earned on airworthiness tests (basic airframe handling characteristics) and in ferrying the planes to test sites. Remaining test points include more complex and stringent requirements, such as mission systems, ship suitability, and weapons integration that have yet to be demonstrated. The JSF test program relies much more heavily than previous weapon systems on its modeling and simulation labs to test and verify aircraft design and subsystem performance. However, only 3 of 32 labs and models have been fully accredited to date. The program had planned to accredit 11 labs and models by now. 
Accreditation is essential to validate that the models accurately reflect aircraft performance, and it depends largely upon flight test data to verify lab results. Moreover, the ability to substitute ground testing for some flight testing is unproven. Contractor officials told us that early results are providing good correlation between ground and flight tests. Software providing essential JSF capability is not mature, and releases to the test program are behind schedule. Officials underestimated the time and effort needed to develop and integrate the software, substantially contributing to the program's overall cost and schedule problems and testing delays and requiring the retention of engineers for longer periods. Significant learning and development work remains before the program can demonstrate the mature software capabilities needed to meet warfighter requirements. The JSF software development effort is one of the largest and most complex in DOD history, providing functionality essential to capabilities such as sensor fusion, weapons and fire control, maintenance diagnostics, and propulsion. The JSF has about 8 times more on-board software lines of code than the F/A-18E/F Super Hornet and 4 times more than the F-22A Raptor. While good progress has been reported on the writing of code, total lines of code have grown by 40 percent since the preliminary design review and 13 percent since the critical design review. The amount of code needed will likely increase as integration and testing efforts intensify. A second software integration line added as part of the restructuring will improve capacity and output. Delays in developing, integrating, and releasing software to the test program have cascading effects that hamper flight tests, training, and lab accreditation. While progress is being made, a substantial amount of software work remains before the program can demonstrate full warfighting capability. The program released its second block, or increment, to flight test nearly 2 years later than the plan set in 2006, largely due to integration problems. Each of the remaining three blocks—providing full mission systems and warfighting capabilities—is now projected to slip more than 3 years compared to the 2006 plan. Figure 5 illustrates the actual and projected slips for each of the 5 software blocks in delivering software to the test program. Schedule delays require the retention of engineering staff for longer periods of time. Also, some capabilities have been moved to future blocks in attempts to meet schedule and mitigate risks. Uncertainties pertaining to critical technologies, including the helmet-mounted display and advanced data links, pose risks of more delays. The JSF program is at a critical juncture—9 years in development and 4 years in limited production—but still early in flight testing to verify aircraft design and performance. If effectively implemented and sustained, the restructuring DOD is conducting should place the JSF program on a firmer footing and lead to more achievable and predictable outcomes. However, restructuring comes with a price—higher development costs, fewer aircraft received in the near term, training delays, prolonged times for testing and delivering the capabilities required by the warfighter, and impacts on other defense programs and priorities. Reducing near-term procurement quantities lessens, but does not eliminate, the still substantial and risky concurrency of development and production.
Development and testing activities will now overlap 11 years of procurement. Flight testing and production activities are increasing, and contractors are improving supply and manufacturing processes, but deliveries are still lagging. Slowed deliveries have led to a growing backlog of jets on order but not delivered. This is not a good use of federal funds, obligating millions of dollars well before the manufacturing process can deliver aircraft. We agree with defense leadership that a renewed and sustained focus on affordability by contractors and the government is critical to moving this important program forward and enabling our military services and our allies to acquire and sustain JSF forces in needed quantities. Maintaining senior leadership's increased focus on program results, holding government and contractors accountable for improving performance, and bringing a more responsible management approach to the JSF to "live within its means" may help limit future cost growth and the consequences for other programs in the portfolio. The JSF acquisition demands an unprecedented share of DOD's future investment funding. The program's size and priority are such that its cost overruns and extended schedules must either be borne by funding cuts to other programs or else drive increases in the top line of defense spending; the latter may not be an option in a period of more austere budgets. Given the other priorities that DOD must address in a finite budget, JSF affordability is critical, and DOD must plan ahead to address and manage JSF challenges and risks in the future. Chairman Levin, Ranking Member McCain, and members of the Senate Armed Services Committee, this completes my prepared statement. I would be pleased to respond to any questions you may have. For further information on this statement, please contact Michael Sullivan at (202) 512-4841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement are Bruce Fairbairn, Charlie Shivers, Julie Hadley, Dr. W. Kendal Roberts, LeAnna Parkey, and Matt Lea.

The appendix material that follows traces the program's history at key milestones (development start in October 2001, the December 2003 replan, the April 2010 initial program baseline, and the restructure), pairing program events with our assessments and DOD's responses.

Start of system development and demonstration approved. Critical technologies needed for key aircraft performance elements were not mature. We recommended that the program delay the start of system development until critical technologies matured to acceptable levels. DOD did not delay the start of system development and demonstration, stating that technologies were at acceptable maturity levels and that risks would be managed in development.

The program underwent a replan to address higher than expected design weight, which added $7 billion and 18 months to the development schedule. We recommended that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy. DOD partially concurred but did not adjust its strategy, believing that its approach balanced cost, schedule, and technical risk.

The program set in motion a plan to enter production in 2007, shortly after first flight of the non-production-representative aircraft and with less than 1 percent of testing complete. We recommended that the program delay investing in production until flight testing showed that the JSF performed as expected. DOD partially concurred but did not delay the start of production, believing the risk level was appropriate.
Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp-up of production. Progress was being made, but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 aircraft a year until flying qualities were demonstrated. DOD did not concur, believing that the program had an acceptable level of concurrency and an appropriate acquisition strategy.

DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources. We believed the new plan actually increased risks, and we said DOD should revise it to address concerns about testing, the use of management reserves, and manufacturing. We also determined that the cost estimate was not reliable and that a new cost estimate and a schedule risk assessment were needed. DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with our recommendation, a new cost estimate was eventually prepared, but DOD did not perform the risk and uncertainty analysis that we believed was important to provide a range of potential outcomes.

The program increased its cost estimate and added a year to development but accelerated the production ramp-up. An independent DOD cost estimate (JET I) projected even higher costs and further delays. Because of development problems, we stated that moving forward with an accelerated procurement plan and the use of cost-reimbursement contracts was very risky, and we recommended that the program report on the risks and its mitigation strategy for this approach. DOD agreed to report its contracting strategy and plans to Congress. In response to our report recommendation, DOD subsequently agreed to do a schedule risk analysis but still had not done so as of February 2011.

In February 2010, the department announced a major restructuring of the JSF program, including reduced procurement and a planned move to fixed-price contracts. The program was restructured to reflect the findings of the recent independent cost team (JET II) and the independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased. Because of the additional costs and schedule delays, we reported that the program's ability to meet warfighter requirements on time was at risk. We recommended that the program complete a full comprehensive cost estimate and assess warfighter and IOC requirements, and we suggested that Congress require DOD to prepare a "system maturity matrix," a tool for tying annual procurement requests to demonstrated progress. DOD continued restructuring actions and announced plans to increase test resources and lower the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. As we projected, cost increases later resulted in a Nunn-McCurdy breach. The military services are currently reviewing capability requirements, as we recommended, and the department and Congress are working on a "system maturity matrix" tool to improve oversight and inform budget deliberations.

Projected development costs for the air system and primary engine comprise nearly 80 percent of total system development funding requirements. Both contracts have experienced significant price increases since contract awards—79 percent and 69 percent, respectively.
Figures 6 and 7 depict the price histories for these contracts and the reasons behind major price increases. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011. Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011. Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010. Joint Strike Fighter: Assessment of DOD's Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010. Tactical Aircraft: DOD's Ability to Meet Future Requirements Is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010. Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010. Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. Washington, D.C.: March 11, 2010. Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. Washington, D.C.: May 20, 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government's Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009. Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008. Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008. Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007. Tactical Aircraft: DOD's Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006.
Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
The F-35 Lightning II, also known as the Joint Strike Fighter (JSF), is the Department of Defense's (DOD) most costly and ambitious aircraft acquisition, seeking to simultaneously develop and field three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The JSF is critical for recapitalizing tactical air forces and will require a long-term commitment to very large annual funding outlays. The estimated total investment cost is currently about $385 billion to develop and procure 2,457 aircraft (an illustrative per-aircraft average appears at the end of this summary). Because of a history of relatively poor cost and schedule outcomes, defense leadership over the past 15 months has directed a comprehensive restructuring of the JSF program that is continuing. This testimony draws substantially from our extensive body of work on the JSF, including our April 2011 report, the latest annual review mandated in the National Defense Authorization Act for Fiscal Year 2010, Pub. L. No. 111-84, § 244 (2009). This testimony discusses (1) program cost and schedule changes and their implications for affordability; (2) progress made during 2010; (3) design and manufacturing maturity; and (4) test plans and progress. GAO's work included analyses of a wide range of program documents and interviews with defense and contractor officials. DOD continues to restructure the JSF program, taking positive, substantial actions that should lead to more achievable and predictable outcomes. Restructuring has consequences--higher up-front development costs, fewer aircraft in the near term, training delays, and extended times for testing and delivering capabilities to warfighters. Total development funding is now estimated at $56.4 billion, with completion in 2018, a 26 percent cost increase and a 5-year schedule slip from the current baseline. DOD also reduced procurement quantities by 246 aircraft through 2016 but has not calculated the net effects of restructuring on total procurement costs nor approved a new baseline. Affordability for the U.S. and partners is challenged by a near doubling in average unit prices since program start and higher estimated life-cycle costs. Going forward, the JSF requires unprecedented funding levels in a period of more austere defense budgets. The program had mixed success in 2010, achieving 6 of 12 major goals and progressing in varying degrees on the rest. Successes included the first flight of the carrier variant, award of a fixed-price aircraft procurement contract, and an accelerated pace in development flight tests that accomplished three times as many flights in 2010 as in the previous 3 years combined. However, the program did not deliver as many aircraft to test and training sites as planned and made only a partial release of software capabilities. The short takeoff and vertical landing (STOVL) variant had significant technical problems and deficient flight test performance. DOD directed a 2-year period to evaluate and engineer STOVL solutions. After more than 9 years in development and 4 in production, the JSF program has not fully demonstrated that the aircraft design is stable, manufacturing processes are mature, and the system is reliable. Engineering drawings are still being released to the manufacturing floor, and design changes continue at higher rates than desired. More changes are expected as testing accelerates. Test and production aircraft cost more and are taking longer to deliver than expected.
Manufacturers are improving operations and have implemented 8 of 20 recommendations from an expert panel, but they have not yet demonstrated the capacity to produce efficiently at higher production rates. Substantial improvements in factory throughput and the global supply chain are needed. Development testing is still early in demonstrating that the aircraft will work as intended and meet warfighter requirements. About 4 percent of JSF capabilities have been completely verified by flight tests, lab results, or both. Only 3 of the extensive network of 32 ground test labs and simulation models are fully accredited to ensure the fidelity of results. Software development--essential for achieving about 80 percent of the JSF functionality--is significantly behind schedule as it enters its most challenging phase.
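As referenced above, the totals in this summary support a rough per-aircraft average. This is an illustrative calculation derived only from the $385 billion investment estimate and the 2,457-aircraft quantity stated in the summary; it is not one of the program's official unit-cost metrics, which are defined differently and computed separately for development and procurement:

\[
\frac{\$385\ \text{billion}}{2{,}457\ \text{aircraft}} \approx \$157\ \text{million per aircraft}
\]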
Since the end of the Cold War, the United States has dramatically decreased its overseas basing of military forces. The Air Force's presence in Europe, for example, shrank from 25 bases with 850 aircraft in 1990 to just 6 bases and 174 aircraft in 1999. In preparation for Operation Allied Force, the Air Force augmented its supply of aircraft in the European theater to 207 aircraft at 10 bases in 5 European countries (see fig. 1). By the end of the operation, just 78 days later, NATO had assembled over 1,000 aircraft in the region. Of these, the United States provided over 700, and other NATO allies contributed the remainder. Of the more than 700 U.S. aircraft, over 500 fixed-wing aircraft were deployed at 22 land bases in 8 countries (see fig. 2). Seventy percent of the U.S. land-based aircraft belonged to the Air Force, and 30 percent to the Navy and the Marine Corps. These numbers exclude all helicopters, including the Army Apache helicopters that were deployed to Albania. According to an after-action report by USAFE, in terms of size and resource allocations, Operation Allied Force was the equivalent of a major theater war for the U.S. Air Force. Arranging for combat aircraft basing involves much planning. This planning generally includes working with the host countries and U.S. embassies to obtain permission to base aircraft in specific locations; conducting extensive site visits to determine what improvements must be made to foreign airfields and arranging for the improvements to be completed; ensuring that U.S. aircraft have adequate ramp space, hangars, and fuel; and obtaining all the logistics services necessary to sustain and house the personnel who will be deployed at foreign airfields. Because the United States no longer has the large number of established bases it had during the Cold War, experience has shown that it is in the best interest of the United States to work out as many of these details in advance as possible. According to USAFE officials, Status of Forces Agreements with many countries in Europe are very general and provide adequate protections and privileges for official visits, small unit activities, and most short-term exercises and operations. Supplemental agreements, which may be negotiated by DOD in consultation with the Department of State, are useful in addressing the more detailed protections and privileges required for operations approaching the scale of Operation Allied Force. According to EUCOM officials, there was no prepared plan that could be used for executing Operation Allied Force because it was a combination of peacetime and combat operations. At the time of the operation, DOD had detailed war plans for joint military operations written in advance only for two specific major theater wars, neither of which included the European theater. NATO had detailed plans only for what it considered wars in defense of its member partners or for peacetime operations. Thus, the Air Force did not have the benefit of specific advance determinations of where it could place its combat aircraft quickly and efficiently for Operation Allied Force. The lack of a plan for such operations resulted in ad hoc deployments. Developing detailed plans for every possible contingency throughout Europe would be impractical, but both EUCOM and NATO now recognize that better planning is needed. Because the conflict surrounding Kosovo evolved rapidly, Operation Allied Force required not only that plans be quickly developed but also that aircraft basing decisions be repeatedly revised.
In fact, the plan for conducting the air campaign was changed 70 times during the 78-day operation, according to EUCOM officials. Each time a change was made, adjustments to basing decisions were also necessary. According to a USAFE after-action report, these constant changes in plans prevented those making the initial aircraft deployments from taking into account what deployments of other aircraft might be needed later. In some cases, aircraft units were deployed only to be moved back to where they had come from. For example, early in the conflict, units from the 48th Fighter Wing, at Lakenheath, England, were deployed to Cervia, Italy, but later on, as additional forces were added, these units were sent back to Lakenheath. Similarly, the 52nd Fighter Wing, located at Spangdahlem, Germany, was initially deployed to Aviano Air Base, Italy, until that base filled to capacity and the wing was returned to Spangdahlem. The lack of a stable plan for combat aircraft basing also affected how airfield space and supplies were provided to U.S. forces deployed during the operation. For example, according to an after-action report by USAFE civil engineers, the lack of a combat aircraft basing plan resulted in the forces first on the ground simply taking the space they needed on a first-come, first-served basis—without thought given to land use, safety, utilities access, or airfield obstructions. An after-action report by USAFE transportation officials said that they had to dramatically tailor the packages of equipment and supplies sent to support troops deployed to combat aircraft bases. This tailoring was necessary because the packages had been planned for operations the size of a major theater war and were not structured into blocks that could be built up as the conflict grew. Finally, details had to be worked out after the conflict began regarding how equipment and supplies destined for aircraft bases could be transported through the countries where U.S. troops were deployed. Exhaustive plans cannot be developed for every possible future contingency. However, EUCOM officials agree that more detailed planning should be done in advance of conflicts such as Operation Allied Force. At the time of our visit, EUCOM was planning to revise a generic plan for operations in support of NATO but said that completing this plan could take 2 years. EUCOM was not yet in a position to state how this new plan would solve problems like the ones encountered during the conflict in Kosovo. The goal is for EUCOM to have a plan that it can use for a future Kosovo-type conflict. NATO has also recognized the need for more planning for future operations like Operation Allied Force and has issued a new strategic concept. At its 50th Anniversary Summit in Washington, D.C., in April 1999, while the conflict was ongoing, NATO addressed the likelihood that future Alliance military operations would be smaller in scale than those that were the basis for Alliance planning during the Cold War. According to DOD's after-action report, NATO's new strategic concept reflects the realistic view that the U.S. role in future NATO operations is likely to fall somewhere between full-scale combat operations in defense of the Alliance and peace support activities. Despite EUCOM's role as the U.S. focal point in the European theater, EUCOM officials told us that they had neither the resources nor the responsibility to work out detailed combat aircraft basing arrangements for the individual services.
Also, during Operation Allied Force, no other organization was tasked with responsibility for directing and coordinating the combat aircraft basing for all U.S. military services and the allies. As a result, the services, for the most part, planned their own deployments and worked out individual arrangements with the host countries. While the services did their best to quickly plan all the details necessary to base their aircraft, the lack of a focal point to coordinate the plans resulted in at least some duplication of effort, in last-minute work that could have been done before the conflict began, and in communications problems among U.S. services and agencies and NATO allies concerning their individual plans for basing aircraft. The Air Force has recognized the need to do more preparatory work, such as airfield site surveys, before future conflicts begin. To address this need, it plans to develop a database of airfield information. In countries where the United States has a permanent presence, DOD and the Department of State have generally negotiated agreements with the host countries stipulating which bases may be used in what circumstances. However, during Operation Allied Force, the United States did not have such agreements worked out in advance with many of the countries involved. EUCOM officials maintained that the services should arrange their own aircraft basing because only they knew their detailed basing needs. However, joint doctrine requires that EUCOM's Commander review the requirements of the various service component commands and establish priorities through the deliberate planning process to use supplies, facilities, mobility assets, and personnel effectively. Such coordination should prevent the unnecessary duplication of facilities and overlapping of functions among the services and should include establishing bases and coordinating other logistics requirements. Absent coordination by EUCOM, service officials expressed confusion during the operation about how basing arrangements should be made. A "huge challenge" in making basing arrangements, according to USAFE officials, was first determining the chain of command for requesting the use of airfields from host nations. The services did not always know how or when to coordinate with other services, EUCOM, or allied countries. The services also found that each U.S. request for aircraft access was treated differently by each nation. While most countries accepted a U.S. request at the bilateral level, some countries asked that a formal request originate from NATO headquarters. Further confusion arose as countries received requests from individual service components for basing arrangements. Section 112b of title 1 of the United States Code requires that Department of State personnel be kept informed of all agreements being made with host countries. Cases arose, however, in which host nation and U.S. Department of State personnel were not aware of what individual service components were doing. For example: In one case, U.S. aircraft flying from one allied country to another had to turn around in midair because they had not been approved for landing at their destination. In another case, host country officials complained to the U.S. embassy of incessant coordination telephone calls made by U.S. servicemembers. In a third case, confusion arose because Air Force personnel were trying to arrange for aircraft basing just as U.S. State Department personnel were trying to negotiate with the host country themselves.
A fourth case involved an Air Force deployment of fighter aircraft to an allied base that was almost underway before the Air Force learned that adequate space was not available because the ally was not planning to move its own aircraft out. The services were expected to do their own site surveys of possible airfield locations to determine where units could base their aircraft. No one organization maintained a database of combat aircraft bases that the services might be able to use. According to USAFE officials, there was relatively little information on many of the airfields within EUCOM's area of responsibility. Some information was available from the U.S. National Imagery and Mapping Agency, but much of it was obsolete. As the major supplier of aircraft, the Air Force consequently took the lead in doing these site surveys. The process for site surveys entailed determining what information needed to be collected and who should be on the survey teams. After the operation had begun, between April 8 and May 24, 1999, USAFE used over 200 people to form teams that traveled to potential sites and completed 27 site surveys. The USAFE group that took the lead in doing these site surveys said in its after-action report that host nation support was largely undefined and that, as a result, the teams had to operate under numerous constraints. For example, in anticipation of going into the host countries, site survey teams had to first obtain host country approval for their visits. Also, host countries usually allowed teams only one day to survey airfield sites. In addition, according to USAFE officials, many of the personnel on the teams had never before participated in a site survey. In addition to the efforts of the USAFE teams to do last-minute site surveys, the Marine Corps did its own. For example, one Marine Corps commanding officer who was planning his unit's deployment to Operation Allied Force formed his own nine-member team to do site surveys of two locations in Hungary. His teams also had only one day to do each site survey, and the commander made his own arrangements with embassy staff to prepare for his unit's deployment. Although this commander told us that he did have access to USAFE's site surveys on these airfields, he found that he still needed to perform a second survey because the Air Force had not gathered all the needed information. Servicemembers throughout the military services worked long and hard to overcome the obstacles cited in this report and to achieve U.S. and NATO objectives in Operation Allied Force. Nevertheless, in response to the aircraft basing problems encountered during Operation Allied Force, USAFE officials realized that they needed a better basing strategy. During the conflict, they found that their existing basing structure had not been methodically planned in a way that tied it to probable threats. They decided to review where aircraft should be based in the European theater in anticipation of future threats. As part of this effort, USAFE plans to collect information on each potential air base, including a site survey, base support plans, and host nation agreements. USAFE also plans to determine what locations could be used as operating bases in the event of future contingency operations.
At the time of our visit to Europe, USAFE officials had just briefed EUCOM officials on their proposal for developing a basing strategy, and EUCOM officials had decided to form a working group to develop a similar proposal. According to EUCOM's planned approach, dated November 2000, EUCOM hopes to investigate the leasing of specific facilities, airfields, and equipment for future contingencies, among other things, to establish a theater basing strategy. According to Air Force headquarters officials, it took 17 days to complete each site survey, from its initiation to the host country's approval to use the site. The Air Force believes that these site surveys took far too long to complete. It has therefore undertaken an effort to build the "Employment Knowledge Base," a database of site surveys that can be accessed when planning a deployment. At present, this is an Air Force-only initiative, though the Marine Corps has expressed interest in it. Part I of a "Survey Tool for Employment Planning" has been developed by a contractor and was fielded in April 2000 to be used as a checklist for persons conducting site surveys. The site survey team can input data into the checklist using a laptop computer. The goal is to have part II of the survey tool completed by October 2001. Efforts to update the Employment Knowledge Base from field locations have not yet been funded by the Air Force. The lack of supplemental international agreements during Operation Allied Force made the United States vulnerable to hastily made ad hoc arrangements with some host countries. A USAFE official believes that the United States could have paid excessive prices for supplies and services purchased "in the heat of battle" during Operation Allied Force because the United States had not negotiated supplemental agreements with the countries in Europe where it based combat aircraft and purchased logistical support. Supplemental agreements addressing basing and logistics details were not in effect with some host nations during Operation Allied Force. Such agreements between the United States and host countries often contain provisions stipulating that the United States will not be charged for airport landing, parking, or overflight. These agreements also often contain a provision stating that U.S. forces will be charged the same rates for logistics supplies and services as the foreign nation's own military forces are charged. While we did not attempt to independently determine whether any costs charged to the United States during Operation Allied Force were excessive, a USAFE official cited one case in which U.S. aircraft were already en route when an Air Force sergeant paid a NATO member's airport authority $1.5 million for the use of the destination airport. If a supplemental agreement allowing the United States the use of this airfield had been in place prior to Operation Allied Force, the United States would not have had to pay this fee at all if the airfield was government owned, and any other fees for logistics supplies would have been the same as those charged the host nation's own military forces. The DOD official who is responsible for managing DOD's supplemental agreements worldwide told us that it is not unusual for countries with which the United States does not have agreements to charge airfield landing and takeoff fees. He cited a case in which a U.S. airplane was not allowed to take off until the United States paid landing fees.
This official said that supplemental agreements also typically cover such issues as exemptions from payment for goods and services at rates higher than those charged a country's own armed forces. While the United States generally did not use Partnership for Peace countries for combat aircraft basing, some of these countries provided logistics services for allied forces and may be even more important in future conflicts. Most Partnership for Peace countries had only a very general Status of Forces Agreement with the United States. According to an after-action report written by USAFE's Judge Advocate staff, the Partnership for Peace Status of Forces Agreement does not address the detailed matters required for sustained operations that can be provided in supplemental, country-specific agreements. The agreement provides adequate protections and privileges only for official visits, small unit activities, and most short-term military exercises and operations. The agreement does not include the supplemental protections and privileges required for operations approaching the scale of Operation Allied Force, particularly as they relate to the following issues: the status of U.S. contractors and provisions for their logistical support; the use of U.S. contracting procedures for U.S.-funded procurements; exemption from value-added and similar taxes; the automatic waiver of host country criminal jurisdiction over U.S. personnel; exemption from landing fees, navigation fees, and overflight charges; expedited customs inspection procedures for U.S. forces' property; the right to operate post exchanges, banks, post offices, commissaries, and morale, welfare, and recreation activities; responsibility for the perimeter defense of installations and facilities used by U.S. personnel; payment of residual value for improvements to facilities financed by the United States; and privately owned vehicles' licensing and registration. Because of the lack of supplemental agreements establishing arrangements for the purchase of goods and services, U.S. military components used the Acquisition and Cross Servicing Agreement Program during Operation Allied Force. This program allows military-to-military exchanges of logistics services and supplies for cash, equal value exchanges, or payment in kind. USAFE officials stressed the value of the program in that it allowed deployed commanders to obtain the necessary host nation support. The program was successfully used to provide parts and services to allies and to the United States. While cross-servicing agreements were critical for U.S. forces to obtain needed host nation services, USAFE officials believe that the hasty conclusion of such agreements by many different individuals resulted in many inconsistencies among them. According to the USAFE Judge Advocate's report on Operation Allied Force, as a result of the absence of supplemental agreements with Partnership for Peace nations, some individual services' agreements with host nation individuals and companies were favorable to the United States, but some were not. Often, the terms and duration of these agreements differed from one country to another. According to a DOD official, in 1995 the State Department granted DOD the authority to negotiate supplemental agreements with Partnership for Peace countries that would cover issues not included in their Status of Forces Agreements.
At the time of Operation Allied Force, DOD had sent out model agreements to various Partnership for Peace countries as the beginnings of negotiations. According to one DOD official, negotiations have taken so long because of limited staff and other priorities. For USAFE officials, Operation Allied Force highlighted the dire need for in-place status and stationing arrangements for immediate use during future military operations in countries where the United States has no permanent presence. Recent history demonstrates that air campaigns are likely to be significant components of the future conflicts the United States can anticipate. While we agree that the Commander of the U.S. European Command cannot prepare detailed plans that cover the specifics of every possible contingency, the kind of ad hoc basing of combat forces that occurred during Operation Allied Force demonstrates that the lack of at least some planning has the potential to result in costly and unnecessary problems and inefficiencies, as was the experience in this operation. Also, because the European Command did not coordinate the movement of all service and host nation participants, confusion arose over who was planning deployments, where airfields were available for basing in the region, and how arrangements should be made. Finally, without supplemental agreements with host nations from which the United States is likely to request aircraft basing and logistics services during a future contingency, the United States will probably again be vulnerable to paying excessive costs for these fees and services. We recommend that the Secretary of Defense direct the Commander of the European Command to develop the most detailed combat aircraft basing plans possible for future conflicts, like Operation Allied Force, that do not fit into the category of a major theater war or a peacekeeping operation. These plans should consider existing NATO plans and entail the appropriate coordination between DOD and the Department of State. They should also address the following issues, as discussed in our report: development of a strategy for basing aircraft that is tied to probable future threats; coordination of all service and host nation arrangements for basing aircraft during contingencies; and maintenance of a database of complete information on available airfields in EUCOM's area of responsibility, with this information provided to all the services as needed. To ensure that U.S. forces have access to airfields and bases from which they will need to conduct operations in likely future conflicts, we recommend that the Secretary of Defense direct EUCOM's Commander to work with the Department of State to finalize as many supplemental agreements with host nations as possible. These supplemental agreements should include provisions exempting the United States from being charged overflight, airfield access, and aircraft landing and parking fees. They should also include a provision stating that U.S. troops be charged rates for logistics supplies that are comparable to the rates charged the host nation's own armed forces. In written comments on a draft of this report, DOD agreed with the contents of the report and concurred with the recommendations. DOD stated that future aircraft basing plans need to consider operational and political issues that must be overcome with each host nation. Also, host nation agreements should consider existing NATO basing plans.
Technical changes were made as appropriate throughout the report. The comments are presented and evaluated in appendix I. To determine what plans were in place for deciding where and how to deploy combat aircraft for Operation Allied Force and how combat aircraft basing decisions were coordinated among the services and allied nations, we visited the U.S. European Command in Stuttgart, Germany, and interviewed officials who had participated in the operation. We also visited the U.S. Air Forces, Europe, at Ramstein Air Base, Germany, and interviewed officials in the Offices of Strategy and Deliberate Plans/Engagements, Plans and Doctrine, Logistics, Civil Engineering, Financial Management, and the Air Operations Squadron Plans Division. In addition, we reviewed documentation on Operation Allied Force planning and coordination efforts at these locations. To determine whether the United States had the necessary international agreements in force to enable it to quickly execute plans for Operation Allied Force, we interviewed officials in the Operations Law Division of the Judge Advocate General's Office at the U.S. Air Forces, Europe. We also interviewed officials in the Office of Foreign Military Affairs, Assistant Secretary of Defense (International Security Affairs). To discuss issues involving who may be granted the authority to negotiate supplemental international agreements, we interviewed officials in the Office of Treaty Affairs in the U.S. Department of State. We also reviewed documentation on supplemental international agreements. We conducted our review between September 2000 and June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. Westphal, Acting Secretary of the Army; the Honorable Robert B. Pirie, Jr., Acting Secretary of the Navy; the Honorable Lawrence J. Delaney, Acting Secretary of the Air Force; General James L. Jones, Commandant of the Marine Corps; the Honorable Colin L. Powell, Secretary of State; and the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (757) 552-8111 if you or your staff have any questions concerning this report. Key staff who contributed to this report were William Cawood, Donna Rogers, Beverly Schladt, and Nancy Ragsdale. The following are GAO's comments on the Department of Defense's (DOD) letter dated May 10, 2001. 1. We were aware that the military services used Acquisition and Cross Servicing Agreements during Operation Allied Force to purchase host nation goods and services, and we discuss this usage in the body of our report (see p. 14). However, as we state there, U.S. Air Forces in Europe officials told us that the use of such agreements made hastily during Operation Allied Force resulted in inconsistencies in agreements with different countries, some of which were favorable to the United States and some of which were not. We continue to believe that more uniformity and advanced planning for purchasing such items and services could result in lower costs to the United States in future conflicts. 2. We agree that issues of combat basing rights are politically sensitive. We also agree that such arrangements cannot be made on a purely cost savings basis. We did not state in our draft report that cost should be the only consideration, nor do we here. 3. We agree that U.S.
European Command’s (EUCOM) combat basing plans should consider existing North Atlantic Treaty Organization (NATO) basing plans and have included this wording in our recommendation (see p. 15). 4. We have added language to our recommendation stating that, when making combat aircraft basing plans, including conducting site surveys, DOD should appropriately coordinate with the Department of State (see p. 15). 5. As noted in our draft report, because Operation Allied Force did not fit into the definition of conflicts for which NATO had prepared combat plans, NATO’s structure did not apply to Operation Allied Force, and the United States prepared plans for its own participation in the operation after the conflict arose. 6. While we did not evaluate aircraft rebasing in this report, we recognize that a certain amount of rebasing will occur during any conflict. We continue to believe, however, that more advanced planning could have minimized such rebasing during Operation Allied Force. 7. We expect that, as part of its effort to create a database of available airfields, EUCOM will make use of already available resources to minimize or eliminate any duplication of effort. 8. Our recommendation states that the Secretary of Defense should direct EUCOM’s Commander to work with the Department of State to finalize as many supplemental agreements as possible. With the Department of State’s oversight, DOD can ensure that the scope of possible agreements is weighed against their expected cost and any operational security implications. 9. This statement is added in a footnote on p. 5. 10. This statement is added in a footnote on p. 3.
Following the failure of peace talks and escalating violence against ethnic Albanians in Kosovo, the United States provided military support to North Atlantic Treaty Organization (NATO) combat operations against Yugoslavia in March 1999. This report reviews how well the United States was prepared for basing its combat aircraft during this operation, called Operation Allied Force. Specifically, GAO determines (1) whether plans were in place to determine where and how to deploy combat aircraft for an operation like Allied Force, (2) how combat aircraft basing decisions were coordinated among the services and allied nations, and (3) whether the United States had the necessary international agreements in place to enable it to quickly execute plans for such an operation. GAO found that the United States had no specific and detailed advance plans that could be used to determine where and how to deploy its combat aircraft during Operation Allied Force because it was a combination of peacetime and combat operations. Overall plans for operations in defense of NATO members did not apply to this conflict. Although part of the U.S. European Command's mission is to plan for NATO conflicts, the Command had no prepared plan that could be applied to the conflict in Kosovo. Neither the U.S. European Command nor any U.S. military service coordinated combat aircraft basing decisions for all the U.S. service components and for all allies. The U.S. European Command serves as the focal point for American support to NATO, but the services generally planned their own deployments. Finally, the United States had general agreements with most countries involved in Operation Allied Force to cover the legal status and protections of U.S. citizens. However, the United States did not have more specific agreements with many countries on such issues as which host countries would provide what airfield access and what rates would be charged for the logistics services provided.
Small employers with fewer than 50 employees represent more than three-fourths of all U.S. private establishments and employ nearly one-third of the private sector workforce. (See fig. 1.) However, small employers are less likely than large employers to offer health insurance to their employees. In 1998, whereas 96 percent of employers with 50 or more employees offered health insurance, 71 percent of employers with 10 to 49 employees provided coverage, and only about 36 percent of employers with fewer than 10 workers offered health benefits to their employees. The primary reason small employers cited for not offering coverage was cost. During the early 1990s, concern about small employers' access to health insurance and the affordability of providing coverage to their employees led most states to adopt small group insurance market reforms. While the extent and scope of reforms varied across states, most states included guarantees that small employers seeking coverage would be accepted for at least certain plans offered by insurers (known as guaranteed issue); guarantees that small employers could renew health insurance even if they had high claims, except under certain circumstances such as the failure to pay premiums (guaranteed renewal); limits on how long insurers could deny coverage for medical conditions individuals had at the time they obtained coverage (limits on preexisting condition exclusions); and limits on the variation allowed in premiums. States regulate insurance products sold within their borders, but their laws do not affect all employers. The Employee Retirement Income Security Act of 1974 (ERISA) generally preempts states from directly regulating employer-sponsored health plans. Thus, employers that assume the risk for, or "self-fund," their employees' health benefits are largely exempt from state regulation, including premium taxes and mandated benefits. Data from the 1998 Medical Expenditure Panel Survey (MEPS) show that approximately 52 percent of large employers self-funded at least one health plan, compared with 11 percent of small employers. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) established minimum federal standards that further enhanced state efforts to ensure access to health insurance for small employers. While many of the state reforms already met or exceeded the HIPAA minimums, HIPAA ensured consistency in the definition of small employers (those with 2 to 50 employees) and established minimum standards regarding guaranteed issue, guaranteed renewal, and limits on preexisting conditions applying to both insured and self-funded health plans. States could exceed these minimum standards in their own statutes and regulations. While HIPAA helped ensure that small employers would have access to insurance, it did not impose any restrictions on premiums or otherwise address the affordability of insurance for small employers. Average annual health insurance premiums—the total amount paid by both employers and employees—were nearly the same for small and large employers in 1998. Small employers' premiums were slightly higher than large employers' for single coverage and slightly lower for family coverage. However, while small and large employers paid similar premiums, small employers' coverage was generally less generous—their plans covered slightly fewer benefits and required those insured to pay higher out-of-pocket costs.
Furthermore, many small employers would likely have had to pay higher-than-average premiums if they had provided coverage to their uninsured workers and their dependents, including those who were offered coverage but declined and those who were not offered coverage. This is because more of these uninsured individuals reported not being in excellent health than did those with insurance, and most states allow insurers to charge small employers higher premiums to cover individuals in poorer health. Overall, average health insurance premiums for small and large employers varied only slightly. The total amount paid by the employer and employees for single coverage was on average slightly higher for small employers than for larger ones in 1998. Specifically, the average annual single premium was 4 percent higher ($83 more) for all small employers and 8 percent higher ($182 more) for the smallest of these—those with fewer than 10 employees. Average annual family premiums, however, were lower for small employers compared to large employers—about 3 percent lower for all small employers ($180 less) and 7 percent lower for the smallest of these ($357 less). (See table 1.) Within these average premiums, however, employers may find a considerable range of available premiums. For example, analysis of 1996 MEPS data indicates that annual single premiums at the smallest employers ranged from $995 to $4,540 per employee in 1996; that is, the highest premium was about 456 percent of the lowest. In comparison, the corresponding figures for small and large employers overall were about 369 percent and about 306 percent, respectively. While small and large employers generally paid, on average, about the same amount for health insurance coverage, small employers received less value for their premium dollars for several reasons. Small employers generally purchased coverage with higher cost-sharing requirements for their employees compared to larger employers. Also, small employers tended to receive slightly fewer covered benefits for the same premiums paid by large employers. To make coverage more affordable, small employers tend to purchase plans that require higher deductibles and higher maximum annual out-of-pocket costs for their employees. Average annual deductibles in preferred provider organizations—the plan type most often purchased by workers covered by small employers—are more than $100 higher for employers with 3 to 50 employees than for larger employers. A higher deductible typically translates into a lower premium. For example, actuarial experts estimate that a plan with an annual $200 deductible would reduce claims costs by about $65 per year compared to the same plan with a $100 deductible. Further, workers covered through small employers typically are potentially liable for higher out-of-pocket costs than those employed by larger employers. Specifically, about 35 percent of workers covered through small employers have maximum annual out-of-pocket limits of $2,500 or more, compared to about 20 percent of workers covered through large employers. In addition, workers covered through small employers are less likely to receive certain benefits. As shown in figure 2, while workers covered through small employers were nearly as likely as those covered through large employers to have coverage for prescription drugs and adult physicals, they were slightly less likely to have coverage for other services such as prenatal care and mental health.
The largest differentials between small and large employers—as much as 15 percentage points—were for benefits less likely to be covered by employers of any size, such as chiropractic care, oral contraceptives, and acupuncture. Individuals covered by small employers' health care plans had, on average, health characteristics that were similar to those insured through large employers. Table 2 shows that selected demographic and self-reported health characteristics of individuals insured through small and large employers did not vary significantly. Specifically, whether they were insured through small or large employers, about the same percentages of individuals reported excellent physical and mental health. Moreover, nearly the same percentage of those insured through the smallest employers—those with fewer than 10 employees—reported being in excellent health (39.5 percent). Compared to individuals insured through small employers, uninsured workers at these employers and their dependents appear to be less healthy. Therefore, they could represent greater risks to insurers if small employers provided coverage to the uninsured. As shown in table 2, individuals insured through small employers had similar self-reported health characteristics when compared to those insured through large employers. However, our analysis of 1996 MEPS data shows that uninsured workers and their dependents at small employers considered themselves to be less healthy than their insured counterparts. This difference was particularly evident for workers from age 30 to 64 years and their dependents. The MEPS data showed that a smaller share of uninsured individuals in this age group reported being in excellent physical health—about 27 percent—compared to about 36 percent for insured people of similar ages. In addition, a smaller percentage of uninsured individuals reported having excellent mental health. (See table 3.) Unless prevented from doing so by state law, insurers often screen small employers for health and other risk factors associated with their workers when setting health insurance premiums and charge more for higher-risk groups. For example, we obtained premium quotes for hypothetical small employers in a selected city in each of four large states. Table 4 shows that in Austin, Texas, the relatively high-risk small employer group would pay anywhere from 82 percent to 290 percent more than a relatively low-risk group. In the other three locations, premium quotes were 29 percent to 132 percent higher for the relatively high-risk small employer group. Small employers with workers considered to be higher risk typically would have had to pay more for health insurance than healthier groups for the same coverage. Insurers' administrative costs and expenses (other than benefits) are higher for small employers than for large employers. As a result, insurers spend a smaller share of small employers' premium dollars on benefits and more on administrative and other expenses than they do for large employers. For smaller employers, administrative costs such as marketing and billing are spread over fewer people. Furthermore, because large employers typically assume the risk for their employee health benefits by self-funding rather than purchasing insurance, other expenses, such as premium taxes, can be avoided.
Insurers also report that the potential for adverse risk selection—or purchasing of insurance by those with relatively high health care needs—is greater with the smallest groups, and to remain financially viable, insurers generally take steps to avoid covering a disproportionate share of these costly groups. Therefore, insurers may attempt to mitigate the difficulty of predicting the risk of a small group compared to a large group by reviewing the medical history of individuals in the group—called medical underwriting—or adding a premium surcharge to better ensure that they can cover unexpectedly large health care costs. Our analysis of existing data indicates that, overall, insurers' administration costs and expenses, other than benefits, typically account for about 20 percent to 25 percent of small employers' premiums compared to about 10 percent of large employers' premiums. These expenses can range from around 5 percent to 30 percent of the premium dollar, depending on the size of the employer, type of plan, and insurer. The smaller the group, the larger the share of the premium that goes toward paying for expenses other than benefits. This is due in part to the fact that small employers have fewer individuals over whom to spread expenses and that certain costs are lower for, or can be avoided entirely by, large employers. Insurers' administrative activities, such as marketing and billing, increase small employers' premiums more because, with fewer people to share the costs, they cannot obtain the financial savings afforded to larger groups. For example, if it costs an insurer $5 a month to generate a bill for each employer, this cost spread over a group of five people would increase each person's monthly premium by $1. In contrast, for a group with 100 people this same activity would increase the monthly premium for each person by only 5 cents. In addition, some expenses associated with insurance for most small employers may be avoided or reduced for large employers who assume the financial risk for their employees' health coverage or perform some administrative functions internally. By self-funding, large employers avoid expenses such as state premium taxes assessed on insurance sold in the state, which typically represent about 1 percent to 3 percent of health insurance premiums. In addition, large employers may perform some administrative activities, such as employee enrollment and education, which insurers or agents perform for, and therefore charge, small employers. Large employers typically purchase insurance with the assistance of benefits consultants, whom they pay a fixed hourly or lump sum fee. A recent survey by Kaiser/HRET estimated that the average administrative cost borne internally by large employers—those with 200 or more employees—for providing health benefits is approximately $250 per covered worker. This would increase the cost per covered employee by approximately 6 percent. Small employers, on the other hand, typically purchase insurance through agents whose fees can account for as much as 8 percent to 10 percent of the insurance premium. Where permitted by state law, insurers may also incur additional expenses assessing small employers' risks and protecting themselves against the greater uncertainty in risk associated with these groups. Insurers are more concerned about the increased financial risk of covering people through small employers for three reasons.
First, insurers are unable to predict risk as accurately for small employers as they are for large employers. Estimates of a group's future expenses that are based on prior health care use tend to be more accurate the larger the group is. Actuaries indicate that until a group approaches about 500 people, its prior health care use and costs are not reliable enough to be the only data used in setting premiums. Second, insurers report that small employers, especially those with two or three employees, may be costly because they are more likely to seek coverage only when their employees anticipate needing it, a phenomenon known as adverse selection. Third, since smaller groups generate smaller amounts of premium revenue, insurers may be less willing to assume the potential risk of one individual incurring a catastrophic accident or illness that could elevate costs significantly and generate expenses exceeding the premium revenue contributed by the group as a whole. To protect against these risks, insurers may review the medical history of each individual in the small group and set the group's premiums accordingly—a practice known as medical underwriting. The degree to which medical underwriting is done depends on the size of the group. Very small groups are often screened most extensively, with each person required to provide a detailed medical history. As the group size increases, approaching 20 individuals or more, fewer questions may be asked. Such individual-level assessments are not typically done for large employers, so insurers incur this cost only when selling coverage to small employers. Furthermore, some insurers may add a surcharge of 1 percent to 5 percent of small employers' premiums to increase their financial reserves—a pool of money they invest to help ensure that there will be sufficient funds should an unanticipated large expense occur. This surcharge tends to be higher when the insurer is less certain of the risk of the group and may be imposed in lieu of, or in addition to, medical underwriting. However, not all states permit these activities, and not all insurers underwrite small groups or add a risk surcharge. Most states have enacted laws—generally referred to as state rating reforms—that restrict how much small employers' health insurance premiums can vary. How these restrictions affect premiums depends on the latitude each state allows insurers when setting these premiums. Nearly all states have restricted insurers' ability to vary small employers' premiums to some degree. Tight restrictions allow little or no variation in premiums, while looser restrictions allow premiums to vary widely according to the health risk and demographic characteristics presented by each small employer. In states that do not allow insurers to set premiums based on health status, small employers with employees who have health conditions pay the same premiums as those with employees who do not have any health conditions, all other characteristics being the same. In states allowing insurers to adjust premiums for health and other characteristics, premiums for small employers with high-risk employees can be several times higher than those for employers with low-risk employees. Overall, average premiums, adjusted for geographic differences in the cost of physician services, were about 6 percent higher in states that did not allow rates to vary for employees' health status than in those that did.
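As a rough check, the 6 percent figure is consistent with the 1996 MEPS state-group averages reported later in this report; the arithmetic below is our own illustration using those figures:

($2,150 − $2,034) / $2,034 ≈ 0.057, or about 6 percent, for single coverage;
($5,189 − $4,855) / $4,855 ≈ 0.069, or about 7 percent, for family coverage.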
However, our analysis found that states prohibiting insurers from setting premiums based on health status did not have a higher proportion of high-risk individuals insured through small employers than states with more flexible restrictions. To differing degrees, state laws restrict the variation allowed in small employers’ health insurance premiums. Two states—New York and Vermont—have adopted a premium restriction practice called community rating that essentially requires insurers to charge all small employers of the same size a common rate regardless of their employees’ and their dependents’ ages, health, or other demographic characteristics. In these states, premiums are allowed to vary only for geographic location of the group, plan or benefit design, and family size. As of June 2000, 10 other states had adopted modified community rating laws that also prohibit variation in premiums based on the health status of employees, but may allow some variation for other factors. For example, Maryland allows premiums for small employers to vary only by limited amounts for age, geographic location, and family size. Most other states allow premiums to vary based on health as well as other factors, but restrict the degree to which variation is allowed; these restriction categories are called rating bands. In these states, insurers can charge higher premiums for small employers insuring employees with certain characteristics—such as older individuals, women of childbearing age, smokers, individuals in poor health, and employees in certain industries—that are considered high risk or costly. However, the amount of variation is limited. For example, California allows insurers to consider age, family size, geographic area, and health factors when setting premiums, but limits the amount of variation for health factors to plus or minus 10 percent. Other states with rating bands allow much wider variation for health and other factors. For example, Texas allows factors such as age, sex, geography, group size, industry, and health to be considered in setting premiums, but limits the amount premiums can be adjusted for health to plus or minus 25 percent. As of June 2000, 35 states used rating bands when setting premiums. (See fig. 3.) The differences in state restrictions can greatly affect the premiums paid by small employers, particularly for those considered to be high risk. To illustrate these differences, we obtained premium quotes from several insurers in a selected city in each of five states representing different approaches to restricting premiums for the following three hypothetical small employers: Group 1: Low-risk group of 10 individuals, predominantly in their 20s, with few smokers, and none with any identified existing health conditions. Group 2: The same as group 1, but one of the workers has juvenile-onset diabetes. Group 3: A relatively high-risk group of 10, with several members in their 50s, several smokers, several women of childbearing age, and one member with juvenile-onset diabetes. The extent to which the second and third groups paid higher premiums than the first group depended on the state’s premium restrictions. (App. II provides a description of the hypothetical groups and the premium quotes we obtained within these localities.) For example, see the following. In New York (which has community rating and does not allow rates to vary for health or other factors), each of the groups paid the same premium. 
In Maryland (which does not allow premiums to vary for health but does allow limited variation for other factors), group 2, with one employee with juvenile-onset diabetes, paid the same as group 1, and premiums were on average 73 percent higher for group 3 (with older workers, women of childbearing age, and more smokers). In California (which allows up to a 10-percent variation for health) and Florida (which allows up to a 15-percent variation for health), premiums were on average the same or slightly higher for group 2, and 53 percent to 85 percent higher for group 3 than for group 1. In Texas (which allows up to a 25-percent variation for health), where premiums can vary for multiple factors, the differences were most pronounced. On average, the insurers would charge the second group 44 percent more than the low-risk group, while they would charge the highest-risk group 176 percent more. Several insurers would have charged the high-risk group premiums two and a half to nearly four times as much as the low-risk group. As shown in figure 4, for each location we compared the average percentage change in premiums for the group with one health condition (group 2) and the high-risk group (group 3) to the low-risk group (group 1). By making the cost of coverage similar for low- and high-risk groups, states with tighter restrictions might be expected to attract a larger share of high-risk small employers, and thereby have higher average premiums, than states without tight restrictions. Based on 1996 MEPS data—adjusted for geographic cost differences—average annual single premiums for fully insured small employer plans were about 6 percent higher in states that prohibited premium adjustments for health characteristics ($2,150) than in other states and the District of Columbia that either had rating bands allowing limited variation for health characteristics or had no restrictions ($2,034). Average annual family premiums were about 7 percent higher in states that prohibited premium adjustments for health characteristics ($5,189) than in the other states and the District of Columbia ($4,855). While average premiums were slightly higher in states prohibiting the use of health characteristics to set premiums, these states do not appear to have a higher proportion of high-risk groups insured in the small group market, based on certain characteristics associated with risk. Using the 1996 MEPS, we compared average medical expenditures and use, demographic characteristics, and self-reported health characteristics for individuals insured through a small employer in states that (1) prohibited premiums from varying for health characteristics and (2) allowed at least some variation for health or had no restrictions. We found individuals in both groups of states to have generally similar expenditures, use, demographic characteristics, and health characteristics. States have undertaken other efforts to help small employers purchase health insurance, but have had limited success in addressing affordability issues. Attempts to reduce premiums by allowing insurers to offer less generous, scaled-back benefit packages have not been widely embraced by small employers. State and private efforts to pool small employers into purchasing cooperatives have made it easier for small employers to offer a broader choice of plans to their employees, but most efforts have not resulted in expected premium reductions when compared to similar plans available outside of the cooperatives.
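Returning briefly to the rating bands described above, a stylized calculation may help make their arithmetic concrete. The calculation is our own illustration; it assumes a band applied symmetrically around a single index rate, which actual state rating formulas may implement differently. With an index rate R, a plus-or-minus 25 percent health band (as in Texas) confines the health adjustment to the range 0.75R to 1.25R, so the largest premium difference attributable to health status alone is 1.25R / 0.75R ≈ 1.67, or about 67 percent. Under California's plus-or-minus 10 percent band, the comparable ceiling is 1.10R / 0.90R ≈ 1.22, or about 22 percent. The much larger differences quoted for the high-risk Texas group (176 percent and more) therefore reflect the other permitted rating factors, such as age, sex, geography, group size, and industry, compounded on top of the health adjustment.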
A few states have recently begun to provide tax incentives or subsidies to small employers offering insurance. While these initiatives are too new for their effect to be fully evaluated, previous studies suggest that tax incentives need to represent a significant portion—half or more—of the premium to significantly increase coverage. Scaled-back benefit plans that cover fewer services or have higher out-of- pocket requirements can reduce premiums, but they have not been widely purchased when offered. For example, see the following. Illinois officials reported that 25 people were enrolled in plans with scaled- back benefits when they were offered in the 1990s. The Illinois Department of Insurance stopped approving the sale of these plans in 1997. Florida allows insurers to offer a basic low-cost plan that contains most of the state’s mandated benefits but has high deductible and coinsurance requirements. Few of these plans were sold, accounting for less than 1 percent of premiums collected in Florida’s small group market. Texas allows insurers to offer basic and catastrophic benefit plans. Both plans cover many common benefits, such as maternity, outpatient services, and hospital charges, and the catastrophic plan has deductibles as high as $5,000 and maximum out-of-pocket expenses up to $10,000. Data provided by the Texas Department of Insurance indicate that, at peak enrollment in 1997, only 53 basic and catastrophic plans were sold. In 1999, a major national health insurer introduced a set of scaled-back benefit plans designed for small employers. The plan reimburses a maximum of $50 of the cost of a doctor’s visit and pays as little as $100 toward the cost of inpatient hospitalization after the 10th day. A year after introducing the program, which is available in about 30 states, the company acknowledged that the experiment failed to generate much interest from small employers. Experts attribute the poor sales of scaled-back policies to a desire among small employers to offer benefits comparable to those offered by large employers. Also, experts have reported that employees tend to be averse to high deductibles, for example, those of $1,000 or more. Furthermore, some small employers may not even be aware of the availability of these scaled-back benefit plans because agents, whose commissions tend to be lower for these plans, may not market them aggressively. Private and public efforts to allow small employers to join together and purchase health insurance have not, in most cases, lowered the cost of coverage. In general, small-employer purchasing cooperatives try to function like large employers to obtain lower premiums, offer more plan options, and achieve administrative economies of scale. In 2000, 20 states had laws allowing small employers to pool together into cooperatives for the purpose of purchasing health insurance, and several recent congressional proposals would further encourage the development of similar purchasing arrangements. However, most cooperatives account for a small share of each state’s small group market (typically, less than 5 percent of small employers), and several cooperatives recently have failed. We reported in 2000 on the experience of five relatively large, geographically dispersed cooperatives, most of which offered a wide range of benefit options and administrative services to participating small employers. For similar plans, premiums inside the cooperatives were about the same as those available outside. 
Specifically, we reported that individuals in a group made up of 20- to 30-year-olds in the cooperatives in California, Connecticut, and Florida paid average monthly premiums ranging from $108 to $187 in 1999. The premiums for individuals in a comparable group outside the cooperatives in these states ranged from $101 to $169. A 1997 national survey found similar average monthly single premiums for small employers participating in any pooled purchasing group—$180, compared with $172 for nonparticipants. Several states have recently offered tax incentives or other subsidies to small employers that offer insurance to their employees. These actions have the potential to make premiums less expensive and encourage more small employers to offer coverage and more individuals to purchase it. Two recently implemented efforts to lower premiums for small employers provide assistance for up to about 18 percent of the average premium. For example, starting in 2000, Kansas allowed employers to receive a refundable tax credit for the first 5 years they provide health insurance to their employees. The credit is worth up to $420 per employee per year for the first 2 years and then decreases to no more than $315 for the remaining 3 years. Massachusetts makes payments of up to $400 per employee per year for single coverage and $1,000 for family policies to qualified small employers providing health benefits to eligible low-income employees. Assistance is also available in Massachusetts to eligible employees for their portion of the premium. As part of its Healthy New York Program, the state has recently initiated a unique subsidy intended to assist certain small employers and working uninsured individuals in obtaining coverage. The state is providing financial reimbursement to health maintenance organizations (HMOs), with other insurers able to participate on a voluntary basis, to cover high-cost claims—a type of reinsurance known as "stop-loss" coverage. This could help address concerns about the potential for some HMOs and insurers receiving a disproportionate share of high-cost enrollees and the greater uncertainty in the risk for insurers providing coverage to small employers. Specifically, the New York program covers 90 percent of each enrolled individual's claims between $30,000 and $100,000. (For an individual with $80,000 in annual claims, for example, the program would reimburse 90 percent of the $50,000 that falls within this corridor, or $45,000.) Also, the stop-loss coverage—along with a standardized, scaled-back benefits package that HMOs must offer—is intended to make health insurance more affordable and accessible. New York estimates that the program will cost the state about $300 to $500 for each enrolled individual. However, because the program just started at the beginning of 2001, it is too early to assess its effectiveness. These tax incentives and subsidies reduce the net cost to small employers of providing health insurance, but it is uncertain whether they will be sufficient to encourage many new small employers to begin offering coverage. At small employers, even large premium subsidies may not persuade a significant number of workers—particularly low-income workers—to purchase health insurance when it is offered. A 1997 study estimated that, for workers eligible to participate in employer-sponsored coverage, subsidies as high as 75 percent would only increase participation rates from 89 percent to 93 percent. In addition, some studies have indicated that tax incentives to individuals need to represent a significant portion of the premium—perhaps half or more—to result in many individuals newly purchasing health insurance.
However, the Kansas and Massachusetts subsidies would represent less than 20 percent of a small employer's typical single coverage premium (the maximum $420 annual Kansas credit, for example, falls below 20 percent of any annual single premium above $2,100). Furthermore, the temporary nature of some state programs—such as the Kansas subsidy, which lasts for 5 years—may limit their effectiveness. Experts report that small employers may be hesitant to begin offering coverage even with subsidies if they are uncertain that the subsidy will be available for the long term, because employers do not want to drop coverage once they begin offering it. While federal and state reforms over the last decade have generally made health insurance more accessible for small employers, many small employers and their employees continue to face challenges in affording health insurance. Recognizing the difficulties and costs that many small employers face in offering their employees health insurance, the Congress has considered several proposals to assist small employers in sponsoring health insurance, such as proposed tax incentives and new purchasing arrangements. These efforts are directed toward helping to make health insurance more affordable for small employers by subsidizing costs for the employers or their employees or by helping small employers gain some of the advantages large employers have in purchasing health insurance. The complexity and diversity of the small-group health insurance market, as well as the experience of the states in regulating premiums and trying other approaches to expand coverage, are important considerations in crafting effective reforms. Small employers often get less value for their premium dollar than large employers, and, in states that do not tightly restrict premium variation, small employers with high-risk employees may pay substantially higher premiums than those with lower-risk employees. As a result, many small employers with uninsured workers and dependents in such states may face higher premiums if they provide coverage, because fewer of these uninsured individuals report being in excellent health and they therefore may represent a higher risk to insurers. States' experiences indicate that efforts to increase affordability and access can have some benefits—such as increasing the availability of a wider array of plan options for small employers or helping to ensure that small employers with high-risk employees pay lower premiums. However, they generally have not made coverage more affordable overall or been sufficient to encourage many new small employers to begin providing coverage. Other efforts, such as purchasing cooperatives and scaled-back benefit offerings, have not attracted a large share of the small group market to date. Further, recently enacted temporary state subsidies and incentives may not be sufficient to encourage many small employers to offer coverage. Several private insurance experts, an expert on the MEPS database, and a health insurance industry representative provided comments on a draft of this report. In general, these reviewers concurred with our findings. Two reviewers noted that while health insurance premiums were higher in states that implemented tighter rating restrictions compared to the remaining states, other factors in local health care markets, such as the types of plans available or mix of industries, might also explain these differences. We revised the report to reflect that these other factors may also account for some premium differences across groups of states.
Another reviewer further emphasized that while federal and state small-group reforms have made health insurance more accessible, affordability still remains a major obstacle to more small employers offering coverage to their workers. The reviewers also made technical comments that we incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to other interested congressional committees and other parties. We will also make copies available to others on request. Please call me at (202) 512-7118 or John Dicken, Assistant Director, at (202) 512-7043 if you have any questions. Major contributors to this report include N. Rotimi Adebonojo, JoAnne Bailey, and Joseph Petko. To review the affordability of health insurance in the small group market, we: reviewed literature on the small group market; analyzed three Medical Expenditure Panel Survey (MEPS) data files—the 1996 Household Component and the 1996 and 1998 Insurance Components; analyzed the Kaiser Family Foundation and Health Research and Educational Trust (Kaiser/HRET) Employer Health Benefits 2000 Annual Survey; obtained health insurance premium quotes for three hypothetical small employer groups in California, Florida, Maryland, New York, and Texas; interviewed insurance regulators in California, Florida, Maryland, New York, and Texas; and interviewed health insurance experts, actuaries, and insurers' representatives. Our analyses of the MEPS and Kaiser/HRET benefit data are further discussed below, and appendix II includes additional information regarding the premium quotes we obtained in the five states. MEPS, conducted by the Agency for Healthcare Research and Quality (AHRQ), consists of four surveys and is designed to provide nationally representative data on health care use and expenditures for U.S. civilian noninstitutionalized individuals. For our analysis, we used two of the four surveys: the Household Component and the Insurance Component. The Household Component is a survey of individuals regarding their demographic characteristics, health insurance coverage, and health care use and expenditures. The Insurance Component's list sample is a survey of employers regarding the health insurance they offer and their premiums. We consulted with AHRQ staff regarding MEPS, and in some cases AHRQ or the Bureau of the Census had programmers perform analyses at our request in order to provide us with additional data or to ensure that the confidentiality of the data was not compromised. We used the 1996 MEPS Household Component to compare nonelderly individuals with health insurance through small and large private employers according to select demographic and health characteristics. To compare insured individuals, we identified the size of the employers through which the coverage was obtained using variables created with the assistance of AHRQ. We classified insured individuals by employer size according to the responses provided by the policyholders, who were either self-employed individuals or wage earners (employees). Our analysis also included dependents who had coverage through the policyholder. We classified the following as insured through a small employer: (1) self-employed individuals who reported 50 or fewer employees at their firms and (2) wage earners at single-location establishments with 50 or fewer employees.
We excluded from our analysis wage earners at establishments with 50 or fewer employees whose employers had more than one establishment—approximately 21 percent of the private, employer-sponsored population—because we could not determine with certainty whether the employers would have 50 or fewer employees or more than 50 employees for all locations combined. We classified self-employed individuals and wage earners reporting more than 50 employees, regardless of the number of establishments, as insured through large employers. When an individual had multiple sources of coverage from two different-sized employers—for example, if he or she was a policyholder on one plan and a dependent on a spouse's plan—we assigned the individual to the employer size of the plan for which he or she was a policyholder. Less than 1 percent of persons were dependents on more than one private employer-sponsored plan, and we randomly assigned each of these individuals to either the small or large employer through which he or she had insurance. (These classification rules are summarized in the illustrative sketch at the end of this discussion.) Table 5 shows the unweighted and weighted sample sizes on which our analyses are based. We also compared characteristics of insured and uninsured individuals from households with at least one individual working for a small employer. For these analyses we included uninsured nonelderly individuals living in households for which we could determine that at least one adult was employed by a small private employer. Furthermore, only those persons eligible to obtain health insurance from within the household were included. Our analysis of the uninsured is based on a sample size of 1,462, representing a population of 16.1 million uninsured individuals in households with at least one worker at a small employer. In addition, we compared the risk characteristics of individuals insured through small employers in states that prohibited adjustment of premiums based on the health or claims experience of a group with those insured in states that allowed premiums to vary for these health characteristics. (See table 6 for the two state groupings.) To determine which states prohibited the use of health characteristics in setting premiums in 1996, we used information from the Institute for Health Policy Solutions, the Blue Cross and Blue Shield Association's Survey of Plans, and the Health Policy Tracking Service of the National Conference of State Legislatures. We supplemented this information with telephone calls to insurance regulators in eight states to clarify any inconsistencies. To compare the premiums for health insurance provided through small and large firms, we obtained premium data from the 1998 MEPS Insurance Component's list sample. The premium data we present include national estimates for insurance provided by small employers (those with fewer than 50 employees) as well as large employers (those with 50 or more employees). To assess the effect of rating reforms that prohibit the use of health characteristics in setting premiums in the small group market, the Bureau of the Census and AHRQ staff conducted data analyses of the 1996 MEPS Insurance Component at our request. We also requested that AHRQ weight premiums to reflect plan enrollment. The premium data we present include the average premiums for insurance at small employers (those with 50 or fewer employees) in our two state groupings—those that prohibit varying of premiums according to the health characteristics of the group and those that permit premiums to be adjusted for these characteristics.
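For readers who want the employer-size classification rules in one place, the following is a minimal illustrative sketch of the logic described above. It is our own summary, not AHRQ's code, and the record field names are hypothetical rather than actual MEPS variable names:

# Illustrative sketch (ours) of the employer-size classification rules
# described above; field names are hypothetical, not actual MEPS variables.

SMALL_EMPLOYER_CUTOFF = 50  # "small" means 50 or fewer employees

def classify_policyholder(record: dict) -> str:
    """Return 'small', 'large', or 'excluded' for an insured policyholder."""
    if record["self_employed"]:
        # Self-employed individuals are classified by reported firm size.
        return "small" if record["firm_size"] <= SMALL_EMPLOYER_CUTOFF else "large"
    if record["establishment_size"] > SMALL_EMPLOYER_CUTOFF:
        # Wage earners reporting more than 50 employees are classified as
        # large, regardless of the number of establishments.
        return "large"
    if record["multi_establishment"]:
        # This location has 50 or fewer employees, but the employer has other
        # locations, so total firm size is uncertain; the record is excluded.
        return "excluded"
    # Wage earner at a single-location establishment with 50 or fewer employees.
    return "small"

Dependents took the classification of the policyholder through whom they were covered, and the small number of individuals who were dependents on more than one private employer-sponsored plan were assigned randomly, as noted above.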
In addition, we present a national range of premiums for employers, representing the 5th through the 95th percentile in premium costs. To compare benefits generally purchased by small and large firms, we used data from the Kaiser/HRET Employer Health Benefits 2000 Annual Survey. Kaiser/HRET surveyed randomly selected public and private employers that had from 3 to more than 300,000 employees. The survey's overall response rate was 45 percent. Kaiser/HRET provided us with unpublished data to reflect the employer size categories (3 to 50 employees and 51 or more employees) we requested. We weighted the results by plan type (including indemnity plans, health maintenance organizations, preferred provider organizations, and point-of-service plans) to reflect enrollment patterns among small and large employers. In collaboration with the National Association of Health Underwriters (NAHU), an association that represents professional health insurance agents and brokers, we obtained health insurance quotes for three hypothetical small employer groups in a selected city in each of five states. We selected states based on size and geography. In addition, we considered the type of rating reforms they implemented. We selected two states where premiums cannot vary by health status: (1) New York, which requires community rating that allows premiums to vary for benefit design, family size, and geographic location, and (2) Maryland, which requires modified community rating that allows variation for age, family size, and geography. We also selected three states where premiums can vary to different degrees by health, along with other factors. Florida amended its rating system in July 2000 to permit limited variation in premiums for health status. California and Texas have rating bands that allow premiums to vary for health and other factors. Within each state, we obtained quotes for a selected city—specifically, (1) Albany, New York, (2) Baltimore, Maryland, (3) Sacramento, California, (4) Orlando, Florida, and (5) Austin, Texas. We asked that agents associated with NAHU solicit quotes from the three to five major insurers active in each locality's small group market. Specifically, based on their expertise, agents solicited quotes for the most popular and actively marketed benefit packages from these insurers. The agents did not disclose the purpose of the survey to the insurers from whom they received premium quotes. In addition, some premium quotes from the insurers were preliminary and could have been subjected to further underwriting. The survey instrument was pretested in Atlanta, Georgia. Each of the three hypothetical employer groups for which coverage was sought had 10 workers, 3 of whom were part-time and ineligible for health insurance. The group applying for coverage consisted of a total of 10 individuals—the 7 eligible workers and 3 dependents. The employer was to pay all of the cost of coverage for the employees and nothing toward the cost of coverage for the dependents. The employees in the first group were relatively healthy, ranging in age from 25 to 34 years old. The employees in the second group were similar to the first except that one person reported a serious medical condition—juvenile-onset diabetes. The employees in the third group were given characteristics that were higher risk than those of the other two groups.
In addition to the serious medical condition, the workers in the third group were older, included a higher proportion of smokers and women of childbearing age, and were employed by a restaurant—an industry considered to be higher risk by some insurers. We received 147 premium quotes from 18 different insurers in the five states. Some insurers reported premiums for different plan types (including health maintenance organization (HMO), preferred provider organization (PPO), point-of-service (POS), and exclusive provider organization (EPO) plans) as well as different options within these plan types. Table 7 shows the health insurance premium quotes we received for each of the three small employer groups. Health Insurance: Proposals for Expanding Private and Public Coverage (GAO-01-481T, Mar. 15, 2001). Health Insurance: Characteristics and Trends in the Uninsured Population (GAO-01-507T, Mar. 13, 2001). Private Health Insurance: Cooperatives Offer Small Employers Plan Choice and Market Prices (GAO/HEHS-00-49, Mar. 31, 2000). Private Health Insurance: Progress and Challenges in Implementing 1996 Federal Standards (GAO/HEHS-99-100, May 12, 1999). Health Insurance Standards: New Federal Law Creates Challenges for Consumers, Insurers, Regulators (GAO/HEHS-98-67, Feb. 25, 1998). Health Insurance Regulation: Varying State Requirements Affect Cost of Insurance (GAO/HEHS-96-161, Aug. 19, 1996).
Many small employers--those with 50 or fewer workers--do not offer health benefits to their employees. This is particularly true for employers with fewer than 10 workers. The families of workers employed by small employers are about twice as likely to be uninsured as households with a worker at a large employer. Despite efforts by Congress and the states to help small employers buy coverage, many small employers continue to cite cost as a major obstacle to providing coverage. Small and large employers purchasing health insurance generally had comparable premiums in 1998, but this comparison does not fully reflect the challenges facing small employers in providing health insurance for their employees. Although the premiums were similar, the health plans offered by small employers were slightly less generous on average--they had slightly higher average cost-sharing requirements for their employees and were somewhat less likely to offer some benefits, such as mental health services and chiropractic care. Also, insurers' costs to administer employer-based health insurance and protect against potentially large health care costs result in a larger share of small employers' premium dollars being spent on these nonbenefit expenses. Nearly all states have passed laws that limit the ability of insurers to vary premiums charged to small employers on the basis of the group's risk factors, including health. Other state efforts to make insurance more affordable for small employers have had limited results. Few small employers appear interested in lower-cost benefit packages that require significantly higher cost sharing by individuals or that scale back the benefits that are covered.
The Stewart B. McKinney Homeless Assistance Act (P.L. 100-77, July 1987) was the first comprehensive federal law designed to assist the homeless. Although the McKinney Act authorized a number of direct assistance programs to provide shelter and support services for the homeless, it did not consolidate the funding for or administration of these programs. It did, however, establish the Interagency Council on the Homeless to promote coordination. Originally, the Council was authorized by the Congress as an independent council with its own funding, full-time executive director, and staff. Its members were the heads of 12 Cabinet departments (or their designees), the heads of several other designated agencies, and the heads of other federal entities as determined by the Council. In 1994, however, because of congressional concern that the Council was not effectively coordinating a streamlined federal approach to homelessness, funds were not appropriated for the Council, and it became a voluntary working group under the President's Domestic Policy Council. The Department of Housing and Urban Development (HUD) currently staffs the Council with a part-time executive director, two professional staff, and one clerical staff member and provides administrative funding. Entitlements, such as the Food Stamp Program, are under the control of authorizing committees and, under the appropriations process, are mandatory. A direct payment is financial assistance that the federal government provides directly to recipients who satisfy federal eligibility requirements, without placing any restrictions on how the recipients spend the money. According to the Office of Management and Budget's Catalog of Federal Domestic Assistance, formula grants are federal funds typically allocated to a state or one of its subdivisions in accordance with a distribution formula prescribed by law or administrative regulation, for activities of a continuing nature not confined to a specific project. Project grants are provided for a fixed or known period for a specific project or for the delivery of specific services or products. Nonprofit organizations and other entities usually apply directly to agencies to receive funding for these specific types of services. The Results Act establishes a formal process for holding federal agencies accountable for their programs' performance. It requires these agencies to develop (1) long-term (generally 5-year) strategic plans, the first of which were due to the Congress by September 30, 1997, and (2) annual performance plans, the first of which covered fiscal year 1999 and were submitted to the Congress in the spring of 1998. The annual performance plans are to (1) identify annual performance goals and measures for each of an agency's program activities, including those that cut across agency lines; (2) discuss the strategies and resources needed to achieve annual performance goals; and (3) explain what procedures the agency will use to verify and validate its performance data. The Office of Management and Budget oversees the efforts of federal agencies under the Results Act. Eight federal agencies administer 50 programs and other resources that can assist homeless people. Both targeted and nontargeted programs provide an array of services to the homeless, such as housing, health care, job training, and transportation. In some instances, different programs may offer the same types of services.
Some of the targeted programs are available to the general homeless population, while others are reserved for specific groups within this population, such as children and youth or veterans. Similarly, some of the nontargeted programs are available to the low-income population as a whole, while others are designed exclusively for certain low-income groups, such as youth or veterans. Eight federal agencies—the departments of Agriculture (USDA), Health and Human Services (HHS), HUD, Education, Labor, and Veterans Affairs (VA) and two independent agencies, the Federal Emergency Management Agency (FEMA) and the Social Security Administration (SSA)—administer 50 programs that can serve homeless people. In some cases, multiple agencies operate programs that provide similar services. For example, six agencies operate programs that offer food and nutrition services, five agencies administer education programs (or programs that have an educational component), and four agencies administer housing assistance programs that can serve homeless people. As table 1 shows, 16 of the 50 programs we identified are targeted, or designed exclusively for homeless people. Thirty-four programs are nontargeted, or designed for a broader group of people with low incomes and/or special needs, such as disabilities or HIV/AIDS. While this broader group may include homeless people, information on the number served is generally not available. Because eligibility for the nontargeted programs is based on income or other criteria unrelated to homelessness, the programs generally do not—and are not required to—track data on the number of homeless persons served. A few nontargeted programs are, however, beginning to collect such data. For example, USDA’s Summer Food Service Program tracks the average number of children who receive meals at shelters for the homeless during the summer, and HUD’s Housing Opportunities for Persons With AIDS (HOPWA) program collects data on the number of homeless people served. A chart of the 50 programs and their eligible services appears in appendix I, while detailed information about the programs appears in appendix II. In addition, federal agencies and advocacy groups identified other resources and activities that can assist the homeless. While these activities are also important, we did not include them in our list of 50 programs. Some of the activities require little or no extra resources. For example, the Department of Energy provides insulation to qualifying homeless shelter dwellings, and USDA’s Rural Housing Service, HUD, and VA make foreclosed properties available to nonprofit organizations for housing homeless people. More information on these resources and activities is included in appendix III. As table 2 indicates, both targeted and nontargeted programs can offer a variety of services that often appear similar. For example, four agencies administer 23 different programs (11 targeted and 12 nontargeted) that provide some type of housing assistance, including emergency shelter, transitional housing, and other housing assistance. Similarly, six agencies administer 26 programs (11 targeted and 15 nontargeted) that deliver food and nutrition services. For example, USDA provides food and nutrition services ranging from funding for school lunches and breakfasts to food stamps, while FEMA funds the distribution of groceries to food pantries and food banks. Of the 50 programs, 10 (5 targeted and 5 nontargeted) provide assistance to prevent homelessness. 
For example, FEMA’s targeted Emergency Food and Shelter Program and HHS’ nontargeted Temporary Assistance for Needy Families (TANF) program can provide rental assistance to prevent evictions, which could lead to homelessness. HUD’s nontargeted HOPWA program also provides short-term assistance to cover rent, mortgage, and/or utility payments to prevent homelessness. However, the existence of programs that offer similar services does not necessarily mean that there is duplication because the particular services provided by each program may differ. For example, USDA’s Homeless Children Nutrition Program focuses on providing food services throughout the year to homeless children in emergency shelters, while VA’s Domiciliary Care for Homeless Veterans program provides food only to veterans who are involved with that program at a given time.

Some of the targeted programs are available to the general homeless population, while others are reserved for specific groups within this population. Similarly, some of the nontargeted programs are available to the low-income population as a whole, while others are designed exclusively for certain low-income groups. As table 3 indicates, four of the targeted programs, including HUD’s Supportive Housing Program and FEMA’s Emergency Food and Shelter Program, serve the homeless population as a whole. Five targeted programs, such as Education’s Education for Homeless Children and Youth program, serve only homeless children and youth, and four other targeted programs, such as VA’s Domiciliary Care for Homeless Veterans program, serve only homeless veterans. Similarly, 14 nontargeted programs, including HHS’ Community Services Block Grant and USDA’s Emergency Food Assistance Program, are available to all qualifying low-income people, while 8 programs, such as HHS’ Head Start program, provide benefits only to low-income children and youth, and 1 program, Labor’s Veterans Employment Program, provides benefits only to veterans, including those who are homeless. In addition, of the 16 different programs under which homeless people may be eligible to receive one type of service—primary health care—7 are available either to all homeless people or to broad groups of low-income people. Programs available to broad groups of low-income people include HHS’ Medicaid, Community Health Centers, and Social Services Block Grant programs. However, 9 of the 16 programs are available only to groups with special needs, such as runaway youth or veterans. Additional information on groups served through these programs can be found in appendix IV.

In fiscal year 1997, $1.2 billion in obligations was reported for programs targeted to the homeless, and about $215 billion in obligations was reported for nontargeted programs. While the funding for targeted programs must be used to assist homeless people, information on how much of the funding for nontargeted programs is used for this purpose is not generally available. Some of the funding for nontargeted programs is provided through formula grants or direct payments, while the funding for targeted programs is likely to be provided through project grants. Both formula and project grants present advantages and disadvantages in serving homeless people. In fiscal year 1997, the federal government reported obligations of over $1.2 billion for programs targeted to the homeless.
Over three-fourths of the funding for the targeted programs, such as the Health Care for the Homeless and Supportive Housing programs, is provided through project grants, which are allocated to service providers. Most of the remainder for targeted programs is allocated to states and local governments through formula grants. Of the amount spent for targeted programs, about 70 percent was for programs administered by HUD.

Roughly $215 billion in obligations was reported for nontargeted programs that serve people with low incomes, who may be homeless. Information is not available on how much of the funding for nontargeted programs is used to assist homeless people. However, a significant portion of the funding for nontargeted programs does not go to serving the homeless. As figure 1 shows, in fiscal year 1997, about 64 percent of the nontargeted funding was for Medicaid, Supplemental Security Income (SSI), and TANF, which are primarily intended for families, the disabled, or the elderly, rather than able-bodied single men. However, single men make up the majority of the homeless population. About 20 percent of the funding for nontargeted programs was provided through formula grants. These grants are flexible funding sources that can be used to serve the general homeless population. The remainder of the funding for nontargeted programs consists of direct payments for the Food Stamp Program and project grants for several programs whose services are generally available to the homeless. The reported obligations for each program for fiscal years 1995-98 are shown in appendix V.

While the funding for nontargeted programs can be used to benefit the homeless, the agencies generally do not, and are not required to, track or report what portion is used for this purpose. Although HHS does not track the dollar value of the benefits that homeless people receive through its nontargeted programs, the Secretary informed the Chairman of the Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, in an October 1997 letter, that HHS provides “billions of dollars worth of resources” to meet the needs of low-income people, including the homeless, through large block grants, such as TANF, as well as through other programs for delivering mental, primary, and children’s health care services and for preventing substance abuse and domestic violence. Officials at other agencies, such as VA and Education, emphasized that their programs are available to all who qualify, including the homeless, but said that they have not tried to determine how much of the funding for their programs is used to serve the homeless. Officials also said that although their nontargeted programs may appear to have substantial resources, the agencies are sometimes unable to serve all those who are eligible. For example, Labor’s Director of Operations and Programs said that the Department is not able to serve all who qualify for its Job Training Partnership programs. Similarly, an official with USDA’s Commodity Supplemental Food Program—which provides food, such as peanut butter, to certain low-income groups—said resources depend on each fiscal year’s appropriation, which determines the number of caseload slots that are available in each state. Once the slots are filled, no additional persons can be served.
About 20 percent of the funding for nontargeted programs is provided through formula grants, which are typically distributed to the states according to a formula, and the states decide how to spend these funds within federal guidelines. Compared with some project grants, formula grants are broader in scope, generally receive more funds, and offer greater discretion in the use of funds. These funds can then be used for a variety of activities within a broad functional area, such as social services or mental health services. The flexibility inherent in some formula grant programs, such as HUD’s HOPWA program, allows states and localities to define and implement programs—that may or may not include services for the homeless—in response to their particular needs. Although service providers who receive these funds often cannot identify their source, since the funds flow through the state and/or local government, the providers appreciate the steady flow of funds. However, some service providers expressed concern that because of the flexible nature of formula grant programs, vulnerable populations, such as the homeless, are rarely guaranteed a measure of assistance, posing a problem in communities that do not place a priority on spending for the homeless. In contrast to nontargeted programs, targeted programs are likely to be funded through project grants. Nine of the 16 targeted programs are funded through such grants, while 5 are funded through formula grants. Two of VA’s programs receive funding through the agency’s Mental Health Strategic Healthcare Group, which provides the funds directly to VA medical centers for the programs. Project grants enable nonprofit organizations and service providers to apply directly to federal agencies to receive funding for specific types of services offered exclusively to the homeless population; however, funding is often limited and programs are not offered at all locations. For example, VA’s Domiciliary Care for Homeless Veterans program offered resources to VA facilities that chose to implement the program, but the Department does not require all of its facilities to provide domiciliary care for homeless veterans. According to a June 1997 VA report, only 35 of VA’s 173 hospitals offered this program. In addition, the agency’s Homeless Chronically Mentally Ill Veterans program is not available in every state or locality with a significant number of eligible homeless veterans. According to a 1995 HUD study, the unpredictability of competitive grant funding levels and the varying lengths of grant awards are not consistent with a long-term strategy for eliminating homelessness. In addition, the types of projects eligible for funding may be poorly matched to local needs, and differing eligibility and reporting requirements across agencies present administrative complications for service providers who receive funds from multiple project grants. HUD has sought to minimize the disadvantages associated with project grants by consolidating the process of applying for its programs to assist the homeless. HUD also requires community service providers to collaborate through its Continuum of Care approach, discussed later in this report. State coordinators and local providers of services for the homeless in Colorado, Georgia, Michigan, Vermont, and Washington, D.C., identified HUD’s targeted programs, as well as a few of HUD’s nontargeted programs, as the ones they used most frequently to meet their state and local funding needs. 
They also cited HHS, FEMA, and Education as funding sources but were not as familiar with these agencies’ programs for assisting the homeless.

Federal efforts to assist the homeless are coordinated in several ways, and many agencies have established performance measures, as the Results Act requires, for program activities designed to assist the homeless. Coordination can take place through (1) the Interagency Council on the Homeless, which brings together representatives of federal agencies that administer programs or resources that can be used to alleviate homelessness; (2) jointly administered programs and policies adopted by some agencies to encourage coordination; and (3) compliance with guidance on implementing the Results Act, which requires federal agencies to identify crosscutting responsibilities, specify in their strategic plans how they will work together to avoid unnecessary duplication of effort, and develop appropriate performance measures for evaluating their programs’ results. Although coordination is occurring, agencies have not yet taken full advantage of the Results Act’s potential as a coordinating mechanism, and most have done little more than identify crosscutting responsibilities. Furthermore, although most agencies have established process or output measures for the services they provide to the homeless through their targeted programs, they have not consistently incorporated results-oriented goals and outcome measures related to homelessness in their performance plans.

The Council brings agency representatives together to coordinate the administration of programs and resources for assisting homeless people. The full Council, consisting of the Cabinet Secretaries or other high-level administrators, has not met since March 1996. However, the Council’s policy group is scheduled to meet every 2 months. Between December 1997 and November 1998, the policy group met four times, and staff from various agencies attended one or more of the meetings. Among other things, the policy group is coordinating a major survey of homeless assistance providers and clients. Other activities include discussing efforts to periodically distribute a list of federal resources available to assist homeless people; coordinating the distribution of surplus real property, including property at military bases being closed, as well as the distribution of surplus blankets; and conducting a roundtable discussion with representatives of major homeless advocacy groups. Recently, the group has discussed the need to better connect targeted homeless assistance programs with nontargeted programs that provide housing, health care, income, and social services. While such responses to immediate issues and exchanges of information are useful, Council staff and the executive directors of two major homeless advocacy groups believe that the Council lost much of its influence after the Congress stopped its funding in 1994 and it became a voluntary working group. HUD acknowledges that the Council scaled back its efforts when its staffing was reduced but maintains that the Council is still very involved in coordinating federal efforts and sharing information.

Another mechanism for promoting coordination is the joint administration of programs and resources to benefit the homeless. For example, VA and HUD officials collaborate on referring appropriate homeless veterans to local housing authorities for certain Section 8 rental assistance vouchers.
FEMA and the Department of Defense work together to make unmarketable but edible food available to assistance providers, and Education collaborates with HHS to provide services to elementary and secondary school children through HHS’ Runaway and Homeless Youth and Education’s Education for Homeless Children and Youth programs. Labor and VA also collaborate to provide services that are intended to increase the employability of homeless veterans. USDA has developed a multipurpose application form for free and reduced-price meals provided through its children’s nutrition programs that allows households applying for meal benefits to indicate that they want information on HHS’ State Children’s Health Insurance program and Medicaid. Some agencies, such as HUD and VA, have adopted policies that encourage coordination between service providers at the local level. For example, HUD’s Continuum of Care policy promotes coordination by encouraging service providers to take advantage of programs offered by other agencies, as well as other HUD programs. This policy, which is designed to shift attention from individual programs or projects to communitywide strategies for solving the problem of homelessness, can be used to leverage services from many sources in a community, according to HUD. HUD’s Continuum of Care strategy grew out of a 1994 Interagency Council report that proposed to address the diverse needs of homeless people. According to the report, these needs include (1) outreach and needs assessments, (2) emergency shelters with appropriate supportive services, (3) transitional housing with appropriate supportive services, and (4) permanent housing. The report recommended consolidating HUD’s McKinney Act programs and FEMA’s Emergency Food and Shelter Program into a single HUD block grant. VA, under its nationwide Community Homelessness Assessment, Local Education and Networking Groups program (CHALENG), began hosting meetings to bring together public and private providers of assistance to determine the met and unmet needs of homeless veterans and to identify the assistance available from non-VA providers. While HUD and VA encourage participation by a wide array of service providers—including those receiving both targeted and nontargeted funding—participation varies by location. Most agencies that administer targeted programs for the homeless have identified crosscutting responsibilities related to homelessness, but few have attempted the more challenging task of describing how they expect to coordinate their efforts with those of other agencies or to develop common outcome measures. Few performance plans contain evidence of substantive coordination, and none discusses coordination with nontargeted programs to decrease overlaps or fill gaps in services. For example, HUD’s 1999 performance plan indicates that the Department will work with other federal agencies to promote self-sufficiency but does not identify all the departments or programs through which it will do so. This finding is not surprising in view of the time and effort required to coordinate crosscutting programs—an issue we have discussed in reviewing federal agencies’ implementation of the Results Act. In general, we have found that agencies have made inconsistent progress in coordinating crosscutting programs. 
Given the large number of programs that can assist the homeless and the multiple agencies that administer them, increased coordination—including, ultimately, the development of common outcome measures—could strengthen the agencies’ management. As we reported previously, the Results Act, with its emphasis on defining missions and expected outcomes, can provide the environment needed to begin addressing coordination issues.

Most agencies have established process or output measures for the services they provide to the homeless through their targeted programs, but they have not consistently provided results-oriented goals and outcome measures related to homelessness in their plans. For example, Education established process measures, but not outcome measures, for its Education for Homeless Children and Youth program. Its measures include proposing changes to state and local laws to remove obstacles to the education of homeless children and youth and reducing barriers to school enrollment, such as lack of immunizations and transportation. Additionally, HHS’ Projects for Assistance in Transition from Homelessness (PATH) program has an output measure calling for at least 70 percent of participating state and local PATH-funded agencies to offer outreach services. Output measures also appear in HUD’s fiscal year 1999 performance plan. These include increasing the number of transitional beds linked to supportive services. This emphasis on output measures is consistent with the results of our reviews of agencies’ annual performance plans as a whole. In these reviews, we also found that the plans did not consistently contain results-oriented goals.

Some agencies did develop outcome measures, while others said that they planned to include outcome measures in future performance plans for their targeted programs. Other agencies believe that developing such measures would be too difficult. For example, Labor established an outcome measure for its targeted Homeless Veterans Reintegration program—helping 1,800 homeless veterans find jobs. HUD also included an outcome measure in its plan—the percentage of homeless people who move each year from HUD transitional housing to permanent housing. This measure may vary from year to year, depending on the resources available for the program. Finally, according to VA’s Director for Homeless Programs, the Veterans Health Administration’s performance plan for fiscal year 2000 includes two outcome measures for veterans who have completed residential care in targeted VA programs. These measures set goals for the percentages of veterans who (1) are housed in their own apartment, room, or house upon discharge from residential treatment and (2) are employed upon discharge from residential treatment. USDA has not created outcome measures for its Homeless Children Nutrition Program because it believes that the limited nature of the program would make the effort too difficult. In a summary of its fiscal year 1999 performance plan, HHS said that measures of output and process are more practical and realistic than outcome measures, particularly for annual assessments of programs that affect people. HHS also said that for many health and human service programs, it is unrealistic to expect meaningful changes in people’s lives because of an individual program.
In our assessment of HHS’ plan, we noted that future plans would be more useful and would better meet the purposes of the Results Act if HHS made greater use of outcome goals and measures, instead of output or process goals. In response to our assessment, HHS acknowledged that future performance plans should include outcome goals and indicated that it has begun to develop them. The federal approach to assisting homeless people—a web of targeted and nontargeted programs administered by different agencies to deliver services to varying homeless groups—makes coordination and evaluation essential. The administering agencies have an opportunity, through implementing the Results Act’s guidance, to coordinate, and evaluate the results of, their efforts to serve homeless people. The agencies have begun to identify crosscutting responsibilities and will have further opportunities, in preparing their annual performance plans, to devise strategies for coordinating their efforts and to develop consistent outcome measures for assessing the effectiveness of their efforts. Providing for effective coordination and evaluation is essential to ensure that the federal programs available to serve homeless people are cost-effectively achieving their desired outcomes. We provided a draft of this report to the eight federal agencies—USDA, Education, FEMA, HHS, HUD, Labor, SSA, and VA—that administer the programs included in this report. HHS and HUD provided written comments that appear in appendixes VI and VII of the report, along with our detailed responses. USDA, Education, Labor, SSA, and VA provided clarifying language and technical corrections that we incorporated into the report as appropriate. FEMA did not have any comments on the report. HHS characterized the report as a useful compilation of information and agreed that federal agencies need to better coordinate their efforts to serve the homeless and develop consistent outcome measures for assessing the effectiveness of their efforts. HHS further agreed that this coordination must include nontargeted programs. HHS’ primary concern was that, in quoting an HHS letter, we specify that the “billions of dollars worth of resources” the Department provides are not used only to meet the needs of homeless people. We revised our discussion to make it clear that the resources are used to benefit many low-income groups, not only the homeless. In response to HHS’ comment that many single homeless men are disabled and therefore eligible for Medicaid and/or SSI, we added language to the report indicating that disabled single homeless men may qualify for benefits under these programs. HUD’s major concern was that we did not fully describe the role of the Interagency Council on the Homeless or the extent of its activities. After reviewing HUD’s comments, we included more examples of the Council’s activities in the report. However, it was not the purpose of this report to give a detailed account of the Council’s activities; the Council was included as one of the mechanisms through which federal agencies coordinate their efforts to assist homeless people. 
To identify and describe the characteristics of federal programs targeted for the homeless and the key nontargeted programs available to low-income people generally, we developed a preliminary list of programs using studies and evaluations by the federal agencies that administer programs and initiatives for homeless people, as well as information from other sources, such as the Congressional Research Service, Government Information Services, and the Catalog of Federal Domestic Assistance (CFDA). We included all targeted programs that the agencies identified, as well as “key” nontargeted programs. We defined key nontargeted programs as those that (1) were means-tested and had reported annual obligations of $100 million or more, (2) included homelessness as a criterion of eligibility, (3) provided services similar to those offered by targeted programs, or (4) were considered by agency officials to be critical in meeting the needs of the homeless. The “types of services” and “services provided” listed in table 2 and appendix I were those commonly included in the descriptions of programs found in agencies’ documents and the sources listed above.

To check the accuracy of the list of programs that we had determined should be included in our review, we asked each agency to verify our list before we developed our program summaries. Agency officials were allowed to add to, omit from, or modify the list in accordance with their knowledge of these programs. The staff of the Interagency Council on the Homeless verified the list of resources and initiatives for the homeless that we included in the report. We obtained information on the programs and the resources and initiatives from studies and evaluations by the federal agencies, as well as from studies by the Congressional Research Service and CFDA. We also visited recognized homeless advocacy groups and service providers and obtained testimonial and documentary information from them about the programs and about issues and challenges associated with homelessness.

We identified the amount and type of funding for the targeted and nontargeted programs from agencies’ budget summaries, CFDA, and agency officials. We did not verify the budgetary data that we obtained from CFDA documents. In the report, we present data for fiscal year 1997 to give the reader a perspective; in appendix V, we present data for fiscal years 1995-98 to reflect the trend in obligations for programs that serve homeless people. From these data, we also assessed the flow of monies from the federal agencies to state and/or local entities. We identified funding types, such as formula and project grants, from agency officials and CFDA.

To determine if federal agencies have coordinated their efforts to assist homeless people and developed outcome measures for their targeted programs, we reviewed the agencies’ strategic and annual performance plans to determine if each agency had (1) identified crosscutting responsibilities or established program coordination efforts with other agencies or (2) established performance goals and measures. We also obtained input from agency officials through site visits and through studies and evaluations they provided. Finally, we reviewed GAO reports on the agencies’ plans. We performed our work between May 1998 and February 1999 in accordance with generally accepted government auditing standards.
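The four-part test we used to select key nontargeted programs can be expressed as a simple screening rule. The following minimal Python sketch applies that test to a hypothetical program record; the field names and example values are illustrative assumptions, not data from our review.

    # Illustrative sketch of the four-part test for "key" nontargeted
    # programs. Field names and the sample record are hypothetical.

    def is_key_nontargeted(program):
        """Return True if a nontargeted program meets any of the four criteria."""
        return (
            # (1) means-tested with reported annual obligations of $100 million or more
            (program["means_tested"] and program["annual_obligations"] >= 100_000_000)
            # (2) includes homelessness as a criterion of eligibility
            or program["homelessness_eligibility_criterion"]
            # (3) provides services similar to those offered by targeted programs
            or program["services_similar_to_targeted"]
            # (4) considered by agency officials to be critical in meeting
            #     the needs of the homeless
            or program["agency_considers_critical"]
        )

    example = {
        "name": "Hypothetical Nutrition Program",
        "means_tested": True,
        "annual_obligations": 250_000_000,
        "homelessness_eligibility_criterion": False,
        "services_similar_to_targeted": True,
        "agency_considers_critical": False,
    }

    print(is_key_nontargeted(example))  # True: meets criteria (1) and (3)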
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Education, HHS, HUD, Labor, and VA; the Director of FEMA; the Commissioner of SSA; and other interested parties. Copies will be made available to others on request. If you have any questions, please call me at (202) 512-7631. Major contributors to this report are listed in appendix VIII.

Program funds may be used for housing referral services and short-term emergency housing assistance to ensure eligible HIV-infected persons and families maintain access to medical care. The eligible education services are specific to the services provided under these programs (e.g., treatment education). Program funds can also be used for nonmedical mental and substance abuse treatment services. Thirty percent of Emergency Shelter Grants (ESG) program funds can be spent on supportive services. ESG and Supportive Housing Program (SHP) funds can also be used for life skills training, child care, AIDS treatment, and similar services. For these programs, HUD requires grantees to provide supportive services from another source. Fifteen percent of Community Development Block Grant (CDBG) entitlement funds can be spent on supportive services. According to Labor’s Director of Operations and Programs, supportive services are allowed under Labor’s programs, but the Department is not likely to fund these services because grantees can leverage them from other agencies, such as HUD and VA.

This appendix presents information on the 50 federal programs we identified that can serve homeless people. These programs—administered by the departments of Agriculture (USDA), Health and Human Services (HHS), Housing and Urban Development (HUD), Education, Labor, and Veterans Affairs (VA); the Federal Emergency Management Agency (FEMA); and the Social Security Administration (SSA)—are listed alphabetically by agency and are grouped according to whether they are targeted to homeless people or nontargeted. For each program, we identify the federal agency responsible for administering the program, the type of program (targeted or nontargeted), and the type of funding associated with the program. The primary types of funding include entitlements and direct payments (funding provided directly to beneficiaries who satisfy federal eligibility requirements), formula grants (funding distributed in accordance with a formula), and project grants (funding provided directly to applicants for specific projects). We also provide a brief overview of each program’s (1) purpose/objective, services, and scope (number of homeless persons served); (2) administration and funding; (3) eligibility requirements; and (4) limitations in serving homeless people and/or giving them access to benefits. We obtained the information for the summaries primarily from the Catalog of Federal Domestic Assistance, Guide to Federal Funding for Governments and Nonprofits, program fact sheets and budget documents, and agency officials. Most of the information discussed in the section on each program’s limitations was obtained from agency officials.

Administering Agency: U.S. Department of Agriculture (USDA)
Funding Type: Formula grants
The Homeless Children Nutrition Program assists state and local governments, other public entities, and private nonprofit organizations in providing food services throughout the year to homeless children under the age of 6 in emergency shelters. Two types of statistics on program participation are collected monthly: enrollment and average daily participation.
Enrollment is the total number of homeless children served by the program each month, and average daily participation is the average number of homeless children participating on a given day. Because of high client turnover in most participating shelters, enrollment is significantly above average daily participation for most shelters. The average monthly enrollment for fiscal years 1995, 1996, 1997, and 1998 was 2,703; 2,761; 2,700; and 2,569, respectively. The average daily participation was 1,401; 1,257; 1,193; and 1,245 for the same fiscal years, respectively.

The Homeless Children Nutrition Program is administered by private nonprofit organizations, state or local governments, and other public entities—all known as sponsoring organizations. Private nonprofit organizations may not operate more than five food service sites and may not serve more than 300 homeless children at each site. The Department provides cash reimbursement directly to the sponsoring organizations. Payments are limited to the number of meals served to homeless children under the age of 6 multiplied by the appropriate rate of reimbursement. Sponsoring organizations may receive reimbursement for no more than four meals per day served to an eligible child. The Department gives current-year funding priority to grantees funded during the preceding fiscal year in order to maintain the current level of service and allocates any remaining funds to eligible grantees for new projects or to current grantees to expand the level of service provided in the previous fiscal year.

Local Matching Requirement: None.

All children under the age of 6 in emergency shelters where the Homeless Children Nutrition Program is operating are eligible for free meals. According to a Homeless Children Nutrition Program official, the Department has not identified any factors limiting the usefulness of this program for the homeless.

Administering Agency: U.S. Department of Agriculture (USDA)
The Child and Adult Care Food Program assists states, through grants-in-aid and other means, in providing meals and snacks to children and adults in nonresidential day care facilities. The program generally operates in child care centers, outside-school-hours care centers, family and group day care homes, and some adult day care centers. Information on the number of homeless persons participating in the Child and Adult Care Food Program is not available.

Most Child and Adult Care Food programs are administered by state agencies. Currently, USDA directly administers the program in Virginia. The Department provides funds to states through letters of credit to reimburse eligible institutions for the costs of food service operations, including administrative expenses. To receive reimbursement for free, reduced-price, and paid meals, participating centers take income applications and count meals served, both by the type of meal and by the recipient’s type of eligibility. Under the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, family day care homes are reimbursed under a two-tiered system intended to better target the program’s funds to low-income children. When a family day care home is located in an area where 50 percent of the children are eligible for free or reduced-price meals or when the family day care provider’s household is eligible for free or reduced-price meals, the home receives a single reimbursement rate comparable to the free rate in centers for each meal.
Other homes receive a lower reimbursement rate except when individual children are determined eligible for free or reduced-price meals. Meals for these children are reimbursed at a higher rate.

Local Matching Requirement: None.

Child and Adult Care Food programs in child care centers and homes limit assistance to children aged 12 or under, migrant children aged 15 or under, and children with disabilities who, if over the age of 12, would be eligible to participate only in a center or home where the majority of those enrolled are aged 18 or younger. In adult day care centers, functionally impaired adults aged 18 or older and adults aged 60 or older who are not residents of an institution are eligible to participate in the program. Income guidelines for free and reduced-price meals/snacks are the same as those indicated for the National School Lunch and School Breakfast programs. Homeless children or adults who meet the basic eligibility requirements can receive benefits under the program. In addition, children from households eligible for assistance through the Food Stamp Program, the Food Distribution Program on Indian Reservations, or Temporary Assistance for Needy Families, as well as some children in Head Start programs, may automatically be eligible for free meals under the Child and Adult Care Food Program. Similarly, a person aged 60 or older, or an individual defined as “functionally impaired” under USDA’s regulations, is eligible for free meals through the Child and Adult Care Food Program if he or she is a member of a household that receives food stamps, Food Distribution Program on Indian Reservations benefits, Social Security, or Medicaid.

Program Limitations: According to a Child and Adult Care Food Program official, there are no programmatic factors that prevent homeless children or adults, as defined by federal regulations, from participating in this program.

Administering Agency: U.S. Department of Agriculture (USDA)
Funding Type: Formula grants
The objective of the Commodity Supplemental Food Program is to improve the health and nutritional status of low-income pregnant, postpartum, and breastfeeding women; infants; children up to the age of 6; and persons aged 60 or older by supplementing their diets with nutritious commodity foods. Information on the number of homeless persons served by the program is not available.

State agencies, such as departments of health and social services, administer this program. The Department purchases food and makes it available to the state agencies, along with funds to cover administrative costs. The state agencies store and distribute the food to public and nonprofit private local agencies. The local agencies determine applicants’ eligibility, give approved applicants monthly food packages targeted to their nutritional needs, and provide them with information on nutrition. The local agencies also refer applicants to other welfare and health care programs, such as the Food Stamp Program and Medicaid. The Department is required by law to make 20 percent of the program’s annual appropriation and 20 percent of any carryover funds available to the states to pay the costs of administering the program.

Local Matching Requirement: None.

Pregnant, postpartum, and breastfeeding women; infants; and children up to the age of 6 who are eligible for benefits under another federal, state, or local food, health, or welfare program for low-income persons are eligible for benefits under this program. Elderly persons whose incomes are at or below 130 percent of the federal poverty guidelines are also eligible.
In addition, states may establish nutritional risk and local residency requirements. Even though the program does not directly target homeless persons, those meeting its eligibility criteria can receive benefits. Persons eligible for both the Commodity Supplemental Food Program and the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) cannot participate in both programs simultaneously. According to a program official, assistance offered to homeless persons is limited by the amount of available resources. States are allocated a specific number of caseload slots; that number depends on the amount of the fiscal year appropriation for each caseload cycle. Once all slots have been filled, no additional persons can be served. In addition, no men other than those aged 60 or older can participate in the program. Women, infants, and children receive priority over the elderly.

Administering Agency: U.S. Department of Agriculture (USDA)
Funding Type: Formula grants
The Emergency Food Assistance Program supplements the diets of low-income persons by providing them with free, healthful foods. Under the program, the Department provides the states with (1) commodity foods, such as fruits, dried beans, and canned meats, and (2) funds to help cover the state and local costs associated with transporting, processing, storing, and distributing the commodities to needy persons. Information on the number of homeless persons served by the program is not available.

The Department buys the food, processes and packages it, and ships it to the states. The amount each state receives depends on its low-income and unemployed populations. The states provide the food to local agencies for distribution to households or to organizations that prepare and provide meals for needy people. The states must give at least 40 percent of the administrative grant to local agencies. The states are required to match (in cash or in kind) the funds they retain to pay state-level costs.

Each state sets criteria for identifying households that are eligible to receive food for home consumption. Such criteria may, at the state’s discretion, include participation in other federal, state, or local means-tested programs. Persons receiving benefits through the Emergency Food Assistance Program can participate in other food assistance programs at the same time. Homeless persons can benefit from the Emergency Food Assistance Program through organizations that provide prepared meals or distribute commodities for home use. Homeless persons must meet state eligibility requirements to receive food for home use. Organizations that distribute commodities for household consumption can provide foods only to needy persons who meet the eligibility criteria established by the state. Organizations that prepare meals are eligible for commodities if they can demonstrate that they serve predominantly needy persons. Persons seeking food assistance through such organizations are not subject to a means test. According to a program official, the assistance offered to homeless persons is limited only by the amount of available resources. The Department allocates commodities and administrative funds among the states on the basis of the number of needy and unemployed persons in each state. The value of the commodities and administrative funds allocated to the states depends on the program’s yearly appropriation.
Administering Agency: U.S. Department of Agriculture (USDA)
The Food Stamp Program is the primary source of nutrition assistance for low-income persons. The program’s purpose is to ensure access to a nutritious, healthful diet for low-income persons through food assistance and nutrition education. Food stamps, which supplement the funds beneficiaries have to spend on food, may be used to purchase food items at authorized food stores. Homeless persons eligible for food stamps may also use their benefits to purchase prepared meals from authorized providers. Information on the number of homeless persons served by the program is not available.

The Food Stamp Program is a federal-state partnership, in which the federal government pays the full cost of food stamp benefits and approximately half the states’ administrative expenses. Households apply for benefits at their local; state; or state-supervised, county-administered welfare offices. The states certify eligible households, calculate each household’s allotment, monitor recipients’ eligibility, conduct optional nutrition education activities, and conduct employment and training activities to enhance participants’ ability to obtain and keep regular employment. Food stamp benefits are typically dispensed on a monthly basis through electronic issuance; the mail; and private issuance agents, such as banks, post offices, and check cashers. States have the option of conducting outreach programs that target low-income people. During fiscal year 1998, four states—New York, Vermont, Washington, and Wisconsin—had optional federally approved plans that specifically targeted homeless individuals or families. According to a Food Stamp Program official, other states also conduct outreach efforts to low-income persons, including the homeless, but use other funding sources. Therefore, they are not required to report their outreach efforts or target groups to the Department. The states are required to cover 50 percent of their administrative costs.

Eligibility is based on household size and income, assets, housing costs, work requirements, and other factors. A household is normally defined as a group of people who live together and buy food and prepare meals together. Households in which all of the members receive Temporary Assistance for Needy Families, Supplemental Security Income, or General Assistance are, in most cases, automatically eligible for food stamps.

Food Stamp Program officials reported that several factors limit the participation of homeless persons in the program. First, there is a false impression among homeless persons and the general public that a permanent address is required to qualify for benefits. In fact, neither a permanent residence nor a mailing address is needed. Second, only a limited number of restaurants nationwide have been authorized to accept food coupons for meals provided at a concession price to elderly or homeless participants in the program. Third, the Food Stamp Act’s current definition of “eligible foods,” as it relates to supermarkets and grocery stores, does not allow food stamp recipients to purchase “hot” meals prepared by the deli departments of such stores. Finally, homeless persons generally have no place to store food items purchased with food stamps. Thus, the allotment may not go as far for a homeless person as it does for someone with a refrigerator and storage space.
Administering Agency: U.S. Department of Agriculture (USDA)
The National School Lunch Program assists the states, through cash grants and food donations, in making the school lunch program available to school students and encouraging the domestic consumption of nutritious agricultural commodities. Information on the number of homeless children participating in the program is not available.

The National School Lunch Program is usually administered by state education agencies, which operate the program through agreements with local school districts. Participating public or private nonprofit schools (for students in high school or lower grades) and residential child care institutions receive cash reimbursements and donated commodities from state agencies for each meal they serve that meets federal nutrition requirements. The states are required to contribute revenues equal to at least 30 percent of the total federal funds provided under section 4 of the National School Lunch Act in the 1980-81 school year.

All children, including those who are homeless, enrolled in schools where the National School Lunch Program is operating may participate and receive a federally subsidized lunch. Lunch is served (1) free to children who document that they come from households with incomes at or below 130 percent of the poverty level and (2) at a reduced price not to exceed 40 cents to children who document that they come from households with incomes between 130 percent and 185 percent of the poverty level. If children are eligible for free or reduced-price meals in the School Breakfast Program, they are eligible for the same level of benefits in the National School Lunch Program. Children from households eligible for benefits under the Food Stamp Program, the Food Distribution Program on Indian Reservations, and Temporary Assistance for Needy Families, as well as some children in Head Start programs, may automatically be eligible for free meals under the National School Lunch Program. Because of the difficulty in getting homeless families to complete income eligibility applications, school officials may directly certify homeless children as eligible for free meals. The officials must have direct knowledge of the children’s homelessness and evident need. According to a National School Lunch Program official, there are no programmatic factors preventing homeless children from participating in the National School Lunch Program.

Administering Agency: U.S. Department of Agriculture (USDA)
The School Breakfast Program provides the states with cash assistance for nonprofit breakfast programs in schools and residential child care institutions. Information on the number of homeless children participating in the program is not available.

State education agencies and local school food authorities administer the program locally. Participating public or private nonprofit schools (for students in high school or lower grades) and residential child care institutions are reimbursed by state agencies for each meal they serve that meets federal nutrition requirements.

Local Matching Requirement: None.

All children, including those who are homeless, attending schools where the program is operating may participate and receive a federally subsidized breakfast. Breakfast is served (1) free to children who document that they come from families with incomes at or below 130 percent of the poverty level and (2) at a reduced price, not to exceed 30 cents, to children who document that they come from families with incomes between 130 percent and 185 percent of the poverty level.
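The lunch and breakfast programs thus apply the same two-tier income test, expressed as percentages of the federal poverty level. The following minimal Python sketch illustrates that test; the poverty guideline figures are invented placeholders (actual guidelines vary by year and household size), and, as described for the lunch program above, children may also qualify automatically through other programs or through direct certification.

    # Illustrative two-tier income test for free and reduced-price school meals.
    # The guideline dollar amounts are hypothetical placeholders, not the
    # actual HHS poverty guidelines for any particular year.

    HYPOTHETICAL_POVERTY_GUIDELINES = {1: 8_000, 2: 10_800, 3: 13_600, 4: 16_400}

    def meal_price_category(household_size, annual_income):
        """Classify a child as eligible for free, reduced-price, or paid meals."""
        guideline = HYPOTHETICAL_POVERTY_GUIDELINES[household_size]
        if annual_income <= 1.30 * guideline:    # at or below 130 percent of poverty
            return "free"
        elif annual_income <= 1.85 * guideline:  # between 130 and 185 percent
            return "reduced-price"
        else:
            return "paid"

    print(meal_price_category(4, 20_000))  # "free" (20,000 <= 1.30 x 16,400 = 21,320)
    print(meal_price_category(4, 28_000))  # "reduced-price" (28,000 <= 1.85 x 16,400 = 30,340)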
Children from households eligible for benefits under the Food Stamp Program, the Food Distribution Program on Indian Reservations, and Temporary Assistance for Needy Families, as well as some children in Head Start programs, may automatically be eligible for free meals under the breakfast program. Because of the difficulty in getting homeless families to complete income eligibility applications, school officials may directly certify homeless children as eligible for free meals. The officials must have direct knowledge of the children’s homelessness and evident need. According to a School Breakfast Program official, there are no programmatic factors preventing homeless children from participating in this program.

Administering Agency: U.S. Department of Agriculture (USDA)
The Special Milk Program provides subsidies to schools and child care institutions to encourage the consumption of fluid milk by children. Homeless shelters can participate in this program and receive reimbursement for milk they serve to homeless children. Information on the number of homeless children served by this program is not available.

The program makes funds available to state agencies to encourage the consumption of fluid milk by children in public and private nonprofit schools (for students in high school or lower grades), child care centers, and similar nonprofit institutions devoted to the care and training of children. Milk may be provided to children either free or at a low cost, depending on the family’s income level.

Local Matching Requirement: None.

All children, including homeless children, attending schools and institutions where the program is operating are eligible for benefits. Children from households eligible for benefits under the Food Stamp Program, the Food Distribution Program on Indian Reservations, and Temporary Assistance for Needy Families, as well as some children in Head Start programs, may automatically be eligible for free milk. According to a Special Milk Program official, there are no programmatic factors preventing homeless children from participating in this program. In fact, homeless shelters are identified in the program’s guidelines as child care institutions eligible for participation.

Administering Agency: U.S. Department of Agriculture (USDA)
Funding Type: Formula grants
The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) provides supplemental nutritious foods, nutrition education, and health care referrals to low-income pregnant, postpartum, and breastfeeding women; infants; and children up to the age of 5 determined to be at nutritional risk. In response to provisions of the Hunger Prevention Act of 1988, several changes affecting the homeless were made to the WIC program’s regulations. These changes specifically define a “homeless individual”; identify WIC as a supplement to the Food Stamp Program and to meals or food provided through soup kitchens, shelters, and other emergency food assistance programs; establish conditions under which residents in facilities and institutions for the homeless may participate in WIC; require a description, in the state’s comprehensive plan, of efforts to provide benefits to the homeless; ensure that the special needs of the homeless are considered when providing food packages; and authorize the states to adopt methods of delivering benefits that accommodate the special needs of the homeless. Information on the number of homeless WIC recipients/clients is not available.
The WIC program is operated through local clinics by state health agencies. Grants are made to state health departments or comparable agencies that then distribute funds to participating local public or private nonprofit health or welfare agencies. Funds are allocated for food benefits; nutrition services, including nutritional risk assessments; and administrative costs. WIC recipients receive food through food instruments, usually vouchers (listing the specific foods appropriate to the recipient’s status) or checks that can be redeemed at approved retail outlets. Participating retailers then redeem the vouchers for cash from the WIC agency.

Local Matching Requirement: None. However, some states contribute nonfederal funds in support of a larger WIC program in their state.

Low-income pregnant, postpartum, and breastfeeding women; infants; and children up to the age of 5 are eligible for the WIC program if they (1) are individually determined by a competent professional to be at nutritional risk and (2) meet state-established income requirements. Applicants who receive, or have certain family members who receive, benefits under Medicaid, Temporary Assistance for Needy Families, or the Food Stamp Program may automatically meet WIC’s income requirements. Persons eligible for WIC and the Commodity Supplemental Food Program cannot participate in both programs simultaneously. The WIC program’s legislation establishes homelessness as a predisposing nutritional risk condition. Thus, categorical and income-eligible homeless persons who lack any other documented nutritional or medical condition are eligible for the program’s benefits. WIC program officials said that because WIC is a fixed grant program, all eligible persons will not necessarily be served. State agencies manage their WIC programs within their grants and seek economies in benefit delivery to permit the maximum numbers of eligible persons to be served. State agencies target benefits to those who are most in need, as defined by a regulatory priority system. Persons who meet income guidelines and have nutritionally related medical conditions are considered to be the most in need of benefits.

Administering Agency: U.S. Department of Agriculture (USDA)
The Summer Food Service Program provides funds for program sponsors to serve free, nutritious meals to children in low-income areas when school is not in session. In fiscal year 1997, sponsors served over 128 million meals at a total federal cost of about $243 million. Feeding sites for the homeless that primarily serve homeless children may participate in this program. The average number of children who received meals at a homeless shelter during July (the month of highest participation) was 1,355 in 1995; 2,032 in 1996; 1,996 in 1997; and 764 in 1998.

State education agencies administer most Summer Food Service programs at the state level, but other state agencies may also be designated. Participating service institutions (also called sponsors) can include units of local government, camps, nonprofit private organizations, and schools. Approved sponsors operate local programs; provide meals at a central site, such as a school or community center; and receive reimbursement from the Department through their state agency for the meals they serve and for their documented operating costs.

Local Matching Requirement: None.

Local sponsors can qualify for reimbursement for the free meals served to all children aged 18 or younger by operating a site in an eligible area.
Program Limitations: Summer Food Service officials reported that there are no programmatic factors preventing homeless children from participating in this program.

Administering Agency: U.S. Department of Education
Funding Type: Formula grants

The objective of this program is to ensure that homeless children and youth have equal access to the same free, appropriate public education as other children; to provide activities and services to ensure that these children enroll in, attend, and achieve success in school; to establish or designate an office in each state educational agency for coordinating the education of homeless children and youth; to develop and implement programs for school personnel to heighten awareness of problems specific to homeless children and youth; and to provide grants to local educational agencies. Local educational agencies may provide services such as tutoring, remedial education, and other educational and social services for homeless children directly and/or through contracts with other service providers. State and local educational agencies must coordinate with the state and local housing authorities that are responsible for preparing the comprehensive housing plan required for federal housing and homeless programs to receive aid.

According to an Education official, efforts to coordinate and provide support services are essential to the enrollment, retention, and success of homeless children and youth in school. Therefore, all the work of the state coordinators involves outreach and coordination so that homeless children and youth receive appropriate educational and support services, including Title I, Head Start, access to special education or education for gifted children (as appropriate), health care referrals, counseling, parenting education, free and reduced-price meals, and other services.

The local educational agencies that receive funds must (1) ensure that homeless children are provided with services (e.g., school meals) comparable to those provided to other children; (2) coordinate with social services agencies and other agencies or programs providing services to homeless children and youth (including services provided under the Runaway and Homeless Youth Act, administered by HHS); and (3) designate a liaison to ensure that homeless children and youth receive the education to which they are entitled under law.
The Department does not require the states to report the numbers of homeless children and youth served through subgrants under the McKinney Act program but rather to "provide the estimated number of homeless children and youth in their state according to school level." The program, however, has the potential to affect the education of all homeless children and youth because its primary purpose is to ensure that homeless children have the same access to public education as all other children and youth.

State educational agencies—including the equivalent agencies in the District of Columbia, Puerto Rico, and the territories—are eligible to participate in this program, as are schools supported by the Bureau of Indian Affairs that serve Native American students. For a state educational agency to receive a grant under the program, the state must submit an individual or consolidated plan to the Department. Each state educational agency must also ensure that homeless students are able to participate in appropriate federal and local food programs and before- or after-school care programs.

Funds flow from the Department to the state educational agency through a formula grant, and the state educational agency awards discretionary subgrants to local educational agencies. The average grant to a state educational agency in fiscal year 1997 was $475,000. According to a senior agency official, about 3 percent of the local educational agencies included in a 1995 evaluation have subgrants.

Local Matching Requirement: None.

Eligibility: Homeless children and youth, including preschool children, who, were they residents of the state, would be entitled to a free, appropriate public education.

Program Limitations: According to an evaluation performed by the Department in 1995, the largest obstacle to ensuring equitable educational services for homeless children and youth is lack of transportation to the school that would best meet their needs during the period of homelessness.

Administering Agency: U.S. Department of Education
Funding Type: Formula grants

This program provides funds to support a variety of activities designed to help educationally disadvantaged children in high-poverty areas reach high academic standards. These activities can include supplemental instruction in basic and more advanced skills during the school day; before- and after-school programs; summer school programs; preschool programs; alternative school programs; programs featuring home visits; parent education; and child care. According to an official in the Office of Elementary and Secondary Education, the Department first collected data on the number of homeless children served by this program during the 1996-97 school year. As of October 1998, the Department was analyzing the data. The official also mentioned that a few states did not submit data.

As part of its efforts to ensure homeless children's access to mainstream programs, the Department issued formal guidance for the Title I program to clarify that educationally deprived homeless children are eligible to participate in the program regardless of their current location or lack of a legal residence.

State educational agencies and the Secretary of the Interior may apply to the Department of Education for grants. The Department then makes grants to the state agencies and the Secretary using statutory formulas. The state agencies suballocate the grant funds to local educational agencies on the basis of a formula that includes the best available data on the number of children from low-income families. The Secretary suballocates the grant funds for Indian tribal schools.

Local Matching Requirement: None.
Eligibility is based on the number of children who are failing, or most at risk of failing, to meet challenging state academic standards.

Program Limitations: According to an official in the Office of Elementary and Secondary Education, states may need to encourage local school districts to implement the provision of Title I that pertains to homeless children and youth. Also, some Title I state coordinators reported that record transfers remain a barrier because homeless children and youth move frequently during the school year.

Administering Agency: Federal Emergency Management Agency (FEMA)
Funding Type: Formula grants

The Emergency Food and Shelter Program supplements and expands ongoing efforts to (1) provide food, shelter, and supportive services for homeless or hungry individuals and (2) prevent individuals from becoming homeless or hungry. The program's funds are used for mass feeding, food distribution through food pantries and food banks, mass shelter, other short-term shelter (hotel/motel accommodations), assistance with rent or mortgage payments to prevent evictions, payment of the first month's rent for families and individuals leaving shelters for more stable housing, payment of utility bills for 1 month to prevent service shutoffs, and limited emergency rehabilitation work on mass care facilities to bring them up to code. The Emergency Food and Shelter Program does not collect information on the number of homeless persons served. However, information is available on the number of meals served; nights of shelter provided; and bills paid for rent, mortgage, and utility charges.

The Emergency Food and Shelter Program is governed by a national board that is chaired by FEMA and includes representatives from (1) the American Red Cross; (2) Catholic Charities, USA; (3) the Council of Jewish Federations; (4) the National Council of the Churches of Christ in the USA; (5) the Salvation Army; and (6) the United Way of America. The United Way of America serves as the Secretariat and fiscal agent to the national board. There are also local boards made up of affiliates of national board members (with a local government official replacing the FEMA representative), a homeless or formerly homeless person, and other interested parties. The national board uses unemployment and poverty statistics to select local jurisdictions (i.e., cities and counties) for funding and determines how much funding each jurisdiction will receive. In turn, the local board in each area designated to receive funds assesses its community's needs, advertises the availability of funds, establishes local application procedures, reviews applications, selects local nonprofit or public organizations to act as service providers, and monitors the providers' performance under the program. Grant funds flow directly from the national board to the local recipient organizations.

Local Matching Requirement: None.

Eligibility: The Emergency Food and Shelter Program targets individuals with emergency needs. The term "emergency" refers to economic, not disaster-related, emergencies.

Program Limitations: According to the program's chief, FEMA, the White House, and the Congress view this as a very successfully administered federal program. The program continues to be lauded by agencies that receive funding and by recipients of assistance. The chief also said the reduction in the program's funding level after fiscal year 1995 is the primary factor that limits the program's usefulness.
In most areas of the United States, this program is the only source of funding for the prevention of homelessness. When localities have depleted these funds, they have no other source of emergency assistance for rent, mortgage, or utility bills. The only factor that may prevent the homeless or anyone in need from obtaining benefits through the Emergency Food and Shelter Program is lack of transportation to the agencies that provide the services. In many rural and suburban areas, transportation continues to be a problem.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Project grants (discretionary)

The Health Care for the Homeless program awards grants to allow grantees, directly or through contracts, to provide for the delivery of primary health services and substance abuse services to homeless individuals, including homeless children. This program emphasizes a multidisciplinary approach to delivering care to homeless persons, combining aggressive street outreach with integrated systems of primary care, mental health and substance abuse services, case management, and client advocacy. Specifically, Health Care for the Homeless programs (1) provide primary health care and substance abuse services at locations accessible to homeless persons; (2) provide around-the-clock access to emergency health services; (3) refer homeless persons for necessary hospital services; (4) refer homeless persons for needed mental health services unless these services are provided directly; (5) conduct outreach to inform homeless individuals of the availability of services; and (6) aid homeless individuals in establishing eligibility for housing assistance and services under entitlement programs. The grants may be used to continue to provide these services for up to 12 months to individuals who have obtained permanent housing if services were provided to these individuals when they were homeless. Health Care for the Homeless serves approximately 450,000 homeless persons yearly.

State and local governments, other public entities, and private nonprofit organizations are eligible to apply for Health Care for the Homeless grants. Health Care for the Homeless projects are administered by federally funded community and migrant centers, inner city hospitals, nonprofit coalitions, and local public health departments. The Department distributes grant awards directly to nonprofit and public organizations.

Local Matching Requirement: None. The program's 1996 reauthorization ended a matching requirement of $1 for every $2 of federal funds. However, grantees that received initial funding between 1988 and 1995 are required to maintain the level of effort begun when the matching requirement was in place.

Eligibility: The Health Care for the Homeless program serves homeless individuals and families.

Program Limitations: According to a Health Care for the Homeless official, recent federal and state welfare changes, as well as the loss of Supplemental Security Income benefits for individuals with substance abuse problems, have led to a drastic increase in the number of uninsured persons seeking Health Care for the Homeless services. At the same time, Health Care for the Homeless programs are facing decreases in third-party reimbursements as many states enact Medicaid managed care plans. Because these managed care plans provide restricted access to providers that may be geographically distant, homeless patients regularly seek more accessible services "out of the plan" through the Health Care for the Homeless program.
Patients receive care, but the program receives no reimbursement. Declining Medicaid reimbursement, combined with increased numbers of uninsured persons needing services, limits grantees' capacity to meet demand. In some instances, providers have been forced to turn away homeless persons seeking Health Care for the Homeless services.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

The Projects for Assistance in Transition from Homelessness (PATH) program provides financial assistance to states to provide a variety of housing and social services to individuals with severe mental illness, including those with substance abuse disorders, who are homeless or at risk of becoming homeless. Services funded under PATH include (1) outreach; (2) screening and diagnostic treatment; (3) habilitation and rehabilitation services; (4) community mental health services; (5) alcohol or drug treatment services; (6) staff training; (7) case management; (8) supportive and supervisory services in residential settings; (9) referrals for primary health services, job training, and educational services; and (10) a prescribed set of housing services. PATH allows the states to set their own priorities among the eligible services. The states cannot use more than 20 percent of their allotment for prescribed housing services. In addition, funds cannot be used to (1) support emergency shelters or the construction of housing facilities, (2) cover inpatient psychiatric or substance abuse treatment costs, or (3) make cash payments to intended recipients of mental health or substance abuse services. During fiscal years 1995 through 1997, the PATH program served 125,947; 76,395; and 62,112 homeless persons, respectively. Information for fiscal year 1998 was not available during our review.

The Department provides grants to states that, in turn, make subgrants to local public and private nonprofit organizations. Eligible nonprofit subgrantees include community-based veterans organizations and other community organizations.

Local Matching Requirement: Grantees must contribute $1 in cash or in kind for every $3 in federal funds.

Eligibility: The PATH program targets persons with mental illness, including those with substance abuse disorders, who are homeless or at risk of becoming homeless.

Program Limitations: According to a PATH official, the program cannot meet the demand for its services from eligible persons. Therefore, the program specially targets those who are most in need. Other factors limiting the program's effectiveness include a lack of affordable housing; difficulties for clients in gaining access to health and entitlement benefits (because of limitations on eligibility, problems in obtaining necessary documentation, or inability to follow through on application processes); limitations on coverage under health and entitlement programs; and limitations on the availability of mental health resources.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Project grants (discretionary)

The Runaway and Homeless Youth Basic Center program provides grantees with financial assistance to establish or strengthen community-based centers that address the immediate needs of runaway and homeless youth and their families. The program offers young runaways a system of care outside the traditional child protective services, law enforcement, and juvenile justice agencies.
Basic centers provide services such as emergency shelter, food, clothing, counseling, referrals for health care, outreach, aftercare services, and recreational activities. During fiscal year 1997, the Runaway and Homeless Youth Basic Center and Transitional Living programs provided services to 83,359 homeless youth. The Department did not collect this information during fiscal years 1995 and 1996. Information for fiscal year 1998 was not available at the time of our review.

Grants are provided to local public and private or nonprofit agencies, as well as to coordinated networks of such agencies.

Local Matching Requirement: The grantee must match 10 percent of the federal grant, either in cash or in kind.

Eligibility: Runaway and homeless youth and their families are eligible for benefits.

Program Limitations: According to a program official, funding levels severely limit the types and duration of services that can be offered to young people. Basic centers may house youth for only 15 days, a period that is often not long enough to locate a longer-term alternative for youth who cannot return to their family home or to ensure that youth who are returned home will be safe. In addition, because of funding limitations, the centers are often full and most Transitional Living programs have waiting lists.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Project grants (discretionary)

Education and Prevention Grants to Reduce Sexual Abuse of Runaway, Homeless, and Street Youth (Street Outreach Program) fund street-based education and outreach, emergency shelter, and related services for runaway and homeless youth and youth on the streets who have been, or are at risk of being, sexually exploited and abused. Street-based outreach activities are designed to reach those youth who do not benefit from traditional programs because they stay away from shelters. Services provided through the program include survival aid, emergency shelters, street-based education and outreach, individual assessments, treatment and counseling, prevention and education activities, information and referrals, crisis intervention, and follow-up support. The Department does not collect data on the number of homeless youth served through the program. However, information is available on the number of youth contacted through street outreach efforts.

The Department awards grants to private nonprofit agencies to provide outreach services designed to build relationships between grantee staff and street youth. These agencies provide services directly or in collaboration with other agencies.

Local Matching Requirement: The grantee must provide 10 percent of the federal grant in cash or in kind.

Eligibility: Adolescents up to the age of 24 who are living on the streets are eligible for the program's benefits.

Program Limitations: The Department's comments on this program appear in our discussion of the Runaway and Homeless Youth Basic Center programs.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Project grants (discretionary)

Program Description: The Transitional Living Program for Older Homeless Youth supports projects that provide longer-term residential services to homeless youth aged 16 to 21 for up to 18 months to help them make a successful transition to self-sufficient living. These services include (1) basic life skill building, (2) interpersonal skill building, (3) career counseling, (4) mental health care, (5) educational opportunities, and (6) physical health care.
During fiscal year 1997, the Runaway and Homeless Youth Basic Center and Transitional Living programs provided services to 83,359 homeless youth. The Department did not collect this information during fiscal years 1995 and 1996. Information for fiscal year 1998 was not available at the time of our review.

The Transitional Living Program provides grants to local public and private organizations to address the shelter and service needs of homeless youth.

Local Matching Requirement: Grantees must provide 10 percent of the federal grant in cash or in kind.

Eligibility: The Transitional Living Program targets homeless youth aged 16 to 21. A homeless youth accepted into the program is eligible to receive shelter and services continuously for up to 18 months.

Program Limitations: According to a program official, most Transitional Living programs have waiting lists because the number of programs that can be funded with current resources is limited.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Program Type: Nontargeted (discretionary)
Funding Type: Project grants

The Community Health Center program supports the development and operation of community health centers, which provide preventive and primary health care services, supplemental health and support services, and environmental health services to medically underserved areas/populations. Although the Health Care for the Homeless program is specifically designed to serve the homeless population, many community health centers serve homeless individuals and have internal programs for this purpose.

Any public agency or private nonprofit organization with a governing board, a majority of whose members are users of the center's services, is eligible to apply for a project grant to establish and operate a community health center in a medically underserved area. Public or private nonprofit organizations may also apply for grants to provide technical assistance to community health centers.

Local Matching Requirement: None. However, grantees are expected to have nonfederal revenue sources.

Eligibility: Population groups in medically underserved areas are eligible for services provided by community health centers. Criteria for determining whether an area is medically underserved include, among others, a high rate of poverty or infant mortality, a limited supply of primary care providers, and a significant number of elderly persons.

Program Limitations: A program official reported that there are no factors preventing homeless people from gaining access to community health centers.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

The Community Services Block Grant program provides block grants to states, territories, and Indian tribes for services and activities to reduce poverty. Block grants give states flexibility to tailor their programs to the particular service needs of their communities. Activities designed to assist low-income participants, including homeless individuals and families, are acceptable under this program. Eligible services include employment, education, housing assistance, nutrition, energy, emergency, and health services. The Department does not collect data on the number of homeless persons served by this program.

Each state submits an annual application and certifies that it agrees to provide (1) a range of services and activities having a measurable and potentially major impact on causes of poverty in communities where poverty is an acute problem and (2) activities designed to help low-income participants become self-sufficient.
States make grants to locally based nonprofit community action agencies and other eligible entities that provide services to low-income individuals and families. States are required to use at least 90 percent of their allocations for grants to community action agencies and other eligible organizations.

Local Matching Requirement: None.

Eligibility: Community Services Block Grant programs are targeted at the poor and near-poor, and need is the primary criterion for eligibility. In general, beneficiaries of programs funded by these block grants must have incomes no higher than those set forth in the federal poverty income guidelines.

Program Limitations: The Department did not identify any factors limiting the usefulness of this program for homeless persons.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Program Type: Nontargeted (discretionary)
Funding Type: Project grants

The Head Start program provides comprehensive health, educational, nutritional, social, and other services primarily to preschool children from low-income families. The program fosters the development of children and enables them to deal more effectively with both their present environment and later responsibilities in school and community life. Head Start programs emphasize cognitive and language development and socio-emotional development to enable each child to develop and realize his or her highest potential. Head Start children also receive comprehensive health services, including immunizations, physical and dental exams and treatment, and nutritional services. In addition, the program emphasizes the significant involvement of parents in their children's development. Parents can make progress toward their educational, literacy, and employment goals by training for jobs and working in Head Start.

While all Head Start programs are committed to meeting the needs of homeless children and families, 16 Head Start programs were selected in a national demonstration competition to target children who are homeless. Head Start has provided $3.2 million a year since 1993 to these 16 programs. The Department plans to issue a final report detailing the key lessons learned from the demonstration programs in late 1998 or early 1999. During the last 4 program years, approximately 50 percent of local Head Start programs reported that they undertook special initiatives to serve homeless children and their families. However, at this time, information on the number of homeless persons served is not collected nationally.

Head Start funds are awarded directly to local public and private nonprofit agencies, such as school systems, city and/or county governments, Indian tribes, and social service agencies.

Local Matching Requirement: Grantees must provide 20 percent of the program's total cost.

Eligibility: The Head Start program is primarily for preschool children between the ages of 3 and 5 from low-income families. However, children under the age of 3 from low-income families may be eligible for the Early Head Start program. At least 90 percent of Head Start participants must come from families with incomes at or below set poverty guidelines. At least 10 percent of the enrollment opportunities in each program must be made available to children with disabilities.

Program Limitations: A Head Start program official reported that while there are a number of effective approaches to serving homeless families, the efficacy of any particular approach often depends on the local community's resources, policies, and service delivery systems for homeless families.
The official also reported that, according to grantees, Head Start has a critical role to play in serving homeless families, and in many communities it may be the only program serving homeless families that focuses on children as well as families. In addition, because Head Start employs a family-based, comprehensive approach to serving families, it is in a unique position to provide the multiple services homeless families require. A key lesson learned from the Head Start Homeless Demonstration Projects is that Head Start programs cannot "do it all." Collaboration with other agencies serving homeless families was and is critical to the success of each project.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

The Maternal and Child Health Services Block Grant Program supports states' activities to improve the health status of pregnant women, mothers, infants, and children. The program is designed to address key health issues for low-income women and their children, including reducing the rate of infant mortality and disabling diseases among women and children. Information on the number of homeless persons served through this program is not available.

States receive grants from the federal government and may make subgrants to public or private nonprofit organizations. States are required to use at least 30 percent of their block grant allocations to develop systems of care for preventive and care services for children and 30 percent for services for children with special needs. Approximately 30 percent may be used, at the state's discretion, for services for either of these groups or for other appropriate maternal and child health services, including preventive and primary care services for pregnant women, mothers, and infants up to 1 year old. Spending for administrative costs is capped at 10 percent.

Local Matching Requirement: States or localities must provide $3 for every $4 of federal funds.

Eligibility: The Maternal and Child Health Services Block Grant program targets pregnant women, mothers, infants, and children, and children with special health care needs, particularly those from low-income families (i.e., families whose income is below 100 percent of the federal poverty guidelines).

Program Limitations: A Maternal and Child Health Services program official reported that there are no factors preventing homeless people from gaining access to programs funded by the block grant.

Administering Agency: U.S. Department of Health and Human Services (HHS)

The Medicaid program provides financial assistance to states for payments of medical assistance on behalf of aged, blind, and disabled individuals, including recipients of Supplemental Security Income payments, families with dependent children, and special groups of pregnant women and children who meet income and resource requirements. Medicaid is the largest program providing medical and health-related services to America's poorest people.
For certain eligibility groups known as the categorically needy, states must provide the following services: inpatient and outpatient hospital services; physician services; medical and surgical dental services; nursing facility services for individuals aged 21 or older; home health care for persons eligible for nursing facility services; family planning services and supplies; rural health clinic services and any other ambulatory services offered by a rural health clinic that are otherwise covered under the state plan; laboratory and X-ray services; federally qualified health center services; nurse-midwife services (to the extent authorized under state law); and early and periodic screening, diagnosis, and treatment services for individuals under the age of 21. Information on the number of homeless persons served by the Medicaid program is not available.

Within broad national guidelines, each state (1) administers its own program; (2) establishes its own eligibility standards; (3) determines the type, amount, duration, and scope of services; and (4) sets the rate of payment for services. Thus, the Medicaid program varies considerably from state to state, as well as within each state, over time. State and local Medicaid agencies operate the program under an HHS-approved Medicaid state plan.

The Department matches state expenditures for services provided to eligible beneficiaries at a rate established by formula. Under the Social Security Act, the federal share for medical services may range from 50 percent to 83 percent. Medicaid payments are made directly by the states to the health care provider or health plan for services rendered to beneficiaries. The Department also matches administrative expenses for all states at a rate of 50 percent except for some specifically identified administrative expenses, which are matched at enhanced rates. Among the expenses eligible for enhanced funding are those for operating an approved Medicaid Management Information System for reimbursing providers for services.

Local Matching Requirement: States are required to match federal funds expended for covered medical services to beneficiaries at a rate established by formula. Some states require local governments to provide part of the state matching funds.
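The rate-setting formula referred to above is the federal medical assistance percentage (FMAP) in the Social Security Act, which reduces the federal share as a state's per capita income rises relative to the national average. The sketch below restates that calculation as we understand it; the income figures are hypothetical.

```python
# Illustrative sketch of the FMAP calculation under the Social Security Act,
# as we understand it; income figures are hypothetical. The statute bounds
# the federal share for medical services at 50 to 83 percent.

def fmap(state_per_capita_income: float, us_per_capita_income: float) -> float:
    share = 1.0 - 0.45 * (state_per_capita_income / us_per_capita_income) ** 2
    return min(max(share, 0.50), 0.83)  # statutory floor and ceiling

# A lower-income state draws a higher federal share; a higher-income state
# falls to the 50 percent floor.
print(round(fmap(18_000, 25_000), 3))  # about 0.767
print(round(fmap(30_000, 25_000), 3))  # 0.5 (floor applies)
```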
Eligibility: Low-income persons who are over the age of 65, blind, or disabled; members of families with dependent children; low-income children and pregnant women; and certain Medicare beneficiaries who meet income and resource requirements are eligible for benefits. Also, in many states, medically needy individuals may be eligible for medical assistance. Eligibility is determined by the states in accordance with federal regulations. The states have some discretion in determining the groups their Medicaid programs will cover and the financial criteria for Medicaid eligibility. In all but a few states, persons receiving Supplemental Security Income are automatically eligible for Medicaid.

Program Limitations: The Department did not identify any factors limiting the usefulness of this program for homeless persons.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

Mental Health Performance Partnership Block Grants assist states in creating comprehensive, community-based systems of care for adults with serious mental illnesses and children with severe emotional disturbances. To receive block grant funds, states must submit plans that, among other things, provide for the establishment and implementation of a program of outreach to, and services for, such individuals who are homeless. The plans must include health and mental health, rehabilitation, employment, housing, educational, medical and dental, and other supportive services, as well as case management services. States primarily use PATH and other limited available funds to establish and implement their plans for outreach to the homeless. Information is not available on the number of homeless adults with serious mental illnesses and homeless children with severe emotional disturbances served by this program.

Funds are used at the discretion of the state to achieve the program's objectives. States carry out their block grant activities through grants or contracts with a variety of community-based organizations, such as community mental health centers, child mental health centers, and mental health primary consumer-directed organizations. The Department uses 5 percent of the block grant funds for technical assistance to states, data collection, and program evaluation.

Local Matching Requirement: None.

Eligibility: States have flexibility in allocating their block grant funds. While funds may not be identified explicitly for services to the homeless, most state mental health agencies do provide services for homeless adults with serious mental illnesses and homeless children with severe emotional disturbances.

Program Limitations: According to a program official, the demand for public mental health services exceeds the ability of many programs to serve all eligible persons. Therefore, programs generally target services to high-priority populations. Many states and communities are faced with significant needs among various high-priority populations, and many states have identified significant gaps in services—such as services related to the criminal justice system and transitional services for children moving to adulthood. Gaps in these service areas may contribute to homelessness in some communities.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Project grants

The Migrant Health Centers program supports the planning and delivery of health services to migrant and seasonal farmworkers and their families as they move and work. In some cases, migrant farmworkers are considered homeless for at least a portion of their work year, since housing is usually not guaranteed with employment. The program makes grants to public and private nonprofit entities for the planning and delivery of health care services to medically underserved migrants and seasonal farmworkers. This program is closely related to the Community Health Centers program. In fact, the majority of the grantees under the Migrant Health Centers program also receive funds through the Community Health Centers program.

Local Matching Requirement: None.

Eligibility: Migratory and seasonal agricultural workers and their families are eligible for services.

Program Limitations: A program official reported that the Migrant Health Centers program encourages centers to undertake farmworker housing projects. However, only a few centers have pursued this option. As a result, most centers are not in a position to assist farmworkers with housing issues.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula and project grants

The Ryan White Comprehensive AIDS Resources Emergency Act (Ryan White CARE Act) provides assistance to states, eligible metropolitan areas, and service providers to improve the quality and availability of care for individuals and families living with the Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS) through seven different programs that target specific aspects of the HIV/AIDS epidemic. Title I of the act provides substantial emergency resources to metropolitan areas facing high HIV/AIDS caseloads to develop and operate programs that provide an effective, appropriate, and cost-efficient continuum of health care and support services for individuals and families living with HIV. Title II of the act enables states to improve the quality, availability, and organization of health and support services for individuals infected with HIV and their families. Titles I and II receive the most funds and provide services to low-income, underserved, vulnerable populations, such as the homeless, who are infected with HIV/AIDS. The services provided include health care services and support services, such as housing referrals, case management, outpatient health services, emergency housing assistance, and assistance associated with residential health care delivery—for example, residential substance abuse care. Information on the number of homeless persons served through titles I and II of the act is not available.

Under title I, eligible metropolitan areas receive formula grants based on the estimated number of people infected with HIV who are living in the metropolitan area. The remaining funds available after the formula grant amounts are determined are distributed as supplemental grants through a discretionary mechanism established by the Secretary of HHS. Title I grants are awarded to the chief elected official of the city or county that administers the health agency providing services to the greatest number of people living with HIV in the eligible metropolitan area. Title II grants are also determined by formula and are awarded to the state agency designated by the governor to administer the title II program, usually the health department. The use of the program's funds is authorized only after all other funding sources have been exhausted.

Local Matching Requirement: Title I: None. Title II: States with a confirmed number of AIDS cases that exceeds 1 percent of the aggregate number of cases in the United States for the 2-year period preceding the fiscal year for which the state is applying for funds are subject to a matching requirement. The matching requirement increases each year of the grant cycle. In the first fiscal year of participation, states must provide at least $1 for every $5 of federal funds; in the second fiscal year, $1 for every $4; in the third fiscal year, $1 for every $3; and in the fourth and subsequent fiscal years, $1 for every $2 of federal funds.
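For states subject to the Title II requirement, the escalating schedule amounts to a ratio that tightens each year. The sketch below restates it; the function name and grant amount are ours and purely illustrative.

```python
# Illustrative sketch of the Title II matching schedule described above:
# $1 of state funds per $5, $4, $3, and then $2 of federal funds in
# successive fiscal years of participation. Figures are hypothetical.

def title2_minimum_match(federal_funds: float, participation_year: int) -> float:
    """Return the minimum state match for a given fiscal year of participation."""
    ratios = {1: 5, 2: 4, 3: 3}                  # federal dollars per state dollar
    divisor = ratios.get(participation_year, 2)  # year 4 and later: $1 per $2
    return federal_funds / divisor

for year in (1, 2, 3, 4):
    print(year, round(title2_minimum_match(1_000_000, year)))
# Prints 200000, 250000, 333333, and 500000 for years 1 through 4.
```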
Eligibility: Low-income, uninsured, and underinsured HIV-infected individuals and their families may be eligible for services funded through titles I and II.

Program Limitations: A program official reported that program priorities for titles I and II of the Ryan White CARE Act are determined locally and are based on local assessments of the needs of people living with HIV/AIDS. Resources may not be adequate to meet all needs; therefore, important services may not be provided. Also, according to the official, adequate housing for persons living with HIV/AIDS remains a critical need and a major service gap in many eligible metropolitan areas and states. While inadequate housing is a major problem for persons living in poverty, this problem is magnified for persons living with HIV. In many areas, the stock of affordable housing is not growing, but the proportion of persons with HIV living in poverty continues to grow. To meet varying needs, a range of services may be required to help such persons locate, maintain, and/or retain housing. Homelessness not only affects basic health and dignity but also disrupts access to services and makes continuing compliance with medication regimens very difficult. The costs of providing housing assistance are high, and collaboration among agencies and programs is needed to make more adequate housing available for persons with HIV/AIDS.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

Social Services Block Grants (SSBG) enable each state to furnish social services best suited to the needs of its residents. The grants are designed to (1) reduce or eliminate dependency; (2) achieve or maintain self-sufficiency; (3) help prevent the neglect, abuse, or exploitation of children and adults; (4) prevent or reduce inappropriate institutional care; and (5) secure admission or referral for institutional care when other forms of care are not appropriate. Each state determines which of 28 services included in an SSBG index will be provided and how the funds will be distributed. Services that may be supported with SSBG funds are transportation, case management, education and training, employment, counseling, housing, substance abuse, and adoption services; congregate meals; day care; family planning services; foster care services for adults and children; health-related and home-based services; home-delivered meals; independent/transitional living; information and referral; legal, pregnancy and parenting, and prevention/intervention services; protective services for children and adults; recreational services; residential treatment; and special services for youth at risk and disabled persons. The Department does not collect data on the number of homeless persons served by this program.

Grant funds are determined by a statutory formula based on each state's population. Local government agencies and private organizations may receive subgrants. States may also contract with local service providers to supply the range of services allowed under the program.

Local Matching Requirement: None.

Eligibility: Each state determines the services that will be provided and the individuals that will be eligible to receive services.

Program Limitations: According to a program official, the ability of the SSBG program to serve the homeless is limited by the discretionary nature of states as independent program entities; the lack of an index service for, or explicit emphasis on, the homeless within the SSBG index; and objectives (1) and (2) of the program's authorizing legislation. These objectives, which support efforts to prevent, reduce, or eliminate dependency, encourage the use of SSBG funds to assist persons whose existing housing is threatened rather than those who are already homeless. While SSBG funds can be used as a stopgap to prevent further homelessness, they cannot be used to provide housing for the homeless.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

The State Children's Health Insurance Program (CHIP) provides funds to states to enable them to initiate and expand child health assistance to uninsured, low-income children. Information on the number of homeless children served by CHIP is not available.

Any state applying for CHIP funds must submit and have approved by the Secretary of HHS a state child health plan that includes certain eligibility standards to ensure that only targeted low-income children are provided assistance under the plan. The plan must also indicate what share of the costs, if any, will be charged by the state. The plan may not exclude coverage for preexisting conditions. The states may spend up to 10 percent of their total CHIP funds on administrative activities, including outreach to identify and enroll eligible children in the program.

The final allotment for a state's CHIP plan is based on (1) the number of low-income, uninsured children in the state and (2) the state's cost factor. A state-specific percentage is determined on the basis of these two factors for each state with an approved CHIP plan. A state's final allotment for the fiscal year is determined by multiplying the state-specific percentage for each approved CHIP plan by the total national amount available for allotment to all states. The amount each state pays varies with the state's federal medical assistance percentages used in the Medicaid program. No state pays more than 35 percent.
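In outline, the allotment arithmetic is a proportional-share calculation: each state's weight is scaled against the national total. The sketch below illustrates this with hypothetical figures; collapsing the two statutory factors into a single weight per state is a simplification of ours, not the statutory methodology.

```python
# Illustrative sketch of the CHIP allotment arithmetic described above.
# State weights (children count x cost factor) and the national pool are
# hypothetical, and the weighting is a simplification of ours.

def state_allotments(weights: dict, national_total: float) -> dict:
    """Scale each state's weight into a share of the national total."""
    total_weight = sum(weights.values())
    return {state: national_total * weight / total_weight
            for state, weight in weights.items()}

# Three hypothetical states and a $100 million national pool.
weights = {"State A": 50_000 * 1.10, "State B": 20_000 * 0.95, "State C": 30_000 * 1.00}
for state, amount in state_allotments(weights, 100_000_000).items():
    print(state, round(amount))  # the shares sum to the national total
```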
Eligibility: CHIP targets children who have been determined eligible by the state for child health assistance under the state's plan; low-income children; children whose family income exceeds Medicaid's applicable income level but is not more than 50 percentage points above that income level; and children who are not eligible for medical assistance under Medicaid or are not covered under a group health or other health insurance plan. When a state determines through CHIP screening that a child is eligible for Medicaid, the state is required to enroll the child in the Medicaid program. In addition, the state is expected to coordinate with other public and private programs providing creditable health coverage for low-income children.

Program Limitations: According to a program official, there may be barriers at the state level in both CHIP and Medicaid. For example, documentation and verification requirements vary from state to state. Furthermore, a limitation exists under the Medicaid side of the CHIP program related to presumptive eligibility, a temporary status that allows a person to receive care immediately if he/she appears to be eligible on the basis of a statement of income. The statute limits who can determine presumptive eligibility. Currently, most providers and shelters serving the homeless are not included in the statute as entities that can determine presumptive eligibility, even though they interact with homeless children daily.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Formula grants

The Substance Abuse Prevention and Treatment Block Grant program provides financial assistance to states and territories for planning, implementing, and evaluating activities to prevent and treat substance abuse. Information on the number of homeless persons served through this program is not available because states are not required to routinely provide the Department with information on the numbers of individuals, including homeless individuals, receiving treatment under the program.

Program Administration/Funding Mechanism: States receive grant awards directly from the Department on the basis of a congressionally mandated formula. States may provide prevention and treatment services directly or may enter into subcontracts with public or private nonprofit entities for the provision of services. Under this program, grantees are required to spend at least 35 percent of their total annual allocation for alcohol prevention and treatment activities; at least 35 percent for prevention and treatment activities related to other drugs; and at least 20 percent for primary prevention programs geared toward individuals who do not require treatment for substance abuse. A maximum of 5 percent of a grant may be used to finance administrative costs. Primary prevention programs must provide eligible individuals with education and counseling about substance abuse and must provide activities that reduce the risk of abuse by these individuals. In establishing prevention programs, states must give priority to programs serving populations at risk of developing a pattern of substance abuse.
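The spending floors and the administrative cap above can be checked mechanically against a state's planned allocation, as in the sketch below; the function name and dollar figures are ours and purely illustrative.

```python
# Illustrative sketch of the block grant spending rules described above;
# the sample allocation is hypothetical.

def allocation_conforms(total, alcohol, other_drugs, prevention, admin):
    return (alcohol >= 0.35 * total          # at least 35% for alcohol activities
            and other_drugs >= 0.35 * total  # at least 35% for other drugs
            and prevention >= 0.20 * total   # at least 20% for primary prevention
            and admin <= 0.05 * total)       # at most 5% for administration

# A $10 million grant split 36/36/21/5 percent (the remainder goes to other
# allowable activities) satisfies every floor and the cap.
print(allocation_conforms(10_000_000, 3_600_000, 3_600_000, 2_100_000, 500_000))  # True
```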
Local Matching Requirement: None.

Eligibility: All individuals suffering from alcohol and other drug abuse, including homeless individuals with substance abuse disorders, are eligible for services.

Program Limitations: The Department did not identify any limitations.

Administering Agency: U.S. Department of Health and Human Services (HHS)
Funding Type: Block grant

Temporary Assistance for Needy Families (TANF) is a fixed block grant for state-designed programs of time-limited and work-conditional aid to families with children. Title I of P.L. 104-193, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, created the TANF program. This legislation repealed the Aid to Families with Dependent Children, Emergency Assistance, and Job Opportunities and Basic Skills Training programs and replaced them with a single block grant to states. All states were required to implement TANF by July 1, 1997. Under TANF, cash grants, work opportunities, and other services are provided to needy families with children. TANF funds are used to (1) provide assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; (2) end the dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) prevent and reduce the incidence of out-of-wedlock pregnancies and establish annual numerical goals for preventing and reducing the incidence of these pregnancies; and (4) encourage the formation and maintenance of two-parent families.

In reference to serving homeless populations, TANF program officials reported that P.L. 104-193 gives states the flexibility to design programs that cover the circumstances and meet the needs of their populations. Providing emergency shelter and other services to help families overcome homelessness is permitted under the statute, and a number of states are engaged in this effort. According to a March 1998 report on TANF and services for the homeless, 19 states' TANF programs provide targeted cash benefits or services to the homeless, while 29 states' TANF programs provide cash benefits or services to families at risk of becoming homeless. TANF explicitly permits states to administer benefits directly or to provide services through contracts with charitable, religious, or private organizations.

Although states have wide flexibility to determine their own eligibility criteria, benefit levels, and the types of services and benefits available to TANF recipients, their programs must adhere to a variety of federal requirements. The Department provides states with TANF funding primarily through State Family Assistance Grants. Certain federal conditions are attached to the grants. For example, to receive full grants, states must achieve minimum work participation rates and spend a certain sum of their own funds on behalf of eligible families (i.e., the "maintenance-of-effort" rule). States must maintain at least 80 percent of their own historic spending levels (75 percent if they meet TANF's work participation requirements) or suffer a financial penalty. Moreover, states must impose a general 5-year time limit on TANF-funded benefits. In addition, states may transfer a limited portion of their federal TANF grant for a fiscal year to the Child Care and Development Block Grant and the Social Services Block Grant programs.

Local Matching Requirement: None.

Eligibility: TANF beneficiaries are needy families with children whose eligibility is determined by the state. Because states may design their own assistance programs, eligibility criteria vary from state to state. States must, however, adhere to federal requirements. For example, under federal requirements, persons eligible to receive TANF assistance through state programs are families that include a minor child who resides with a custodial parent or other adult caretaker relative of the child. States may also cover pregnant individuals.

Program Limitations: According to TANF program officials, there are no statutory factors that limit the use of the TANF program for homeless families. These officials were not aware of any statutory provisions or program design decisions on the part of states that prohibit homeless families from obtaining TANF benefits. However, the officials did report that many states face the challenge of trying to stabilize homeless families in permanent living arrangements while encouraging the move to self-sufficiency before the program's time-limited benefits expire.

Administering Agency: U.S. Department of Housing and Urban Development (HUD)
Funding Type: Formula grants

The Emergency Shelter Grants (ESG) program is one of the principal formula grant programs to state and local governments under the McKinney Act. It is also one of the oldest and most widely used. There are four major categories of eligible activities: the renovation, major rehabilitation, or conversion of buildings for use as emergency shelters or transitional housing for homeless persons; the provision of up to 30 percent of the grant for essential social services (the Secretary may waive the 30-percent limit on essential services); the payment of operating costs of facilities for the homeless (but no more than 10 percent of the grant may be used for management costs); and the provision of up to 30 percent of the grant for activities to prevent homelessness. According to HUD's estimates, grants under this program served 574,000 persons in fiscal year 1995, 420,000 in fiscal year 1996, and 420,000 in fiscal year 1997.

The principal mechanism for coordinating and integrating this program is the consolidated plan, a document required and approved by HUD that describes what the community needs to assist the homeless, details available resources, and provides a 5-year plan and an annual action plan.
According to a HUD division director, the process of developing this plan and using it to allocate funds from formula grant programs such as ESG gives each community considerable authority in deciding how funds will be used to meet the targeted needs of its homeless and low- and moderate-income residents. According to the HUD division director, ESG is a very important component of the Department's Continuum of Care policy (and of the services offered in accordance with this policy) because it addresses homeless people's needs for emergency and transitional housing. A 1994 study determined that although ESG provided only 10 percent of the average service provider's operating budget, the program has allowed providers to meet their most basic needs for operating funds and appropriate facilities, enabling them to use funds from other sources to offer additional programs and services. According to the HUD division director, grantees may have shifted from funding rehabilitative activities to funding more operating costs, essential services, and prevention initiatives. The official also stated that the proportion of ESG funds used for essential services has increased for some grantees because the limit on the percentage of the grant that can be allocated for services was raised from 15 to 30 percent and requests for waivers of the 30 percent limit were widely approved.

Formula grants are provided to states, metropolitan cities, urban counties, and territories in accordance with the distribution formula used for HUD's Community Development Block Grants (CDBG).

Local Matching Requirement: For local governments, a one-for-one match is required for each grantee. For states, there is no match for the first $100,000, but a one-for-one match is required for the remainder of the funds.
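The two matching rules above differ only in the $100,000 exemption for states; the sketch below restates them with hypothetical grant amounts (the function name is ours).

```python
# Illustrative sketch of the ESG matching rules described above;
# grant amounts are hypothetical.

def esg_minimum_match(grant: float, grantee_is_state: bool) -> float:
    if grantee_is_state:
        return max(grant - 100_000, 0.0)  # no match on the first $100,000
    return grant                          # one-for-one match for local governments

print(esg_minimum_match(1_000_000, grantee_is_state=True))   # 900000.0
print(esg_minimum_match(1_000_000, grantee_is_state=False))  # 1000000.0
```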
SROs are housing units intended for occupancy by a single person that need not, but may, contain food preparation or sanitary facilities, or both. Under the program, HUD enters into annual contributions contracts with public housing authorities for the moderate rehabilitation of residential properties that, when the work is completed, will contain multiple single-room dwelling units. The public housing authority is responsible for selecting properties that are suitable for assistance and for identifying landlords who will participate. The public housing authority then enters into a formal agreement with the property owner to make repairs and necessary improvements to meet HUD's housing quality standards and local fire and safety requirements. The Continuum of Care concept, which applies to this program, requires linkages to and coordination with the local consolidated planning process undertaken by all states and CDBG entitlement communities. In addition, linkages with more than 100 federally designated empowerment zones and enterprise communities are enhanced through the awarding of additional points to applicants that can demonstrate strong coordination. Examples of coordination include the use of common board members on the Continuum of Care and empowerment zone/enterprise community planning committees, the location of assistance projects for the homeless within an empowerment zone or enterprise community, and the priority placement of homeless persons in an empowerment zone or enterprise community that provides assistance for the homeless. The use of mainstream housing programs, such as the HOME Investment Partnerships Program (HOME), CDBG, and the Low-Income Housing Tax Credit program, in developing SRO housing involves further program integration and cross-agency coordination (e.g., between HUD and the Internal Revenue Service, within the Department of the Treasury). Program Administration/Funding Mechanism: Public and Indian housing authorities and private nonprofit organizations may apply for competitive awards of Section 8 rental subsidies. Private nonprofit organizations receiving awards must subcontract with the housing authorities to administer the SRO rental assistance. These entities then use the funds received from HUD to subsidize the rents of homeless people who will live in the housing. The housing authorities receive these funds from HUD over 10 years. The guaranteed cash flow from the Section 8 housing subsidies helps the owners obtain private financing for the work, cover operating expenses and service the project's debt, and make a profit on the project. Matching Requirements: None. Eligible participants are homeless single individuals. Families are not eligible. The funding for this program is considered a permanent housing resource. Thus, homeless persons seeking temporary shelter or support services only would not be eligible for assistance. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Project grants (competitive) The Shelter Plus Care program provides rental assistance, together with supportive services funded from a source other than this program, to homeless persons with disabilities. The program may provide (1) tenant-based rental assistance, (2) sponsor-based rental assistance, (3) project-based rental assistance, or (4) SRO assistance. According to HUD's estimates, this program served 7,440 persons in fiscal year 1995, 4,048 in fiscal year 1996, and 2,718 in fiscal year 1997. Estimates were not available for fiscal year 1998.
According to a program evaluation study, HUD administers two programs other than this one for disabled homeless persons—the Permanent Housing for Handicapped Homeless Persons Program within the Supportive Housing Program and Housing Opportunities for Persons With AIDS (HOPWA). Although some communities have grants for all three programs, there is typically no direct linkage among them unless they are administered by the same service provider. When service providers have had a choice, some have enrolled homeless persons in the other two programs, especially when the homeless persons have been greatly in need of supportive services, because both the Supportive Housing Program and HOPWA permit the use of program funds for services. Also, HHS' Projects for Assistance in Transition from Homelessness (PATH) program is a federal formula grant to assist the homeless mentally ill population. According to a Shelter Plus Care evaluation study, PATH has been an excellent source of referrals for local Shelter Plus Care programs and operates in many of the same communities. The goals of the Shelter Plus Care program are to (1) assist homeless individuals and their families; (2) increase housing stability, skills, and/or income; and (3) promote greater self-determination. The study concluded that, overall, these programs could successfully serve the target population, but the program's independent living housing options, as initially conceived, were not suitable for that population because the participants needed a more supervised setting that offered intensive case management, life skills training, housing supervision, and treatment for one or more of the participants' disabilities. The study also found that service providers adapted their outreach sources and screening criteria to reflect this need. The program changed its focus to disabled formerly homeless persons who came from transitional shelters, emergency shelters with strong transitional programs, or detoxification and treatment programs rather than directly from the streets. The rent subsidy can be administered by states (including territories), units of general local government, Indian tribes, and public and Indian housing agencies. Grant recipients may then subgrant funds in the form of rental assistance to housing owners. Under the sponsor-based assistance component, grantees may also provide rental assistance to private nonprofit entities (including community mental health centers established as nonprofit organizations) that own or lease dwelling units. Each grantee must match the federal funds provided for shelter with equal funding for supportive services. The match must come from a source other than the Shelter Plus Care program; however, federal, state, and local resources may be used for the match. Eligible supportive services include health care, mental health and substance abuse services, child care, case management, counseling, supervision, education, job training, and other services necessary for independent living. In-kind resources can count towards the match. Those eligible for participation include homeless persons with disabilities (primarily those who are seriously mentally ill; have chronic problems with alcohol, drugs, or both; or have AIDS) and, if also homeless, their families. Such persons must also have low annual incomes (not exceeding 50 percent of the median income for an area).
The Shelter Plus Care program also targets those who are difficult to reach, such as persons living on the streets and sleeping on grates, in parks, or in bus terminals; residing in emergency shelters, welfare hotels, or transitional housing; or at imminent risk of being evicted and subsequently living on the street or in a shelter. Homeless persons not meeting the definition of “disabled” are not eligible for assistance. Also, homeless persons or families seeking temporary shelter, transitional housing, or support services only cannot participate in this program. According to a Shelter Plus Care evaluation study, the program is regarded as a resource for providing permanent housing. However, the program’s objective is to provide housing assistance for at least 5 years as needed; thus, the term “permanent housing” may not be strictly applicable. In addition, the study concludes that grantees have generally not found regional HUD staff to be prompt and helpful in providing technical assistance. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Project grants (competitive) The Supportive Housing Program is designed to promote the development of supportive housing and supportive services to assist homeless persons in the transition from homelessness and to enable them to live as independently as possible. Program funds may be used to provide (1) transitional housing within a 24-month period, as well as up to 6 months of follow-up services to former residents to promote their adjustment to independent living; (2) permanent housing in conjunction with appropriate supportive services designed to allow persons with disabilities to live as independently as possible; (3) supportive services for homeless persons not provided in conjunction with supportive housing (i.e., services only); (4) housing that is, or is a part of, an innovative development or alternative method designed to meet the long-term needs of homeless persons; and (5) safe havens for homeless individuals with serious mental illness currently residing on the streets who may not yet be ready for supportive services. According to HUD’s estimates, the Supportive Housing Program served 279,491 homeless persons in fiscal year 1995, 328,037 in fiscal year 1996, and 123,033 in fiscal year 1997. Estimates were not available for fiscal year 1998. HUD is collaborating with HHS on the safe havens component of the Supportive Housing Program. The departments are planning to distribute a guide that describes a combination of housing and services in facilities designated as safe havens. States, local governmental entities (including special authorities, such as public housing authorities), private nonprofit organizations, and community mental health associations that are public nonprofit organizations can apply for program funds. Program funds are to be used as follows: (1) not less than 25 percent for homeless persons with children, (2) not less than 25 percent for homeless persons with disabilities, and (3) at least 10 percent for supportive services for homeless persons who do not reside in supportive housing. A dollar-for-dollar cash match is required for grants involving acquisition, rehabilitation, or new construction. A 25- to 50-percent cost share is required for operating assistance. As of fiscal year 1999, a 25-percent match for supportive services is required. Homeless individuals and families with children are eligible for all but the permanent housing for persons with disabilities. 
Homeless persons with disabilities are eligible for all components, including services. Although the Supportive Housing Program does not have a statutory mandate to serve persons with substance abuse problems, HUD has determined that homeless persons whose sole impairment is alcoholism or drug addiction will be considered disabled if they meet the Department's statutory criteria. Program funds cannot be used to develop or operate emergency shelters, although the funds can be used to provide supportive services at shelters. Although exceptions to the 24-month limit on stays in transitional housing are allowed, program funds cannot be used to provide permanent housing for nondisabled persons. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Formula and project grants (competitive) The CDBG program's objective is to assist in developing viable urban communities by providing decent housing and a suitable living environment and by expanding economic opportunities, principally for persons with low and moderate incomes. It is the federal government's primary vehicle for revitalizing the nation's cities and neighborhoods, thereby providing opportunities for self-sufficiency to millions of Americans. The block grant has three components—the CDBG/States' Program, the CDBG/Entitlement Program, and the CDBG/Small Cities Program. CDBG grants can be used to acquire or rehabilitate shelters, operate shelters, and provide supportive (public) services such as counseling, training, and treatment. In addition, CDBG funds may be used for the construction of temporary shelter facilities and transitional housing, such as halfway homes, for the chronically mentally ill, since these are considered public facilities, not residences. Data reported for funds expended in fiscal year 1995 under the Entitlement Communities portion of the CDBG program show that $27,500,000 was spent on facilities for the homeless and $51,000,000 was spent on public service activities specifically for the homeless. The actual number of homeless persons benefiting is not known because data are captured by activity and several activities often benefit the same individual. Also, each local government is free to measure beneficiary data in ways that suit its locally designed programs. According to HUD's Office of Block Grant Assistance, there are no data on the number of homeless persons served by the CDBG State and Small Cities programs. Seventy percent of all CDBG funds are provided to entitlement communities (cities) and 30 percent to smaller communities, either through the states or directly from HUD (in New York and Hawaii). CDBG/Entitlement Program: Cities designated by the Office of Management and Budget as the central city of a metropolitan statistical area; other cities with over 50,000 residents within a metropolitan statistical area; and qualified urban counties with at least 200,000 residents are eligible to receive entitlement grants, determined by a statutory formula. Recipients may undertake a wide range of activities directed toward neighborhood revitalization, economic development, and the provision of improved community facilities and services. Activities that can be carried out with CDBG funds include the acquisition of real property and rehabilitation of residential and nonresidential structures. Up to 15 percent of CDBG entitlement funds may be used to pay for public services.
All activities must aid in the prevention or elimination of slums or blight or meet other urgent community development needs. The grantee must certify that at least 70 percent of the grant funds are expended for activities that will principally benefit persons with low and moderate incomes. CDBG/States' Program: State governments receive this formula grant and must determine the methods for distributing funds and distribute the funds to units of general local government in nonentitlement areas. The units of general local government funded by a state may undertake a wide range of activities directed toward neighborhood revitalization, economic development, or the provision of improved community facilities and services. CDBG/Small Cities Program: HUD administers this competitive grant program only for nonentitlement communities in New York and Hawaii. Eligible applicants are units of local government (including counties). Small cities develop their own programs and funding priorities. Funds may be used for activities that the applicant certifies are designed to meet urgent community development needs—defined as those that pose a serious and immediate threat to the health or welfare of the community. The applicant must also certify that no other financial resources are available to meet these needs. Matching Requirements: None. Eligibility: The principal beneficiaries of CDBG funds are persons with low and moderate incomes. For metropolitan areas, such people are generally defined as members of households with incomes equal to or less than the Section 8 low-income limit (i.e., 80 percent or less of an area's median income) established by HUD. Grantees may not obligate more than 15 percent of their CDBG funds for public services. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Formula grants The objectives of this program are to (1) expand the supply of affordable housing, particularly rental housing, for Americans with low and very low incomes; (2) strengthen the abilities of state and local governments to design and implement strategies for achieving adequate supplies of decent, affordable housing; (3) provide both financial and technical assistance to participating jurisdictions, including the development of model programs for developing affordable low-income housing; and (4) extend and strengthen partnerships among all levels of government and the private sector, including for-profit and nonprofit organizations, in the production and operation of affordable housing. HOME funds can be used for acquisition, reconstruction, moderate or substantial rehabilitation, and new construction activities that promote affordable rental and ownership housing. Transitional housing is eligible for HOME funds. Tenant-based rental assistance is also eligible and is described by HUD as a flexible resource that communities can integrate into locally designed plans to assist persons with special needs, including those participating in self-sufficiency programs. Because the purpose of the HOME program is to produce affordable rental and homeownership housing for low-income families, HUD collects data on the income levels of the persons being served. Information on whether these individuals are homeless is not collected. All families occupying HOME-assisted units or receiving HOME-funded tenant-based rental assistance must have incomes at or below 80 percent of their area's median income.
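As an illustration only, the minimal sketch below (in Python, with hypothetical dollar figures; the actual HOME income limits are published by HUD for each jurisdiction) applies the 80-percent-of-area-median income test just described.

    # Minimal sketch of the HOME income test described above. The dollar
    # figures are hypothetical; HUD publishes the actual income limits
    # for each participating jurisdiction.

    def home_income_eligible(family_income, area_median_income):
        """Return True if income is at or below 80 percent of the area
        median income, the ceiling for HOME-assisted families."""
        return family_income <= 0.80 * area_median_income

    # With a hypothetical area median income of $50,000:
    print(home_income_eligible(38000, 50000))  # True  (76 percent of median)
    print(home_income_eligible(42000, 50000))  # False (84 percent of median)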
Although HUD does not collect data on the number of homeless persons served through HOME, there is anecdotal evidence that jurisdictions are using HOME funds for single-room-occupancy projects and group homes to serve the homeless, as well as for tenant-based rental assistance to persons who are homeless or at risk of becoming homeless. States, cities, urban counties, and consortia (of contiguous units of general local governments with a binding agreement) are eligible to receive formula allocations. Funds are also set aside for grants to insular areas (i.e., the Virgin Islands, American Samoa, Guam, and the Northern Marianas). Applicants must submit a consolidated plan, an annual action plan, and certifications to HUD. The consolidated plan and annual action plan identify the applicant’s plans for using funds from four major formula-distribution HUD community development programs, including HOME. Also, according to a director in the Office of Affordable Housing, the annual action plan must describe the federal and other resources expected to be available, as well as the activities to be undertaken to meet priority needs. HOME funds are allocated to participating jurisdictions on a formula basis—60 percent to participating local governments and 40 percent to states, after set-asides for insular areas, management information support, technical assistance, and housing counseling have been subtracted. The formula takes into account factors that reflect a jurisdiction’s need for more affordable housing for families with low and very low incomes. Designed by HUD to meet statutory criteria, the formula considers shortfalls in the jurisdiction’s housing supply, the incidence of substandard housing, the number of low-income families in housing units likely to need rehabilitation, the cost of producing housing, the jurisdiction’s poverty rate, and the jurisdiction’s relative fiscal incapacity to carry out housing activities without federal assistance. HOME funds are frequently combined with funds made available under the McKinney Act to pay for the acquisition, rehabilitation, or new construction of projects for serving homeless persons. HOME funds are allocated by formula to state and local governments. The use of HOME funds with programs serving the homeless is coordinated at the state and local level through the Continuum of Care. Grantees must provide an amount equal to 25 percent of the grant. This percentage may be reduced for jurisdictions that are fiscally distressed or have been declared major disaster areas by the President. For rental housing, at least 90 percent of HOME funds must benefit families with low and very low incomes (at or below 60 percent of the area’s median income); the remaining 10 percent must benefit families with incomes at or below 80 percent of the area’s median income. Assistance to homeowners and homebuyers must be to families with incomes at or below 80 percent of the area’s median income. HOME funds can be used for permanent and transitional housing and for tenant-based rental assistance. However, they cannot be used for emergency shelters or vouchers for emergency shelter. In addition, because the program is designed to produce affordable housing, social services are not an eligible cost under the program (although the value of social services provided to persons in HOME-assisted units or receiving HOME tenant-based rental assistance can be considered part of the grantee’s matching contribution). Administering Agency: U.S. 
Department of Housing and Urban Development (HUD) Program Type: Nontargeted (competitive) Funding Type: Formula and project grants The objective of this program is to provide states and localities with the resources and incentives to devise long-term comprehensive strategies for meeting the housing needs of persons with AIDS or related diseases and their families. Activities are carried out under strategies designed to prevent homelessness and may assist homeless persons who are eligible for the program. HOPWA grantees report that about 14 percent of clients are persons who were homeless upon entering the program. According to HUD's estimates, the program served about 6,200 homeless persons from the street, in emergency shelters, or in transitional housing during a 12-month period. During fiscal years 1994-97, according to HUD's estimates, HOPWA served 2,859 homeless persons from the street and 1,426 persons in emergency shelter—a total of 4,285 persons. According to its director, HUD's Office of HIV/AIDS Housing has conducted a Multiple Diagnosis Initiative (MDI) in conjunction with HHS to improve the integration of health care and other services with housing assistance. The purpose of this initiative was to address the needs of homeless people who are multiply diagnosed and living with HIV/AIDS. The Office of HIV/AIDS Housing is collaborating with grantees and the Evaluation and Technical Assistance Center at Columbia University's School of Public Health to evaluate the results of this initiative. As of September 1998, the assessment is ongoing, and reports and other statistical information will be shared, as developed, through the planned operating periods of these grants, 1996-2002. The principal mechanism for integrating and coordinating the HOPWA program is the consolidated plan and, if homeless persons are served, the area's Continuum of Care effort. This process is intended to help all states, metropolitan cities, and urban counties formulate a holistic and comprehensive vision for their housing and community development efforts, including meeting the needs of persons with HIV/AIDS who may be homeless or at risk of becoming homeless through HOPWA and other programs. Grantees are required to establish public consultation procedures and may involve area Ryan White CARE Act planning councils, consortia, and other planning bodies in designing efforts. States and qualified cities that meet population and AIDS incidence criteria (i.e., a metropolitan area with a population of at least 500,000 and at least 1,500 cases of AIDS) are eligible to receive formula grants. Activities must be consistent with an approved consolidated plan. Eligible activities include housing assistance (including rental assistance; short-term payments for rent, mortgage, and utilities to prevent homelessness; and housing in community residences, single-room-occupancy dwellings, and other facilities); housing development through acquisition, rehabilitation, and new construction; program development through technical assistance and resource identification; supportive services; and administrative costs. Ninety percent of the program's funds are allocated, on the basis of a statutory formula that considers AIDS statistics, to metropolitan areas with a higher than average incidence of AIDS. As required by statute, HUD uses the remaining 10 percent of the funds to select special projects of national significance to make grants to areas that did not qualify for formula allocations.
These selections are made by annual national competitions. Matching Requirements: None. Grantees are encouraged to coordinate activities with Ryan White CARE Act programs and other health care efforts. Competitive applications are reviewed, in part, on the basis of the resources leveraged; grantees selected in the 1992-97 competitions documented leveraged resources equal to 131 percent of the federal funds made available in these competitions. Low-income individuals with HIV or AIDS and their families are eligible to receive housing assistance or related supportive services under this program. Grantees may target assistance to persons with higher needs, including those who are homeless or at risk of becoming homeless. Only low-income individuals with HIV or AIDS are eligible for health services if compensation or health care is not available from other sources. Survivors of eligible individuals are eligible to receive housing assistance and related services for up to 1 year following the death of the person with AIDS. Individuals with AIDS and their families are eligible to receive housing information and coordination services, regardless of their incomes. Each person receiving rental or mortgage assistance under this program or residing in any rental housing assisted under this program (including single-room-occupancy dwellings and community residences) must make a contribution towards the cost of housing, such as a rent payment equal to 30 percent of the household's adjusted monthly income. Program Limitations: According to the director of HUD's Office of HIV/AIDS Housing, there are no limitations on serving homeless persons if they meet the program's eligibility requirements. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Direct payments for specified uses This program is designed to provide and operate cost-effective, decent, safe, and affordable dwellings for lower-income families through an authorized local public housing authority. In fiscal year 1997, HUD distributed funds to public and Indian housing authorities that provided public housing and services to 1.4 million households. Public housing authorities established in accordance with state law are eligible. The proposed program must be approved by the local governing body. Under the Native American Housing Assistance and Self-Determination Act of 1996, Indian housing authorities are no longer eligible for funding under the U.S. Housing Act of 1937. In fiscal year 1997, the Department made available nearly $3 billion in annual contributions (operating subsidies) for about 1,372,000 public housing units. No development was funded under this program; such development of new or replacement units as did occur was primarily financed with funds from the modernization accounts. There is no matching requirement; however, an indirect local contribution results from the difference between full local property taxes and payments in lieu of taxes made by local public housing authorities. Eligibility: Lower-income families that include citizens or legal immigrants are eligible. A "family" includes but is not limited to (1) a family with or without children; (2) an elderly family (head, spouse, or sole member 62 years or older); (3) a near-elderly family (head, spouse, or sole member 50 years old but less than 62 years old); (4) a disabled family; (5) the remaining member of a tenant family; (6) a displaced family; or (7) a single person who is not elderly, near-elderly, displaced, or disabled.
According to HUD's Deputy Secretary of Public Housing Investments, HUD's appropriation legislation eliminates, for fiscal year 1999 and every year thereafter, previous federal preferences for certain classes of persons, including those who are homeless, and earmarks 40 percent of public housing units for families earning less than 30 percent of their area's median income. The elimination of federal preferences in obtaining public housing for select groups, including homeless people, provides less opportunity for these groups to obtain affordable housing. In the past, some households received higher priority for admission if they were paying more than 50 percent of their income for housing or were living in severely substandard housing (a category that includes homelessness and involuntary displacement). Additionally, in the past, homeless people with no income could obtain public housing. However, housing agencies are now allowed (but not required) to charge a minimum rent of up to $50 a month. This charge could prevent homeless people from obtaining public housing. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Contract administration and annual contribution contracts HUD's Section 8 project-based program (HUD's major project-based privately owned housing program) pays a portion of residents' rent for housing owned by private landlords, public housing authorities, and state housing finance agencies. An assisted household generally pays 30 percent of its income for rent, although this percentage can vary depending on the household's income and the type of program. Project-based contracts are generally between HUD and the owners of private rental housing. When the funds provided for long-term contracts exceed the actual expenses incurred, HUD can recapture the excess funds and use them to help fund other Section 8 contracts. Although expiring contracts were initially renewed for 5 years, they are, as of 1998, being renewed for 1 year. To provide Section 8 project-based assistance, HUD may enter into (1) a housing assistance payments contract with a private landlord or (2) an annual contributions contract with a housing finance agency or a public housing authority. When HUD enters into a housing assistance payments contract with a private landlord, it guarantees payments for a period of time (as short as 1 year) specified in the contract. When it enters into an annual contributions contract, it provides the Section 8 funds to the housing finance agency or the public housing authority, which in turn enters into a housing assistance payments contract with the private landlord. Residents live in housing that is designated as assisted housing for them. Matching Requirements: None. Eligibility is restricted to individuals and families with very low incomes (i.e., not exceeding 50 percent of the area's median income). A limited number of available units may be rented to families and individuals with low incomes (i.e., between 50 and 80 percent of the area's median income). Assistance is limited to income-eligible individuals and families. Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Annual contributions contracts The objective of this program, as of September 30, 1998, is to aid families with very low incomes in obtaining decent, safe, and sanitary rental housing.
The voucher subsidy amount is based on the difference between (a) a payment standard set between 80 and 100 percent of the fair market rent and (b) 30 percent of the household's income. The Section 8 rental certificate program generally requires that rents at initial occupancy not exceed HUD-published fair market rents. According to a program specialist from the Office of Public Housing Operations, HUD's Multifamily Tenant Characteristics System (MTCS) shows that 50,300 participants, or 3.5 percent of all applicants, were admitted to the Section 8 voucher and certificate programs with a preference because they were homeless. However, because several large urban housing authorities have not adequately reported MTCS data, the program specialist estimated that a higher percentage (4 to 5 percent) were homeless at the time of admission. The housing agencies that give preference to homeless applicants typically receive referrals from, and coordinate the provision of support services with, local homeless service providers. According to an October 1994 study of the use of rental vouchers and certificates, the rate of success in finding suitable rental units in properties whose landlords would honor Section 8 certificates and vouchers was not significantly different for homeless and other participants. In the study's sample, 89 percent of all participants were successful in finding suitable housing, and 87 percent of homeless participants were successful. According to the program specialist, as of September 1998, there were 1,237,076 certificates and 429,310 vouchers available under this program to assist eligible families. Program Administration/Funding Mechanism: Only housing agencies may apply to participate in this program. According to the program specialist, Section 8 federal expenditures per unit in 1998 were about $5,499 (or about $458 per month). Housing authorities receive the amounts they need to pay housing assistance and cover related administrative expenses. Matching Requirements: None. Families with very low incomes are eligible. Seventy-five percent of vouchers and certificates are set aside for families earning less than 30 percent of the area's median income. According to the program specialist, the local housing agencies that administer the rental voucher and certificate programs decide whether to establish an admission preference for the homeless. Thus, the local agencies determine to what extent homeless people will be assisted before other eligible applicants with very low incomes. In many areas, there are many more applicants for rental assistance than there is assistance available. The average wait, nationwide, for a rental voucher or certificate is 2-1/4 years. In some localities, the wait is much longer, and occasionally housing agencies must close their waiting lists to new applicants when there are more applicants than the housing agency can serve in the foreseeable future. Some homeless applicants are not ready for independent living under a lease agreement or do not have the capacity to uphold a lease agreement. Thus, the program—which is intended to operate in the private rental market and requires the participant to find and lease housing (with HUD's financial assistance) for at least 1 year—may not be a suitable source of housing assistance for some homeless people. The program does not require a housing agency to coordinate supportive services for homeless applicants.
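To make the subsidy computation described at the beginning of this section concrete, the following minimal sketch (in Python) applies the formula; every dollar figure is hypothetical, and actual payment standards and fair market rents are set locally within HUD's published limits.

    # Minimal sketch of the voucher subsidy described above:
    # subsidy = payment standard (set between 80 and 100 percent of the
    # fair market rent) minus 30 percent of the household's monthly
    # income. All dollar figures are hypothetical.

    def voucher_subsidy(fair_market_rent, payment_standard_pct, monthly_income):
        assert 0.80 <= payment_standard_pct <= 1.00
        payment_standard = payment_standard_pct * fair_market_rent
        tenant_share = 0.30 * monthly_income
        return max(payment_standard - tenant_share, 0.0)

    # A $700 fair market rent, a payment standard at 90 percent of that
    # rent, and $900 in monthly income yield a subsidy of $630 - $270 = $360.
    print(voucher_subsidy(700, 0.90, 900))  # 360.0

Because the tenant's share is 30 percent of income, the subsidy shrinks as income rises and reaches zero once 30 percent of income equals the payment standard.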
Administering Agency: U.S. Department of Housing and Urban Development (HUD) Funding Type: Formula grants The Section 811 program was established to enable persons with disabilities to live with dignity and independence within their communities by expanding the supply of supportive housing that (1) is designed to accommodate the special needs of such persons and (2) provides supportive services that address the health, mental health, and other needs of such persons. Owners of Section 811 projects must have a supportive services plan that gives each resident the option to (1) receive any of the services the owner provides, (2) acquire his/her own services (the owner would provide a list of community service providers, as well as make any necessary arrangements to receive services for a resident selecting this option), or (3) receive no supportive services. Given these options, residents may be receiving supportive services through programs that serve homeless persons. The coordination and integration of such services usually occurs at the local level. The Department does not collect information on the number of homeless persons who have been served through this program. This program provides capital advances to nonprofit organizations with 501(c)(3) federal tax exemptions to finance the development of housing for very-low-income persons with disabilities aged 18 or older. The funds can be used for (1) capital advances, which may be used to develop housing through new construction, rehabilitation, or acquisition; (2) rental assistance, which is provided to cover the difference between the HUD-approved operating costs per unit and the amount the household pays (30 percent of the household's adjusted income); and (3) supportive services, which include mental health services. Nonprofit organizations must provide a minimum capital investment of one-half of 1 percent of the HUD-approved capital advance amount, up to $10,000. A person with a disability is eligible if he or she resides in a household that includes one or more very-low-income persons, at least one of whom is aged 18 or older. The applicant must have a physical or developmental disability or a chronic mental illness that (1) is expected to be of long and indefinite duration, (2) substantially impedes the applicant's ability to live independently, and (3) could be improved by more suitable housing conditions. Because eligibility is limited to adults with very low incomes who are developmentally disabled and/or physically disabled and/or chronically mentally ill, some homeless people could not participate in this program. Administering Agency: U.S. Department of Labor Funding Type: Project grants The objective of this program is to fund projects designed to expedite the reintegration of homeless veterans into the labor force. According to the Department's director for Operations and Programs, the program is projected to serve about 3,023 homeless veterans in fiscal year 1998. Labor has established the creation of a prepared workforce as one of its strategic goals. In its annual performance plan, it lists performance goals for accomplishing this strategic goal, including the following: (1) Help 300,000 veterans find jobs: 10,000 will be disabled, and 1,800 will be homeless. Labor mentions that it plans to focus on the harder-to-serve veterans in 1999. (2) Develop and implement a national Veterans' Employment initiative that will help approximately 25,000 unemployed older veterans find jobs each year for 5 years.
Labor will receive a $100 million reimbursement for this initiative from the Department of Veterans Affairs over 5 years. State and local public agencies, private industry councils, and nonprofit organizations are eligible to apply for funds. Competition targets two types of areas: (1) the metropolitan areas of the 75 largest U.S. cities and San Juan and (2) rural areas defined as those territories, persons, and housing units that the Census Bureau has defined as not "urban." Matching Requirements: None. Homeless veterans are eligible to participate. According to the director of the Department's Office of Management and Budget, Labor simply requires those who apply for this program to meet the definitions of "homeless" and "veteran." Administering Agency: U.S. Department of Labor Funding Type: Formula grants The objective of this program is to provide employment and training services to economically disadvantaged adults and others who face significant employment barriers, in an attempt to move such individuals into self-sustaining employment. According to the director of Labor's Operations and Programs, about 6,048 homeless persons were served each year in program years 1995-98. This number represents about 3 percent of all who were served. The governor submits a biennial state plan to the Department's Employment and Training Administration. Title II funds are allocated among states according to a formula that reflects relative unemployment and poverty. States use the same formula to suballocate funds to local service delivery areas, retaining a portion to conduct certain state leadership activities and administration. Each state is required to have a State Job Training Coordinating Council. These councils are formed by governors to make recommendations on proposed service delivery areas. Amendments to the Job Training Partnership Act and Labor's administrative guidelines have improved homeless people's access to services by eliminating residency requirements and creating additional incentives for reaching hard-to-serve groups, specifically including the homeless. Providers must refer all eligible applicants who cannot be served by their programs to other suitable programs within their service delivery area. Programs are to establish linkages with other federally assisted programs, such as those authorized under the Adult Education Act, the Food Stamp Employment and Training Program, HUD's housing programs, and several others. Matching Requirements: None. Eligibility: Economically disadvantaged adults are eligible for this program if they face serious barriers to employment and need training to obtain productive employment. Providers must determine whether eligible individuals are suitable participants, considering, among other factors, whether other programs and services are available to these individuals and whether they can reasonably be expected to benefit from participation in the program, given the range of supportive services available locally. No fewer than 65 percent of the participants shall be in one or more of the following categories: deficient in basic skills; school dropouts; recipients of cash welfare payments; offenders; individuals with disabilities; homeless; or in another category established for a particular service delivery area upon the approval of a request to the governor. According to the director of the Department's Office of Employment and Training Programs, the primary limitation is funding. Only a very small percentage of the eligible population can be served with existing resources.
In addition, some communities do not provide support services, such as shelters, that may be needed to meet the non-training needs of the individuals. To serve this group effectively, other local resources must be coordinated to meet its multiple needs. Administering Agency: U.S. Department of Labor Funding Type: Formula grants Title IIB offers economically disadvantaged young people jobs and training during the summer. This includes basic and remedial education, work experience, and support services such as transportation. Academic enrichment, which may include basic and remedial education, is also part of the program. Title IIC provides year-round training and employment programs for youth, both in and out of school. Program services may include all authorized adult services, limited internships in the private sector, school-to-work transition services, and alternative high school services. For the IIB program, information on the number of homeless persons served is not collected. For the IIC program, for fiscal year 1996, the most recent year for which data were available, 1,800, or 2 percent, of the youth served through this program were homeless. The governor submits a biennial state plan to the Department's Employment and Training Administration. Title IIC funds are allocated among states according to a formula that reflects relative unemployment and poverty. States use the same formula to suballocate funds to local service delivery areas, retaining a portion to conduct certain state leadership activities. Each state is required to have a State Job Training Coordinating Council. These councils are formed by governors to make recommendations to them on proposed service delivery areas. Amendments to the Job Training Partnership Act and Labor's administrative guidelines have improved homeless people's access to services by eliminating residency requirements and creating additional incentives for reaching hard-to-serve groups, specifically including the homeless. Matching Requirements: None. Disadvantaged youth aged 14 to 21 are eligible for the Title IIB (summer jobs) program. In-school youth and out-of-school youth are eligible for the Title IIC program. No fewer than 50 percent of the participants in each service delivery area must be out of school. Eligible in-school youth must be aged 16 to 21, economically disadvantaged, without a high school diploma, and in school full time. At least 65 percent of in-school participants must be hard to serve. Out-of-school youth are eligible if they are 16 to 21 years old and economically disadvantaged. Program Limitations: The ability of local administrators to use the IIB (summer) and IIC (year-round) programs is contingent on the services that are available locally to address the needs of eligible youth. The IIB program runs for only 6 to 8 weeks. For continuity, the IIB program would need to be linked with the IIC program and other resources in the community. The IIC program is severely constrained by limits on funding: Over half of the grantees operate programs of less than $250,000. Administering Agency: U.S. Department of Labor Funding Type: Project grants The objective of this program is to provide employment and training grants to meet the employment and training needs of veterans with service-connected disabilities, veterans of the Vietnam era, and veterans who have recently left military service. Labor is working to improve coordination with VA and to train its own and VA staff working on vocational rehabilitation and counseling.
According to a Labor official, JTPA grantees were not required to report the number of homeless people served by this program. State and JTPA administrative entities are eligible to receive grants under the Title IV-C program. All applicants for grants must demonstrate that they (1) understand the unemployment problems of qualified veterans, (2) are familiar with the area to be served, and (3) are able to effectively administer a program of employment and assistance. Matching Requirements: None. Eligible for services are disabled veterans, veterans of the Vietnam era, and veterans who have left military service and applied for program participation within 12 months of separation. According to a Labor official, section 168 of the Workforce Investment Act of 1998 has substantially changed the eligibility criteria for this program, making veterans who face significant employment barriers eligible for this program. The Department did not identify any limitations for this program. Administering Agency: U.S. Department of Labor Funding Type: Formula and project grants The Welfare-to-Work program was designed to help states and localities move hard-to-employ welfare recipients into lasting unsubsidized jobs and achieve self-sufficiency. Welfare-to-Work projects are encouraged to integrate a range of resources for low-income people, including funds available through TANF and the Child Care and Development Fund. In addition, coordination efforts should encompass funds available through other related activities and programs, such as JTPA, state employment services, private-sector employers, education agencies, and others. Partnerships with businesses and labor organizations are especially encouraged. States are urged to view Welfare-to-Work not as an independent program but as a critical component of their overall effort to move welfare recipients into unsubsidized employment. States are the only entities eligible for these federal formula grants, although subgrantees include eligible service delivery area agencies under the supervision of the private industry council in the area (in cooperation with the chief elected official(s)). The Secretary of Labor will allot 75 percent of these funds to the state Welfare-to-Work agencies on the basis of a formula and a plan that each state submits. The states, in turn, must distribute by formula no less than 85 percent of their allotments among the service delivery areas. They can retain the balance for special welfare-to-work projects. The balance of the appropriated federal funds will be retained by the Secretary for award through a competitive grant process to private industry councils, political subdivisions, and eligible private entities. Grantees are required to provide $1 in matching funds for each $2 in federal formula funds allotted. The regulations allow the use of in-kind contributions to satisfy up to 50 percent of this requirement. Applications for competitive grants are funded on the basis of the specific guidelines, criteria, and processes established under each solicitation. However, there are no "formula" or matching requirements for these grants.
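To make the matching arithmetic for the formula grants concrete, the minimal sketch below (in Python, with a hypothetical allotment) computes the required nonfederal match and the portion of it that in-kind contributions may satisfy.

    # Minimal sketch of the Welfare-to-Work formula-grant match rule
    # described above: $1 of matching funds per $2 of federal formula
    # funds, with in-kind contributions allowed to satisfy up to 50
    # percent of the requirement. The allotment amount is hypothetical.

    def wtw_match(federal_allotment):
        required_match = federal_allotment / 2.0   # $1 per $2 of federal funds
        max_in_kind = 0.50 * required_match        # up to half may be in-kind
        return required_match, max_in_kind

    # A hypothetical $10 million allotment requires a $5 million match,
    # of which at most $2.5 million may be provided in kind.
    print(wtw_match(10_000_000))  # (5000000.0, 2500000.0)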
At least 70 percent of the funds must be expended on welfare recipients (or on the noncustodial parents of minors whose custodial parent is a welfare recipient) who meet at least two of the following requirements: (1) the individual has not completed secondary school or obtained a certificate of general equivalency and has low skills in reading or mathematics, (2) the individual requires substance abuse treatment for employment, and (3) the individual has a poor work history. In addition, the individual must have received assistance under the state program funded under this component. No program limitations were identified by the Department. Administering Agency: Social Security Administration (SSA) The Supplemental Security Income (SSI) program provides monthly payments to elderly, blind, or disabled individuals with low incomes and few resources. A person does not need to have a permanent residence to be eligible for SSI. SSA can make special arrangements for delivering SSI checks to homeless persons. Receiving SSI may allow a homeless person to get permanent housing. In some locations, eligibility for SSI is automatically associated with eligibility for Medicaid and/or food stamps. Other federal, state, and local programs are also automatically available to persons who are eligible for SSI. SSA does not collect data on the number of homeless persons receiving SSI benefits. SSI is federally administered and funded from the general fund of the U.S. Treasury (not the Social Security trust funds). Some states supplement the federal funding with state funds. Matching Requirements: None. Individuals who are (1) aged 65 or older, (2) blind, or (3) disabled and who meet requirements for monthly income and resources, citizenship or alien status, and U.S. residency are eligible for SSI benefits. A policy analyst for the SSI program reported that one of the most pressing problems for SSA in trying to serve the homeless is that homeless persons do not have a place to "hang their hat." They also do not have a telephone or fixed address. Although a majority of them have a drop box in which they can receive mail and many have a phone number for messages, these devices do not provide the security of a phone or mailbox associated with a home. Often, the address and phone number a homeless person provides when first applying for SSI are out of date when SSA tries to contact the applicant about medical appointments, further needed documentation, or other matters. Administering Agency: U.S. Department of Veterans Affairs (VA) Funding Type: Direct payments to VA medical centers This program provides health services and social services to homeless veterans in a domiciliary setting that offers less care than a hospital but more care than a community residential setting. Health care offered through this program includes medication for medical or psychiatric illness, psychotherapy and counseling, health education, and substance abuse treatment. Social services include assisting homeless veterans with housing needs, resume writing, job interviewing, job searching, and/or job placement. Other basic program components include community outreach and referral, admission screening and assessment, medical and psychiatric evaluation, treatment and rehabilitation, and postdischarge community support. The Department provides funds to VA medical centers to address the unmet needs of homeless veterans. The program is primarily a residential treatment program located within VA facilities.
Although available to homeless veterans with any health problems, nearly 90 percent of the veterans treated by the program suffer from psychiatric illness or dependency on alcohol or other drugs. According to program officials, participation in the program has been voluntary because funds have been limited and the Department wants to support only those facilities that are strongly committed to assisting homeless veterans. In past years, facilities that wanted to participate prepared a proposal, which was evaluated by a Veterans Health Administration committee, and funds were allocated according to the merits of the individual proposals. Matching Requirements: None. Veterans who are homeless or at risk of becoming homeless and have a clinical need for VA-based biopsychosocial residential rehabilitation services are eligible for this program. Although each VA facility has a homeless coordinator, VA, with one exception, has no specific requirement for facilities to participate in initiatives for homeless veterans. According to a September 1996 Inspector General's report, 35 of VA's 173 hospitals nationwide had Domiciliary Care for Homeless Veterans programs. Administering Agency: U.S. Department of Veterans Affairs (VA) Funding Type: Direct services and contract awards This program provides care, treatment, and rehabilitative services to homeless veterans suffering from chronic mental illness. Services are provided in halfway houses, therapeutic communities, psychiatric residential treatment centers, and other community-based treatment facilities. VA refers to this program, and many of the supportive programs (see app. III), as Health Care for Homeless Veterans programs. Although all of these programs have continued to expand and diversify in recent years, the Homeless Chronically Mentally Ill (HCMI) Veterans program remains the core of these efforts, and its core activity is outreach. According to a study performed by VA's Northeast Program Evaluation Center, one dominant theme of this program has been the increased involvement with community providers. By exchanging resources with other agencies, VA has been able to leverage additional resources for homeless veterans that would otherwise be inaccessible or prohibitively expensive. Community-based residential treatment providers and other providers of services for the homeless may receive contracts from, or enter into partnerships with, local VA medical centers. According to an April 1998 study, there are 62 Homeless Chronically Mentally Ill program sites in 31 states and the District of Columbia, forming the largest integrated network of treatment programs for the homeless in the United States. In addition, the Homeless Chronically Mentally Ill program has active contracts with over 200 community-based residential treatment facilities to provide treatment and rehabilitation to these veterans at an average cost of $41 daily. Matching Requirements: None. Homeless veterans with substance abuse problems and/or chronic mental illnesses who are eligible for VA health care are also eligible for the HCMI program. Staff seek out homeless veterans in shelters, on the streets, in soup kitchens, or wherever they may reside. Program Limitations: While the program constitutes the nation's largest integrated network of assistance programs for the homeless, it does not cover every state or every geographical area.
Furthermore, access to the program's contract residential treatment component at individual sites depends on available bed space in programs that meet VA's criteria for therapeutic support and comply with federal and fire safety codes. Administering Agency: U.S. Department of Veterans Affairs (VA) Funding Type: Project grants The purpose of this program is to assist public and nonprofit entities in establishing new programs and service centers that furnish supportive services and supportive housing for homeless veterans. Grants may be used to acquire, renovate, or alter facilities and to provide per diem payments, or in-kind assistance in lieu of per diem payments, to eligible entities that established programs after November 10, 1992, to provide supportive services and supportive housing for homeless persons. Applicants eligible for grants include public and nonprofit private entities that (1) have the capacity to effectively administer a grant, (2) can demonstrate that adequate financial support will be available to carry out the project, and (3) agree to demonstrate their capacity to meet the applicable criteria and requirements of the grant program. Applicants eligible for per diem payments include public or nonprofit private entities that either have received or are eligible to receive grants. VA distributes the funds directly to the public or private nonprofit agency. Grantees must provide 35 percent of the project's total costs for grants and 50 percent of the service costs for per diem payments. Eligibility: Veterans—meaning persons who served in the active military, naval, or air service and were discharged or released under conditions other than dishonorable—are eligible to participate. No aid provided under this program may be used to replace federal, state, or local funds previously used, or designated for use, to assist homeless persons. In addition, the period of residence for a veteran in transitional housing should be limited to 24 months unless the veteran needs more time to prepare for independent living or appropriate permanent housing has not been located. This appendix describes some additional resources and activities used to assist homeless people. Agency officials and advocates for the homeless did not identify them as "key" programs but nevertheless considered them important. The appendix does not include all of the resources and activities that serve homeless people. Under federal base closure law, surplus buildings and other properties on military bases approved for closure or realignment are available to assist homeless persons. Assistance providers may submit notices of interest for buildings and property to local redevelopment authorities that have been designated to plan for the reuse of closing installations. The Department of Defense (DOD) provides planning grants to local redevelopment authorities for bases where it determines that closure will cause direct and significant adverse consequences or where it is required, under the National Environmental Policy Act of 1969, to undertake an environmental impact statement. The Department of Housing and Urban Development's (HUD) Base Development Team in Washington, D.C., provides policy coordination, and HUD's field offices provide technical assistance to local redevelopment authorities and assistance providers throughout the planning process.
DOD commissaries donate unmarketable but edible food to private food banks that, in turn, provide food to soup kitchens, homeless persons, or assistance providers, as well as other needy people. The donated food is owned by private vendors serving DOD commissaries. If a private vendor finds that the food is unneeded and that it is uneconomical to return the food to the supplier, the vendor donates the food for homeless persons' use. FEMA certifies food banks and other recipients as eligible to receive the food. DOD provides unneeded bedding articles (cots, blankets, pillows, pillow cases, and sheets) to various non-DOD shelters. Most of the bedding is distributed through the General Services Administration's Federal Surplus Personal Property Program, but DOD distributes the surplus blankets directly. DOD emphasizes that the program is intended to supply blankets to homeless shelters, not to distribute blankets to homeless individuals generally. The blankets are not intended to be sold. Shelters for the homeless may qualify for this program. The Department of Energy will insulate the dwellings of low-income persons, particularly elderly and disabled persons, to conserve needed energy and reduce utility costs. A unit is eligible for weatherization assistance if it is occupied by a "family unit" and if certain income requirements are met. (A "family unit" includes all persons living in the dwelling, regardless of whether they are related.) The Battered Women's Shelters program provides grants to states and Indian tribes to assist them in (1) supporting programs and projects to prevent family violence and (2) providing immediate shelter and related assistance for victims of family violence and their dependents. The Center for Mental Health Services Knowledge Development and Application (KD&A) program promotes continuous, positive service delivery system change for persons with serious mental illnesses and children and adolescents with severe emotional disturbances. This program currently funds several projects/demonstrations related to homelessness, including ACCESS, an interdepartmental effort to test the impact of systems integration on outcomes for homeless people with mental illnesses. The ACCESS project is designed to study both system and client-level outcomes and is now entering its final phases of data collection and analysis. Other projects include (1) an evaluation of the effects of different housing models on residential stability and residents' satisfaction and (2) an investigation of targeted homeless prevention intervention for persons under treatment for mental illness who are judged to be at risk of subsequent homelessness. The Center also funds the Community Team Training Institute on Homelessness, a fiscal year 1997 initiative that was jointly sponsored by other components of HHS and HUD. Five communities were competitively selected to receive intensive technical assistance to help them achieve a seamless system of care for homeless individuals with multiple diagnoses (chronic health problems, substance abuse, HIV/AIDS, and/or mental disorders). The Center for Substance Abuse Treatment is supporting activities through the Knowledge Development and Application (KD&A) program to develop and test innovative substance abuse treatment approaches and systems. This program tests information derived from research findings and sound empirical evidence and distributes to the field cost-effective treatment approaches for curbing addiction and related behaviors. 
Under the KD&A program, the Center for Substance Abuse Treatment is collaborating with the Center for Mental Health Services to administer a Homeless Prevention Program, which documents interventions for individuals with serious mental illnesses and/or substance abuse disorders who are at risk of subsequent homelessness. Eight projects, currently in their third and final year, are evaluating strategies that were developed and documented in the first year of the program. Information on this program will be published in a special edition of Alcohol and Treatment Quarterly in the spring of 1999. The Special Projects of National Significance (SPNS) program, part F of the Ryan White Comprehensive AIDS Resources Emergency Act, supports the development of innovative models of HIV/AIDS care, designed to address the special care needs of individuals with HIV/AIDS in vulnerable populations, including the homeless. These projects are designed to be replicable in other parts of the country and have a strong evaluation component. The SPNS program's HIV Multiple Diagnosis Initiative, a collaboration between the Department and HUD, focuses on integrating a full range of housing, health care, and supportive services needed by homeless people living with HIV/AIDS whose lives are further complicated by mental illness and/or substance abuse. Sixteen nonprofit organizations will receive funding. These organizations will contribute information to a national data set on (1) the service needs of homeless, multiply diagnosed HIV clients and variations in their needs linked to sociodemographic characteristics, health status, and history of status; (2) the types of services being provided; (3) the barriers in service systems to providing appropriate care to clients; and (4) the relationship between comprehensive services and improved patient outcomes with regard to housing, mental health, social functioning, the reduction of high-risk behaviors, adherence to treatment protocols, and overall health and quality of life. The Department of Justice's Office for Victims of Crime administers the Crime Victims Fund, which distributes grants to states to assist them in funding victim assistance and compensation programs. Under the victim assistance grant program, states are required to give priority to victims of child abuse, domestic violence, and sexual assault by setting aside at least 10 percent of their funding for programs serving these victims. While there is no specific initiative directed towards homeless persons, many local domestic violence shelters provide a safe place for women and children who find themselves on the streets following violence in the home. In addition to providing refuge for domestic violence victims, these shelters offer counseling, criminal justice advocacy, and referrals to other social service programs. Support for the program comes from fines and penalties paid by federal criminal offenders. The earned income credit is a special tax benefit for working people who earn low or moderate incomes. Its purposes are to (1) reduce the tax burden on low- and moderate-income workers, (2) supplement wages, and (3) make work more attractive than welfare. Workers who qualify for the credit and file a federal tax return can get back some or all of the federal income tax that was taken out of their pay during the year. They may also get extra cash back from the Internal Revenue Service. Even workers whose earnings are too small to have paid taxes can get the credit. 
The credit reduces any additional taxes workers may owe. Single or married people who worked full time or part time at some point during the year can qualify for the credit, depending on their income. The Departments of Veterans Affairs and Housing and Urban Development, and Independent Agencies Appropriations Act, 1990 (P.L. 101-144) authorized the use of rental assistance vouchers (to subsidize rental costs for up to 5 years). VA clinicians help homeless mentally ill and substance-abusing veterans locate and secure permanent housing using these rental assistance vouchers. Once housing is secured, clinicians provide veterans with the longer-term clinical and social support they need to remain in permanent housing. This program assists homeless veterans in finding transitional or permanent housing but does not provide rental assistance vouchers. The program involves working with veterans' service organizations, public housing authorities, private landlords, and other housing resources. As in the initiative with HUD, clinicians provide veterans with the longer-term clinical and social support (case management) they need to remain in housing. In 1991, VA and the Social Security Administration (SSA) initiated a joint project designed to expedite claims for Social Security benefits to which homeless veterans are entitled. Under the project, SSA representatives work with staff from VA's Homeless Chronically Mentally Ill Veterans and Domiciliary Care for Homeless Veterans programs to identify homeless veterans who are entitled to benefits and help them obtain the necessary income and eligibility certifications, medical/psychiatric examinations, and substance abuse treatment (if such treatment is a condition of the SSA benefit award). According to an April 1998 study, the initiative was operating at four sites and had helped 3,114 veterans file SSA applications. The study reported that 692 veterans had received benefits. Program staff contract with private and public industry, including VA, to secure paying work for homeless veterans. The work is used as a therapeutic tool to improve the veterans' functional levels (work habits) and mental health. While in the program, veterans participate in individual and group therapy and are medically followed on an outpatient basis. The Veterans Programs for Housing and Memorial Affairs Act (P.L. 102-54) authorized VA to operate therapeutic transitional residences along with furnishing compensated work therapy. This program provides housing in community-based group homes for homeless and nonhomeless veterans while they work for pay in the program. The veterans must use a portion of their wages to pay rent, utilities, and food costs; their remaining wages are set aside to support their transition to independent living. As in the Compensated Work Therapy program, homeless veterans participate in individual and group therapy and are medically followed on an outpatient basis. The Homeless Veterans Comprehensive Service Programs Act of 1992 (P.L. 102-590) provided the impetus to collocate veterans benefits counselors with Homeless Chronically Mentally Ill and Domiciliary Care for Homeless Veterans staff to focus greater efforts on reaching out to homeless chronically mentally ill veterans. 
Under this program, at selected VA regional offices, the Veterans Health Administration (VHA) is providing reimbursed funding for the commitment of full-time or part-time veterans benefits counselors who collaborate with VA medical centers on joint outreach, counseling, and referral activities, including assistance in applying for VA benefits. The VA regional office receives reimbursed funding for the veterans benefits counselor but does not receive an increase in staffing levels. This program provides a 24-hour-a-day therapeutic setting that includes professional support and treatment for chronically mentally ill veterans in need of extended rehabilitation and treatment. According to a 1996 VA document, one such program was funded for homeless veterans in Anchorage, Alaska. Drop-in centers offer safe daytime environments where homeless veterans may find food, take a shower, wash their clothes, participate in a variety of therapeutic and rehabilitative activities, and establish connections with other VA programs that provide more extensive assistance. The centers also offer basic education on topics such as HIV prevention and good nutrition. The drop-in programs serve as "points of entry" to VA's longer-term and more intensive treatment programs. These centers provide an array of VA and community resources in one framework to develop local comprehensive and coordinated services to help homeless veterans. Staff form strong ties with their communities to eliminate overlap and duplication of efforts and to streamline service delivery. Resources include city, county, and state governments; local representatives of the federal agencies that provide assistance to the homeless; and other local VA activities for homeless veterans. The Veterans' Medical Programs Amendments of 1992 (P.L. 102-405) authorized VA to conduct a nationwide needs assessment of homeless veterans living within the area served by each VA medical center and regional office. This assessment is being conducted through a series of VA-hosted meetings of public and private providers of assistance to the homeless. The goal of the program is to obtain information on which needs of homeless veterans have and have not been met in each region and on the assistance available from non-VA providers. A secondary goal is to bring all relevant agencies and organizations together in communitywide efforts to improve the assistance provided to homeless veterans. "Stand-down" is a military term used by VA and non-VA providers of assistance to homeless persons. In this context, the term denotes an array of services provided in one location for a day or several days. Services include meals, haircuts, clothing, sleeping bags, minor medical care, dental and eye examinations, benefits counseling, legal assistance, and identification cards. Program officials have encouraged VA staff to participate in these community efforts and have provided additional funds, as available. The primary goal of a stand-down is to provide outreach and assistance to homeless veterans; however, these events also serve to bring VA and non-VA community providers together in one effort. According to a VA document, in fiscal year 1995, VA participated in over 45 stand-downs. Vet center staff provide a full range of assistance to veterans and their families, paying particular attention to war-related psychological and social problems that may interfere with returning to civilian life. 
The staff are specially skilled in community outreach, which is essential for making contact with lower-income veterans and homeless veterans, and in providing counseling, evaluation, and referral services to other VA facilities. In 1996, when VA released this information, there were 205 vet centers whose staff reported that approximately 5 percent of their annual visits were designed to provide direct assistance to homeless veterans. The Veterans' Home Loan Program Improvements and Property Rehabilitation Act of 1987 (P.L. 100-198) authorized the Secretary to enter into agreements with nonprofit organizations, states, or political subdivisions to sell real property acquired through default on VA-guaranteed loans, as long as the solvency of the Loan Guaranty Revolving Fund was not affected. The Homeless Veterans Comprehensive Service Programs Act of 1992 (P.L. 102-590) extended VA's authority to lease, lease with the option to sell, or donate VA-acquired properties. According to a VA document, between July 1988 and December 31, 1995, VA sold, leased, or donated a total of 99 properties. This program locates excess federal and other personal property (e.g., clothing, sleeping bags, toiletries, and shoes) for distribution to homeless veterans at stand-downs or through other VA programs for assisting the homeless. According to a VA document, during fiscal year 1995, VA distributed over $6 million in excess clothing and supplies to homeless veterans. Title V of the Stewart B. McKinney Homeless Assistance Act gives assistance providers an opportunity to lease surplus federal properties for services, such as emergency shelters, offices, and facilities for feeding homeless persons. VA's surplus property initiative is a national program for homeless veterans that allows VA to provide assistance by transferring leases of surplus real property to nonprofit organizations caring for homeless persons. This initiative has two major components: the Title V Surplus Property Program and direct leases of facilities made by VHA field directors to nonprofit organizations. According to a VA document, in March 1995, VA's Under Secretary for Health made a special request to VHA field facilities to make more VA properties available to help homeless veterans. Veterans Benefits Administration counselors go out into the community to identify homeless veterans and determine their eligibility for VA benefits. The goal of the program is to improve homeless veterans' access to VA benefits. This program is conducted through existing resources at applicable VA regional offices. Public agencies and nonprofit, tax-exempt institutions or organizations that provide food, shelter, and support services to the homeless may obtain personal property through the Surplus Federal Personal Property Donation Program. The General Services Administration administers the program through a network of state agencies for surplus property (SASP). Under the Federal Property Act, "excess" federal personal property must first be offered to other federal agencies. Any surplus property no longer needed by the federal government is made available to the SASP. Eligible organizations apply to their state agency. The SASP directors determine eligibility and distribute the property to qualified entities. Property donated for use by the homeless can include blankets, clothing, appliances, furniture, and other items. 
The objective of this program is to make available, through lease, permit, or donation, certain real federal property for use to assist the homeless. State and local governments and private nonprofit agencies acting as representatives of the homeless may obtain the use of unutilized or underutilized federal properties through lease, permit, or donation. The General Services Administration identifies and sends a list of surplus properties to HUD. Periodically, HUD publishes a Notice of Funding Availability listing suitable and available properties for which organizations seeking to use the properties to assist the homeless can then apply. HHS reviews and approves all applications for the use of these properties by homeless assistance providers. The McKinney Act requires all federal landholding agencies to identify all unutilized, underutilized, excess, and surplus properties, and to send a listing of the properties to HUD. Four federal agencies have special provisions or preferences for selling or leasing certain properties in their inventory that have been acquired through foreclosure to public agencies or nonprofit organizations for use in programs to assist homeless people. These agencies include HUD's Federal Housing Administration, the U.S. Department of Agriculture's Rural Housing Service, VA, and the Federal Deposit Insurance Corporation.
Table notes: Targeted programs specifically serve homeless people. Nontargeted programs generally target low-income people with special needs. For audit purposes, blindness is included as a disability. Not applicable. This program did not receive funding until 1996. Funding for this fiscal year is included in the outlays for the Section 8 Rental Certificate and Voucher Program. Not applicable. Funding for this program was not provided in this fiscal year. The funding for this program is included in the funding for Consolidated Health Centers. Not applicable. This program was created by the Balanced Budget Act of 1997, and funding was not provided until fiscal year 1998. Not applicable. This program was created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, and funding was not provided until fiscal year 1997. Includes Entitlement, States, and Small Cities programs.
The following are GAO's comments on the Department of Health and Human Services' (HHS) letter dated February 10, 1999.
1. After reviewing HHS' comments, we deleted the comment made by the Special Assistant to the Secretary because it was not central to the discussion in the report. We also revised the reference to "billions of dollars worth of resources" to clarify that the resources assist low-income people generally, including the homeless.
2. We agree that a significant percentage of homeless single men are disabled and that some of their disabilities may qualify them for SSI and/or Medicaid. We added language to the report to clarify this.
3. As appropriate, we changed the attributions in the report.
4. We added the word "foreclosed" to clarify the type of surplus properties.
5. We made the suggested changes to indicate that services are "eligible" rather than "provided."
6. We agree that while several programs provide or could provide the same service, there is significant variation in the intensity of the same service across programs. This audit was not designed to identify variations in services.
7. We made the suggested change to point out that TANF resources can be used to provide rental assistance.
8. In response, we deleted the column totals from table 2 but retained the row totals because the number of programs that provide a particular type of service is relevant information.
9. We included a sentence that refers to the National Survey of Homeless Assistance Providers and Clients.
10. The section of the report cited by HHS discusses the joint administration of programs or resources. Because HHS' examples of interagency collaboration do not illustrate this topic, we did not include them in the report.
11. We made the suggested changes to appendix I to indicate that additional services can be provided through HHS' programs.
12. We deleted the reference to the number of homeless persons served from the program summary for Community Health Centers.
13. We made the suggested technical changes to the program summaries in appendix II.
14. We made the suggested technical changes to appendix III.
15. The three Runaway and Homeless Youth programs target children and youth, including those who are disabled or have mental illnesses, substance abuse disorders, or HIV/AIDS. Thus, we did not make the suggested change.
16. We made the suggested changes to update the information on program funding provided in appendix V.
The following are GAO's comments on the Department of Housing and Urban Development's (HUD) letter dated February 2, 1999.
1. It is not our intent to hold the Council to the same standards now as when it had its own budget. The purpose of the section on the Council is, first, to indicate that it is one of several mechanisms through which programs and activities for the homeless are coordinated and, second, to explain its status. However, we noted HUD's concerns, adding some of the points that the Department suggested, such as an example of the Council's long-term coordination efforts and the statement that the Council's policy group has discussed ways of improving coordination between targeted and nontargeted programs. We also added information on the frequency of the Council's meetings and stated HUD's belief that the Council is still very involved in coordinating federal efforts and sharing information.
2. We revised the report to eliminate the reference to mission fragmentation because an assessment as to why so many agencies provide similar services to the homeless was beyond the scope of this review.
3. We revised the report to include the additional agencies.
4. We revised this sentence to reflect the Council's staffing level.
5. We deleted this footnote.
6. We replaced the word "surplus" with the word "foreclosed."
7. We revised the sentence to reflect the Council's last meeting date and deleted the word "formerly." We also revised the text to make a clear distinction between the Council and its policy-level working group. A copy of the minutes from the policy group's April 1998 meeting indicates that representatives of HUD and the Department of Defense discussed the distribution both of surplus blankets and of surplus real property on base closure property. We added a reference to the surplus real property.
8. We replaced the word "providing" with the word "includes."
9. We revised the text to indicate that HUD considers the percentage of homeless persons who move from HUD transitional housing to permanent housing an outcome measure.
10. We made the wording changes suggested by HUD.
11. We revised the report accordingly.
12. At the beginning of the report, we list the criteria we used to select programs for inclusion in the report. We do not state in the report that program duplication exists; we observe that many of the programs offer similar services.
13. We made the technical and editing changes suggested by HUD.
14. The statement that "permanent housing may not be realistic" came from a program evaluation study prepared for HUD, in which homeless service providers expressed the view that since there is a limit (even if it is 5 years) on the length of time the housing is available, it is not necessarily permanent.
15. We included this statement because it was identified as a program limitation in a program evaluation prepared for HUD.
16. We made the technical and editing changes suggested by HUD.
17. In appendix IV, a mark under "general or low-income population" indicates that all or most of the categories of eligible groups are covered.
18. We made the suggested changes to appendix V.
Pursuant to a congressional request, GAO reviewed the federal approach to meeting the needs of the homeless, focusing on: (1) identifying and describing characteristics of the federal programs specifically targeted, or reserved, for the homeless, and key nontargeted programs available to assist low-income people generally; (2) identifying the amounts and types of funding for these programs in fiscal year (FY) 1997; and (3) determining if federal agencies have coordinated their efforts to assist homeless people and developed outcome measures for their targeted programs. GAO noted that: (1) 50 federal programs administered by eight federal agencies can provide services to homeless people; (2) of the 50 programs, 16 are targeted, or reserved for the homeless, and 34 are nontargeted, or available to low-income people generally; (3) while all of the nontargeted programs GAO identified may serve homeless people, the extent to which they do so is generally unknown; (4) both targeted and nontargeted programs provide an array of services, such as housing, health care, job training, and transportation; (5) in some cases, programs operated by more than one agency offer the same type of service; (6) 26 programs administered by six agencies offer food and nutrition services, including food stamps, school lunch subsidies, and supplements for food banks; (7) in FY 1997, over $1.2 billion in obligations was reported for programs targeted to the homeless, and about $215 billion in obligations was reported for nontargeted programs that serve people with low incomes, which can include the homeless; (8) over three-fourths of the funding for the targeted programs is provided through project grants, which are allocated to service providers and state and local governments through formula grants; (9) information is not available on how much of the funding for nontargeted programs is used to assist homeless people; (10) however, a significant portion of the funding for nontargeted programs is not used to serve the homeless; (11) about 20 percent of the funding for nontargeted programs is provided through formula grants; (12) the remainder of the funding for nontargeted programs consists of direct payments and project grants; (13) federal efforts to assist the homeless are being coordinated in several ways, and many agencies have established performance measures for their efforts; (14) some departments administer specific programs jointly; (15) although some coordination is occurring through the use of these mechanisms and most agencies that administer targeted programs for the homeless have identified crosscutting responsibilities related to homelessness under the Government Performance and Results Act, the agencies have not yet described how they will coordinate or consolidate their efforts at the strategic level; and (16) most agencies have established process or output measures for the services they provide to the homeless through their targeted programs, but they have not consistently incorporated results-oriented goals and outcome measures related to homelessness in their plans.
In 2002, DHS established its Directorate of Information Analysis and Infrastructure Protection. In 2005, the directorate was divided into two offices—I&A and the Office of Infrastructure Protection. I&A is headed by the Under Secretary for Intelligence and Analysis, who is responsible for providing homeland security intelligence and information to the Secretary of Homeland Security, other federal officials and agencies, members of Congress, departmental component agencies, and the department's state, local, tribal, territorial, and private-sector partners. I&A also provides staff, services, and other support to the Under Secretary related to efforts to lead, integrate, and manage intelligence activities across the department. I&A has undergone several transitions and realignments since its inception in 2002, which have affected all of the office's customers, including state and local partners. Several of I&A's divisions, offices, and branches have some role in helping the office meet its mission to share information with these partners. Most importantly, I&A's State and Local Program Office was established to manage a program to accomplish DHS's fusion center mission. Specifically, the office is responsible for deploying DHS personnel with operational and intelligence skills to fusion centers to facilitate coordination and the flow of information between DHS and fusion centers, provide expertise in intelligence analysis and reporting, coordinate with local DHS and Federal Bureau of Investigation (FBI) components, and provide DHS with local situational awareness and access to fusion center information. In addition to the State and Local Program Office's support to fusion centers, other entities within I&A are engaged in providing intelligence products and other products and services to state and local customers. For example, several analytic divisions—such as those that address border security and domestic threats—are responsible for conducting analysis and preparing intelligence reports on a variety of topics of interest to various stakeholders, including state and local entities. The Collection and Requirements Division gathers information needs from state and local partners, among other things, and the Production Management Division is responsible for finalizing intelligence reports that are prepared by the analytic divisions and distributing them to I&A's customers, including state and local partners. In addition, I&A's newly formed Customer Assurance Branch is now responsible for gathering and compiling feedback on the intelligence products that I&A provides to its customers, including state and local partners. Since the terrorist attacks of September 11, 2001, several statutes have been enacted that are designed to enhance the sharing of terrorism-related information among federal, state, and local agencies, and the federal government has developed related strategies and guidelines to meet its statutory obligations. Related to I&A, the Homeland Security Act of 2002 assigned the original DHS intelligence component—the Directorate of Information Analysis and Infrastructure Protection—responsibility to receive, analyze, and integrate law enforcement and intelligence information in order to (1) identify and assess the nature and scope of terrorist threats to the homeland, (2) detect and identify threats of terrorism against the United States, and (3) understand such threats in light of actual and potential vulnerabilities to the homeland. 
Further, the 9/11 Commission Act directs the Secretary of Homeland Security—through the Under Secretary for I&A—to integrate information and standardize the format of terrorism-related intelligence products. The act further directs the Secretary to create a mechanism for state, local, and tribal law enforcement officers to provide voluntary feedback to DHS on the quality and utility of the intelligence products developed under these provisions. DHS is also charged through the 9/11 Commission Act with developing a curriculum for training state, local, and tribal partners in, among other things, federal laws, practices, and regulations regarding the development, handling, and review of intelligence and other information. As part of DHS's information sharing with state and local entities, several provisions of the 9/11 Commission Act relate to support provided directly to fusion centers. Most states and some major urban areas have established fusion centers to, among other things, address gaps in terrorism-related information sharing that the federal government cannot address alone and provide a conduit for information sharing within the state. Specific to fusion centers, the act provides for the Under Secretary for Intelligence and Analysis to assign, to the maximum extent practicable, officers and intelligence analysts from DHS components—including I&A—to fusion centers. The act also provides that federal officers and analysts assigned to fusion centers in general are to assist law enforcement agencies in developing a comprehensive and accurate threat picture and to create intelligence and other information products for dissemination to law enforcement agencies. In October 2007, the President issued the National Strategy for Information Sharing, which identifies the federal government's information-sharing responsibilities to include gathering and documenting the information that state and local agencies need to enhance their situational awareness of terrorist threats. The strategy also calls for authorities at all levels of government to work together to obtain a common understanding of the information needed to prevent, deter, and respond to terrorist attacks. Specifically, the strategy requires that state and local law enforcement agencies have access to timely, credible, and actionable information and intelligence about individuals and organizations intending to carry out attacks within the United States; their organizations and their financing; potential targets; activities that could have a nexus to terrorism; and major events or circumstances that might influence state and local actions. The strategy also recognizes that fusion centers are vital assets that are critical to sharing information related to terrorism, and will serve as primary focal points within the state and local environment for the receipt and sharing of terrorism-related information. I&A has cited this strategy as a key document governing its state and local information-sharing efforts. Thus, in response to the designation of fusion centers as primary focal points, requirements in the 9/11 Commission Act, and the difficulty of reaching out to the thousands of state and local law enforcement entities nationwide, I&A views fusion centers as primary vehicles for sharing information with state and local partners. In October 2001, we first reported on the importance of sharing information about terrorist threats, vulnerabilities, incidents, and lessons learned. 
Since we designated terrorism-related information sharing a high-risk area in January 2005, we have continued to monitor federal efforts to remove barriers to effective information sharing. As part of this monitoring, in October 2007 and April 2008, we reported on our assessment of the status of fusion centers and how the federal government was supporting them. Our fusion center report and subsequent testimony highlighted continuing challenges—such as the centers' ability to access information and obtain funding—that DHS and the Department of Justice (DOJ) needed to address to support the fusion centers' role in facilitating information sharing among federal, state, and local partners. Specifically, the October 2007 report recommended that federal officials determine and articulate the federal government's role in helping to ensure fusion center sustainability. In response, in late 2008, I&A reported that it had dedicated personnel and other resources, as well as issued guidance, directly supporting fusion centers. We have ongoing work that is assessing fusion center sustainability and efforts to protect privacy, and expect to report the results of this work later this year. In June 2008, we reported on the federal government's efforts to implement the Information Sharing Environment, which was established to facilitate the sharing of terrorism and homeland security information. We recommended that the Program Manager for the Information Sharing Environment and stakeholders more fully define the scope and specific results to be achieved and develop performance measures to track progress. The Program Manager has taken steps toward implementing these recommendations but has not fully addressed them. We are continuing to review federal agencies' efforts to implement the Information Sharing Environment and expect to report the results of this work later this year. Finally, in December 2009, we reported on our assessment of DHS and FBI efforts to share information with local and tribal officials in border communities and recommended that DHS and FBI more fully identify the information needs of, and establish partnerships with, local and tribal officials along the borders; identify promising practices in developing border intelligence products with fusion centers and obtain feedback on the products; and define the suspicious activities that local and tribal officials in border communities are to report and how to report them. DHS agreed with the recommendations and described a number of actions it was taking or planned to take to implement them. The FBI did not provide comments. I&A has increased the number of intelligence products it disseminates to its state and local partners and is taking steps to work with fusion centers to increase their dissemination. I&A also has initiatives to identify state and local information needs to ensure that its products provide information of importance to these partners, but it has not worked with states to establish milestones for identifying these needs, which could better hold I&A accountable for assisting states in completing this process in a timely manner. Further, I&A has developed a new customer survey intended to gather more detailed feedback on its products, but it could enhance the transparency and accountability of its efforts and provide assurance that partners' views are informing its products by periodically reporting to its state and local partners on the steps it has taken to assess and respond to this feedback. 
To address requirements of the Homeland Security Act of 2002, as amended, and the 9/11 Commission Act, I&A prepares intelligence products on a number of topics for its many customers, including its state and local partners. I&A prepares these intelligence products based on a number of factors, including departmental priorities, areas of expertise, and departmental and customer needs. Examples of I&A products that are targeted to or adapted for state and local partners are as follows:
Daily Intelligence Highlights: Provide a compilation of significant and developing issues that affect homeland security.
Roll Call Release: Designed to provide information on possible tactics or techniques that could be used by terrorists or criminals. I&A prepares these products jointly with the FBI and the Interagency Threat Assessment and Coordination Group (ITACG). Topics covered in prior Roll Call Releases include concealment of explosive devices and homemade explosives.
Homeland Security Monitor: Provides multiple articles on a theme or topic. Examples of Homeland Security Monitors include the Border Security Monitor and Cyber Security Monitor.
Homeland Security Reference Aid: Provides information and context on an issue in various formats, such as primers, handbooks, historical overviews, organizational charts, group profiles, or standalone graphics such as annotated maps and charts.
From June 2009 through May 2010, I&A disseminated 16 percent more analytic intelligence products to its state and local partners through fusion centers than the previous year, and more than twice the number released over the previous 2 years. I&A also disseminates analytic products it develops jointly with the FBI, other federal agencies, and fusion centers. For example, of the products released from June 2009 through May 2010, approximately one-third were prepared jointly with the FBI or other federal agencies. In addition, from July 2007 through July 2010, I&A reported that it prepared several dozen joint products with fusion centers. These products included threat assessments for special events, such as the Presidential Inauguration and the Super Bowl. I&A also provides intelligence reports to fusion centers, as well as to federal agencies and members of the intelligence community, in the form of Homeland Intelligence Reports. These reports provide unanalyzed intelligence—generated by a single, unvalidated source—derived from operational or law enforcement data that I&A evaluated because of their homeland security relevance. From June 2009 through May 2010, I&A disseminated thousands of Homeland Intelligence Reports to its state and local partners through fusion centers. I&A officials noted that the number of reports disseminated has increased over time because of the overall increase in the number of submissions from DHS components, such as U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement, as well as greater reporting by state and local partners. In 2009, I&A commissioned a study in response to concerns voiced by state and local first responders and first preventers (e.g., law enforcement, fire departments, emergency management, health services, critical infrastructure providers, and other relevant stakeholders) that they were not receiving enough useful information products from fusion centers. The study examined a number of issues, such as how fusion centers disseminate products to these partners—what the study referred to as the "last mile" of dissemination—in order to identify common challenges and best practices. 
The March 2010 report contains recommendations for I&A and fusion centers. Recommendations for I&A include ensuring that the results of the study are made widely available; working with fusion centers to discuss how some ideas from the report (e.g., establishing a policy for product dissemination) could be implemented; ensuring that deployed I&A officers can help fusion centers adopt best practices and policies; expanding the development of products geared towards first responders and preventers; and incorporating descriptions of why the distributed product is relevant to the state or local entity. In response to these recommendations, the Acting Director of I&A’s State and Local Program Office said that I&A intelligence officers at fusion centers have been directed to work with their fusion centers to develop better policies and procedures for product dissemination. As of August 2010, I&A had worked with 9 of 50 states to collect and validate their definition of the kinds of information they need for their homeland security efforts. I&A was also working with another 32 states to help identify and define their needs. In 2007, I&A began its initial effort to identify the information needs of its state and local partners in conjunction with a pilot study that found that I&A had not identified fusion center needs for product development or produced intelligence products tailored to those needs. Specifically, the study found that fusion center leaders at pilot sites did not believe that DHS intelligence products fully met their mission needs by providing information of operational importance to state and local law enforcement. The study also found that DHS did not have an intelligence process that identified fusion center needs to inform reporting and analysis, produced products tailored to those requirements, or collected feedback from fusion centers on the value of these products. During 2007, I&A identified the information needs from five of the six fusion centers that it contacted during its pilot study, according to I&A officials. These information needs included topics such as border security and threats posed by prison radicalization. I&A reached out to nine additional fusion centers in 2008, and was able to obtain and validate information needs from four of them, which submitted their needs on a voluntary basis. Thus, over the first year and a half of these efforts, I&A obtained and validated information needs from a total of nine fusion centers. I&A planned to visit an additional eight fusion centers in 2009 but only visited one center before efforts were suspended in March 2009, with no resulting compendium of fusion center needs. According to a senior I&A official, the process I&A was using to obtain these needs was time consuming and inefficient. The official explained that a number of different I&A entities were involved in gathering these needs, visiting fusion centers one at a time, and following up with each to validate the needs. In March 2009, I&A refocused its efforts to identify Standing Information Needs for each state, which I&A defines as “any subject, general or specific, for which there is a continuing need for intelligence, which will establish a foundation for guiding intelligence collection efforts and reporting activities.” Examples include the need for information on individuals or groups that are capable of attacking critical infrastructure and key resources, and emerging cross-border connections between transnational criminal organizations or gangs. 
According to an Acting Deputy Director of I&A's Domestic Threat Analysis Division, Standing Information Needs are focused on long-term analytic needs, whereas prior efforts to collect information needs were focused on identifying and providing products in response to more immediate information needs—a function now handled through I&A's Single Point of Service initiative, which is discussed later in this report. I&A describes its approach to assisting states in identifying their Standing Information Needs as a twofold process. First, I&A provides states with a list of general topics—such as critical infrastructure protection—that align with DHS's Standing Information Needs for their use in identifying areas of interest. I&A then poses a series of questions to state fusion center personnel to help them define more detailed information needs under those topics in an organized and complete manner. In October 2009, I&A began soliciting these needs from all state fusion centers with I&A intelligence officers, except for 3 that had taken part in the pilot phase of the program. As of August 2010, 9 states had completed efforts to identify their information needs, 12 states had completed drafts that were awaiting final state approval, and 20 states were in the process of drafting their needs. After the states have finalized their Standing Information Needs, I&A plans to assist them in prioritizing those needs. According to the Deputy Director of I&A's Collection and Requirements Division, I&A has begun providing products to states in response to Standing Information Needs that the states have submitted. The official noted that these products are labeled in a manner that makes a clear link between the state's identified need and the product that is issued, and that the products are also sent to other stakeholders that may have similar interests. Thus, I&A reports that it can track states' needs from the time they are received through each product provided in response to those needs. According to I&A, this effort is currently completed manually and is labor-intensive. I&A is researching tools to automate the Standing Information Needs process to ensure that products reach as many customers as possible by distributing reports generated as a result of these needs to all interested parties. I&A is making progress in gathering and responding to state Standing Information Needs and has developed internal milestones for completing the identification of these needs. According to standard program management principles, time frames or milestones should typically be incorporated as part of a road map to achieve a specific desired outcome or result; in this case, development of a nationwide compendium of state and local information needs. According to I&A, because these needs are state-owned and approved documents, I&A cannot compel states to meet its internal milestones. Nevertheless, working closely with states to jointly develop such milestones is particularly important given the past challenges I&A has encountered in identifying these needs, and given that it has spent nearly 3 years in this process and has completed efforts to identify needs from nine states to date. According to the Deputy Director of I&A's Collection and Requirements Division, while assisting states in developing their Standing Information Needs is a significant priority, the biggest challenge the division faces in addressing this priority is limited resources. 
I&A has two to three staff assigned to work with states to gather these needs, and those staff are sometimes pulled from this task to deal with other, higher-priority issues. For example, the official noted that in the spring of 2010, the staff were taken from this work to advise the U.S. Coast Guard on methods of information gathering and reporting regarding the British Petroleum Deepwater Horizon oil spill. While we recognize that states have the lead in defining their needs, given the importance that both I&A and its state and local partners place on having state and local needs drive intelligence and product development, it is important that these needs be identified as expeditiously as possible. Working with states to establish milestones for developing their information needs and identifying and addressing any barriers to developing those needs and meeting milestones could better hold I&A accountable for assisting the states in the timely completion of this process. Historically, the primary mechanism I&A used to collect feedback on its intelligence products was to include a reference to an unclassified e-mail address in each product that recipients could use to submit comments. Other feedback mechanisms include Web sites used to disseminate information, teleconferences, and information gathered by I&A officers located at fusion centers, a practice that officials at 6 of the 10 fusion centers we contacted preferred over replying via e-mail. The level of feedback I&A has received on its products through this e-mail address has increased and has largely been positive. Specifically, I&A's report to Congress on voluntary customer feedback—required by the 9/11 Commission Act—shows that from June 2008 through May 2009, I&A received 175 feedback responses on intelligence products from state and local customers, versus 50 responses during the prior reporting period. I&A's analysis of the responses shows that about 67 percent were positive, meaning that respondents found the products useful for planning and resource allocation. Appendix I presents more information on how I&A categorizes the feedback it has received. Officials at 9 of the 10 fusion centers we contacted said that they found I&A's products to be generally helpful. For example, officials from 2 fusion centers cited I&A reports on the attempted Christmas Day 2009 airline bombing as examples of relevant information that was provided to them in a timely manner. Regarding Homeland Intelligence Reports, I&A said that state and local partners' feedback has been minimal, and that it is continuing to encourage them to comment on these reports so that I&A can adjust these products to meet its partners' needs. One example cited in I&A's latest customer feedback report to Congress illustrates the importance of obtaining feedback for supporting I&A efforts to improve its future products. Specifically, a fusion center expressed concerns that the perspectives of 3 southwest border state fusion centers were not included in an assessment that I&A headquarters produced on border violence. The feedback resulted in teleconferences and other I&A actions to ensure that state and local perspectives are included in future assessments of border violence. According to I&A officials, the amount and detail of feedback received to date, while positive, has been of limited use in improving product development. 
Thus, in 2010 I&A began using a new customer satisfaction survey to gather more meaningful feedback from state and local partners on its intelligence products and other areas of support. For example, the survey asks respondents how the product was used to support their mission, how it could be improved, and their level of satisfaction with the timeliness and relevance of the product to the respondents' intelligence needs. I&A plans to use the survey results to establish who in the state and local community is accessing its reports, and to make improvements to intelligence products that increase customer satisfaction. According to the Chief of I&A's newly formed Customer Assurance Branch—which is responsible for managing efforts to collect and analyze feedback on I&A's analytic services—I&A began deploying the survey to all recipients of products marked "For Official Use Only" in March 2010. As of May 2010, I&A officials said that they had received several hundred responses to this survey, approximately half of which were from state, local, tribal, and territorial partners—more than double the number of responses from these partners over the previous year of reporting. The results of these feedback surveys are to be sent directly to the analysts and divisions preparing intelligence products for incorporation into ongoing and future work, according to agency officials. The officials noted that this survey is to be one part of a larger effort to capture and manage feedback on not only I&A's intelligence products but also services that it provides internally to its analysts and report preparers. According to I&A, once it has gathered data for one full quarter, it will begin to examine different ways that it can compile and assess the information gathered from these surveys. I&A anticipates that its efforts will include organizing feedback survey responses by the type of product issued (e.g., Homeland Security Monitor), analytic division, and product topic (e.g., border security or critical infrastructure). Organizing feedback in this way could help I&A determine the value and responsiveness of its particular product types to state and local customer needs, and in turn help I&A focus its limited resources. At the time of our review, I&A planned to report the results of such analyses through its upcoming 2010 report to Congress on voluntary feedback from state and local customers. I&A has also taken initial steps to report the results of its feedback analysis directly to state and local customers. Specifically, during the summer of 2010, I&A provided briefings on the value of this feedback during two stakeholder forums, according to an official from I&A's Customer Assurance Branch. This official added that I&A plans to continue using stakeholder forums—such as conferences and meetings of fusion center directors—to report on I&A's assessment of state and local feedback and its use in refining I&A products. However, I&A had not developed plans specifying when it would provide such reporting, how frequently, or in what level of detail. Standards for Internal Control in the Federal Government require agencies to ensure effective communication with external stakeholders that may have a significant impact on an agency achieving its goals—in this case, I&A's state and local information-sharing partners. In addition, standard program management principles call for time frames or milestones to be developed as part of a road map to achieve a specific desired result. 
As I&A moves forward with its efforts to collect and analyze feedback from state and local partners, developing plans for reporting the results of its feedback analysis—including time frames and level of detail—to these partners, along with the actions it has taken in response, could help I&A demonstrate that the feedback is important and makes a difference. In turn, this could encourage state and local partners to provide more feedback and ultimately make I&A's products and services more useful.

In addition to intelligence products, I&A provides a number of other services to its state and local partners to enhance information sharing, analytic capabilities, and operational support, and these services generally have been well received, based on our discussions with officials at 10 fusion centers and published third-party reports on I&A operations. For example, I&A has deployed intelligence officers—who assist state and local partners in a number of information-sharing efforts—to more than half of all fusion centers. I&A also facilitates access to information-sharing networks, provides training directly to fusion center personnel, and operates a 24-hour service to respond to state and local requests for information and other support.

As part of its efforts to support fusion centers, I&A's State and Local Program Office assigns intelligence officers to fusion centers. These officers serve as DHS's representatives to fusion centers and assist them in a number of efforts—such as providing connectivity to classified data systems, training opportunities, and warnings about threats—and generally educate them on how to better use DHS capabilities to support their homeland security missions. In addition, I&A assigns regional directors to fusion centers who, among other things, are responsible for supervising I&A intelligence officers at fusion centers within their region and providing operational and intelligence assistance to the centers, particularly those without intelligence officers on-site. As of August 2010, I&A had deployed 62 intelligence officers and 6 regional directors to fusion centers—an increase of 32 intelligence officers since June 2009, while the number of regional directors remained the same. I&A plans to have an intelligence officer deployed to each of its 72 designated fusion centers, as well as appoint 10 regional directors, by the end of fiscal year 2011. Figure 1 shows the locations where I&A intelligence officers and regional directors had been deployed as of August 2010.

Of the 10 fusion centers we contacted, 7 had an I&A intelligence officer or regional director on site, and fusion center officials at all 7 locations had positive comments about the support the I&A officials provided. Fusion center officials at the other 3 locations said that they received support through regional directors in their area or an I&A officer in a neighboring state. Fusion center officials at 8 of the 10 centers noted that the presence of I&A officers or regional directors (on site or in their region) was important for obtaining intelligence products from DHS. According to one director, the center was recently assigned an I&A officer who alerted center officials to products of which they were previously unaware. In particular, the director noted that the I&A officer was able to access and share Border Patrol daily reports that were very helpful to local law enforcement operations.
In addition, officials at 9 of the 10 fusion centers we contacted said that the I&A officers were particularly helpful in providing technical assistance (e.g., guidance on how the center should operate) or in notifying the centers about available training.

As of May 2010, I&A had funded and facilitated the installation of the Homeland Secure Data Network (HSDN)—which allows the federal government to share Secret-level intelligence and information with state, local, and tribal partners—at more than half of all fusion centers. Additional centers are undergoing facilities certification in order to be accredited to house HSDN. I&A has established a goal of deploying HSDN to all 72 fusion centers. In addition, DHS's Homeland Security Information Network (HSIN) is used for sharing sensitive but unclassified information with state and local partners through a number of "community of interest" portals. One of the key portals is HSIN-Intel, which houses a section known as the Homeland Security State and Local Intelligence Community of Interest (HS SLIC)—a virtual community for federal, state, and local intelligence analysts to interact. As of June 2010, HS SLIC had approximately 1,900 state and local users, an increase from the approximately 1,082 state and local users in September 2008. In addition to the HSIN portal, HS SLIC program officials in I&A facilitate weekly teleconferences, biweekly secure teleconferences, and quarterly conferences to share information with interested state and local parties. In an April 2009 report, the Homeland Security Institute (HSI) credited HS SLIC with fostering "the broader sharing of homeland security intelligence and information." In addition, all 10 of the fusion centers we contacted were using HS SLIC, and 6 of the 10 cited it as useful for identifying relevant information that supports fusion center activities.

In response to a 9/11 Commission Act requirement to develop a curriculum for training state, local, and tribal partners in the intelligence cycle and other issues involving the sharing of federal intelligence, I&A offers a number of courses for state and local analysts and officials. For example, I&A's State and Local Program Office offers training courses directly to fusion center personnel, as shown in table 1. Course feedback that I&A provided to us is largely positive. Further, officials from 8 of the 10 fusion centers we contacted reported receiving training provided or sponsored by I&A and were generally satisfied with this training. In addition to the courses above, I&A's Intelligence Training Branch offers courses that are geared toward DHS intelligence analysts but made available to state and local analysts. These courses cover various topics, such as basic overviews of the intelligence community, critical thinking and analytic methods, and skills for writing intelligence products and briefings. Participant feedback scores provided as of late 2009 indicate that the courses are well received, and I&A has begun to provide some of this training directly to state and local analysts at field locations.

I&A also provides products and support in response to a variety of state and local information requests through a 24-hour support mechanism called the Single Point of Service. The service was established in May 2008 in response to an I&A-sponsored contractor study that recommended that I&A provide state and local partners with a 24-hour resource to request support, communicate product requirements, and share critical information with DHS and its components.
Through the Single Point of Service, I&A has consolidated and standardized its tracking of state and local customer queries and communication by using a single term—State and Local Support Request—which includes requests for information, production, administrative tasks, analysis, and various support functions. In addition, I&A has developed a set of goals, key performance indicators, and measures to track various performance aspects of the service, such as the timeliness of responses and the percentage of responses completed. Additional information on these items, as well as descriptions of State and Local Support Request categories, is contained in appendix II. To date, fusion centers that have I&A intelligence officers on site have used the Single Point of Service the most. Specifically, in the first quarter of fiscal year 2010, deployed I&A intelligence officers accounted for 76 percent of all requests submitted. According to I&A officials, the I&A intelligence officers on site are the focal points for the fusion center to submit requests to the Single Point of Service. According to the HSI report, the Single Point of Service program "greatly increased I&A's response to the information needs of fusion centers," and the 11 fusion centers that it spoke with "credited this program with significantly improving the process for requesting and receiving a timely response from DHS." Appendix III contains additional information on I&A products and services and other initiatives designed to support fusion centers and facilitate information sharing.

Part of I&A's mission is to share information with state and local partners, but I&A has not defined how it intends to meet this mission or established a framework to hold itself and its divisions accountable for meeting it. As of September 2010, I&A had developed a high-level officewide strategy that defines goals and objectives and had taken initial steps to further define the portion of its mission related to state and local information sharing. However, I&A had not yet identified and documented the programs and activities that are most important for executing this mission or how it will measure its performance in meeting this mission and be held accountable for results.

I&A has undertaken a variety of initiatives to support its state and local information-sharing mission and has taken initial steps to determine how it could better achieve this mission. Historically, I&A's state and local programs and activities have been developed in response to a variety of factors, including its focus on addressing statutory requirements and efforts to leverage and support fusion centers that state and local agencies had established. I&A's efforts to implement this mission have also been affected by administration changes and evolving I&A leadership priorities. In addition, I&A has had to balance resources for supporting fusion centers and other state and local information-sharing programs and activities against other competing priorities. State and local partners are one of a number of customer sets the office supports, along with the Secretary; other DHS components, such as U.S. Customs and Border Protection; other federal agencies; and the intelligence community, with each competing for resources. For example, although Congress—through the 9/11 Commission Act—has stressed the importance of supporting fusion centers, DHS has not provided consistent funding for I&A to support the centers, though I&A has made investments on its own.
Specifically, until the fiscal year 2010 budget cycle, DHS did not request funds to support the deployment of I&A personnel to these centers. Rather, I&A had to reprogram funds from other areas to support this critical part of its state and local mission. According to the then-Director of I&A's State and Local Program Office, the lack of a consistent funding stream to support these deployments delayed I&A's efforts to provide needed resources to these centers.

I&A sponsored a study in 2007 to identify how it could enhance DHS's support to fusion centers, a key part of its efforts to meet its state and local mission. The results of the study identified several areas for improvement, including the need to better respond to fusion center requests for information and provide centers with reporting and analysis that addresses their mission-critical information needs. One initiative I&A undertook in response, which provided a more organized and integrated approach to supporting state and local customers, was to create a single point within the office that these customers could contact with their questions and requests for support and that would be held accountable for responding to these needs. In addition, in 2008, I&A sponsored an agencywide study that was conducted by the HSI to evaluate I&A programs related to its role in providing homeland security intelligence and information to various federal officials and agencies, members of Congress, and the department's state and local partners, among others. The resulting April 2009 report noted that I&A was an emerging organization still in the initial stages of its organizational development, including developing its strategic planning capabilities and strategic business processes. The report also noted that the lack of a strategic plan hindered I&A's efforts to conduct any type of officewide program or resource planning that could be appropriately tied to its mission, goals, and objectives. As a result, HSI found that various I&A components had developed their own goals, priorities, processes, and procedures and, in some cases, may have been working at cross-purposes. HSI also found that the lack of I&A efforts to allocate resources to support strategic goals and objectives prevented managers from organizing their efforts for long-term effectiveness, which left them unable to plan for growth or to adapt to emerging issues. As a first step, HSI recommended that I&A go through a strategic planning process and develop an overarching strategic plan in order to provide I&A leadership with a road map for making organizational changes. Specifically, HSI recommended that I&A develop a strategy that defines its overall mission, goals, objectives, priorities, and performance measures.

In December 2009, I&A developed a strategy that contains 4 overall goals that the office as a whole is to meet. For example, 1 of the goals is to serve as the premier provider of homeland security information and intelligence, and another goal is to build partnerships and foster teamwork. The strategy also contains 12 objectives that I&A plans to use to meet these goals. Two of these objectives focus on its state and local partners. The first is to strengthen the national network of fusion centers. Specifically, through a proposed Joint Fusion Center Program Management Office, I&A was to lead a DHS-wide effort to support fusion centers.
The role of this office was to ensure coordination across all departmental components, with the dual priorities of strengthening fusion centers and DHS intelligence products. According to DHS, the office was to have five primary responsibilities to make fusion centers more effective. Specifically, the office was to (1) survey state, local, and tribal law enforcement to get feedback on what information these "first preventers" need to do their job; (2) develop a mechanism to gather, analyze, and share national, regional, and local threat information up and down the intelligence network; (3) coordinate with fusion centers to continuously ensure they get the appropriate personnel and resources from DHS; (4) provide training and exercises to build relationships between fusion center personnel and promote a sense of common mission; and (5) train fusion center personnel to respect the civil liberties of American citizens. According to I&A officials, in August 2010, I&A did not receive congressional approval to establish this office. The officials noted that I&A's State and Local Program Office would assume the roles and responsibilities that were planned for the Joint Fusion Center Program Management Office.

The second objective that specifically addresses state and local partners is "to build, support, and integrate a robust information sharing capability among and between federal, state, local, tribal, and private sector partners." According to the Director of I&A's Program and Performance Management Division, most of the other 10 objectives will affect state and local partners—even though the objectives do not articulate this or discuss related programs and activities—and will involve components from across I&A's divisions and branches. For example, other goals and objectives involve identifying customer information needs, developing analytic products, obtaining feedback on products, and measuring performance. The Director noted that I&A may revise the strategy's goals and objectives in response to the February 2010 DHS Quadrennial Homeland Security Review Report to Congress, which outlines a strategic framework to guide the homeland security activities of DHS components. Appendix IV contains additional information on the goals and objectives in I&A's strategy.

I&A has begun its strategic planning efforts but has not yet defined how it plans to meet its state and local information-sharing mission by identifying and documenting the specific programs and activities that are most important for executing this mission. Congressional committee members who have been trying to hold I&A accountable for achieving its state and local mission have been concerned about I&A's inability to demonstrate the priority and level of investment it is giving to this mission compared to its other functions, as evidenced by hearings conducted over the past several years. I&A recognizes that it needs to take steps to address its state and local information-sharing mission and define and document priority programs and activities. For example, in June 2010, I&A conducted focus groups with representatives of various customer sets—including its state and local partners—to gain a better understanding of their needs, according to the Director of I&A's Program and Performance Management Division. In addition, I&A has defined how it expects the State and Local Program Office to support fusion centers (through the roles and responsibilities originally envisioned for the Joint Fusion Center Program Management Office).
However, I&A has not defined and documented the programs and activities that its other components—such as the Collections and Requirements Branch and the Production Management Division—will be held accountable for implementing and that collectively will ensure that I&A meets its state and local mission. In addition, I&A's current strategy addresses the role of the then-proposed Joint Fusion Center Program Management Office, but it generally does not provide information on the state and local programs and activities that I&A's components will be responsible for implementing. In its April 2009 report, HSI recommended that I&A divisions and branches create derivative plans that are linked to the strategy. Among other things, the derivative plans were to identify priority programs and activities, assign roles and responsibilities, and describe performance measures and incentives tied to performance. I&A leadership would then be responsible for ensuring that the divisions and branches implement their plans. However, I&A has decided not to develop these more specific derivative component plans or a plan or road map for how it will meet its state and local mission. As a result, I&A cannot demonstrate to state and local customers, Congress, and other stakeholders that it has assessed and given funding priority to those programs and activities that it has determined are most effective for sharing information with state and local partners. According to the Director of I&A's Program and Performance Management Division, more detailed plans are not needed because the organizational components know which parts of the strategy—and related state and local programs and activities—they are responsible for completing. However, relying on these components to know their roles and responsibilities without clearly delegating, documenting, and tracking implementation does not provide a transparent and reliable system of accountability for ensuring that the state and local mission is achieved. I&A officials said that the State and Local Program Office is to guide I&A's efforts to share information with state and local partners. However, they could not explain, for example, how this office would operate in relation to the other components or what authority or leverage it would have over these components' competing programs, activities, and investment decisions to ensure the state and local mission is achieved. Our prior work has found that successful organizations clearly articulate the programs and activities that are needed to achieve specified missions or results, as well as the organization's priorities—including investment priorities—among these programs and activities. Defining and documenting how I&A plans to meet its state and local information-sharing mission—including programs, activities, and priorities—could help I&A provide transparency and accountability to Congress, its state and local partners, and other stakeholders.

I&A has not defined what state and local information-sharing results it expects to achieve from its program investments or the measures it will use to track the progress it is making in achieving these results. Currently, I&A has four performance measures related to its efforts to share information with state and local partners. All four of these measures provide descriptive information regarding activities and services that I&A provides to these partners.
For example, they show the percentage of fusion centers that are staffed with I&A personnel and count the total number of state and local requests for support, as shown in table 2 below. However, none of these measures allows I&A to demonstrate and report on the actual results, effects, or impacts of its programs and activities or the overall progress it is making in meeting the needs of its partners. For example, the measure on the percentage of fusion centers staffed with I&A personnel provides useful information on I&A efforts to deploy analysts to the field, but it does not provide information related to the effectiveness of the I&A personnel or the value they provide to their customers, such as the extent to which these personnel enhance information sharing, analytic capabilities, and operational support. Developing such measures could help I&A support program and funding decisions.

Our past work and the experience of leading organizations have demonstrated that measuring performance allows organizations to track the progress they are making toward intended results—including the goals, objectives, and targets they expect to achieve—and gives managers critical information on which to base decisions for improving their programs. They also show that adhering to results-oriented principles provides a means to strengthen program performance. These principles include defining the results to be achieved and the measures that will be used to track progress toward these results. Our prior work also indicates that agencies that are successful in measuring performance strive to establish goals and measures at all levels of an agency so that decision makers have the complete information they need for measuring and managing the agency's performance.

I&A recognizes that it needs to develop more results-oriented measures to assess the effectiveness of its state and local information-sharing efforts. I&A intends to add performance measures to its strategic plan later this year, according to the Director of I&A's Program and Performance Management Division. The official noted, however, that these new measures will initially provide descriptive information about I&A's state and local programs and activities. The official said that I&A would develop measures that allow it to evaluate the extent to which these programs and activities are achieving their intended results at a later date, but he could not provide any details or documentation on next steps or time frames. The official explained that developing such measures for information sharing and obtaining the related data needed to track performance is a challenge not only for I&A but for other federal agencies as well. Standard program management principles note that time frames or milestones should typically be incorporated as part of a road map to achieve a specific desired outcome or result. We also have recognized and reported that it is difficult to develop performance measures that show how certain information-sharing efforts have affected homeland security. Nevertheless, we have recommended that agencies take steps toward establishing such measures to hold them accountable for the investments they make.
We also recognize that agencies may need to evolve from relatively easier process measures that, for example, count the number of products provided, to more meaningful measures that weigh customer satisfaction with the timeliness, usefulness, and accuracy of the information provided, until the agencies can establish outcome measures that determine what difference the information made to state or local homeland security efforts. I&A may have the opportunity to develop measures that would provide more meaningful information by using the results of its new customer satisfaction survey. For example, I&A is gathering feedback on, among other things, how timely and responsive state and local customers find the information that I&A provides to them. I&A could possibly use this feedback to set annual targets for the level of timeliness and responsiveness that it would like to achieve and use the survey results to track progress toward these targets over time. I&A could in turn use these performance data to decide on future improvements. Since I&A was just beginning to collect and analyze the results of its customer satisfaction survey, it was too soon to tell whether the survey results could produce the data on which to base performance measures. Nevertheless, establishing plans and time frames for developing ways to measure how I&A's information-sharing efforts have affected homeland security could help I&A, the department, and Congress monitor and measure the extent to which I&A's state and local information-sharing efforts are achieving their intended results, make needed improvements, and inform funding decisions.

I&A has evolved in the more than 5 years since it was created and has developed more effective relationships with its state and local partners, especially through its support to fusion centers. It has also developed a variety of products and services to support these partners. I&A has opportunities, however, to build on these relationships, leverage these efforts, and demonstrate to Congress and these partners that it is meeting its statutory mission to share information with these partners to help protect the homeland. For example, working with states to establish milestones for identifying each state's information needs and identifying and working to resolve any barriers to completing this process could help hold I&A accountable for the timely completion of this process, which is an important step in supporting the development of future I&A products. Periodically informing state and local partners of how I&A analyzed the feedback they provided and what actions I&A took in response to this feedback and its analyses could help strengthen I&A's working relationships with these partners and encourage them to continue to provide feedback, which could ultimately make I&A's products and services more useful. Defining and documenting the specific programs and activities I&A's components and divisions will be held responsible for implementing so that I&A collectively can meet its state and local mission could help to establish clear direction and accountability. Finally, committing to plans and time frames for developing outcome-based performance measures that gauge the information-sharing results and impacts of I&A's state and local efforts and how these efforts have affected homeland security could help I&A and Congress establish accountability for the funding provided.
By taking all of these steps, I&A could potentially increase the usefulness of its products and services, the effectiveness of its investments, and the organization's accountability to Congress, key stakeholders, and the public for sharing needed homeland security information with state and local partners.

To help I&A strengthen its efforts to share information with state and local partners, we recommend that the Secretary of Homeland Security direct the Under Secretary for I&A to take the following four actions:

Work with states to establish milestones for the timely completion of efforts to identify state information needs and identify and work to resolve any barriers to this timely completion.

Periodically report to state and local information-sharing partners on the results of I&A's analysis of the products and services feedback these partners provide and the actions I&A took in response to this feedback.

Define and document the programs and activities its divisions and branches will be expected to implement in order for I&A to collectively meet its state and local information-sharing mission and provide accountability and transparency over its efforts.

Establish plans and time frames for developing performance measures that gauge the results that I&A's information-sharing efforts have achieved and how they have enhanced homeland security.

On August 6, 2010, we provided a draft of the sensitive version of this report to DHS for review and comment. In its written comments, DHS stated that the department, particularly I&A, concurred with all four recommendations and discussed efforts planned or underway to address them. Specifically, DHS agreed with our first recommendation related to the need for I&A to work with states to establish milestones for the timely completion of efforts to identify state information needs and identify and work to resolve any barriers to this timely completion. According to DHS, I&A has established internal milestones for the timely completion of this process. DHS noted, however, that while I&A advises and assists states with the development of their information needs, ultimately those outcomes are owned and controlled by the states themselves and, thus, I&A is unable to impose its milestones on them. Nevertheless, DHS noted that I&A is confident that it can work with states to develop mutually agreed-upon milestones for completing this process and will report progress toward meeting these milestones on a regular basis. Working with states to develop such milestones and reporting on progress will address the intent of our recommendation.

DHS also agreed with our second recommendation that I&A periodically report to state and local partners on the results of I&A's analysis of the products and services feedback these partners provide and the actions I&A took in response to this feedback. DHS noted that I&A plans to regularly report the results of its partners' products and services feedback, as well as the actions I&A took in response to that feedback, to these partners, DHS management, and Congress. In September 2010, after providing written comments, I&A officials informed us that they had taken steps to report the results of feedback analysis to state and local customers. Specifically, during the summer of 2010, I&A provided briefings on the value of this feedback during two stakeholder forums, according to an official from I&A's Customer Assurance Branch.
The official added that I&A plans to continue using stakeholder forums—such as conferences and meetings of fusion center directors—to report on I&A's assessment of state and local feedback and its use in refining I&A products. However, I&A had not developed plans for reporting the results of its feedback analysis moving forward—including time frames and level of detail—and developing such plans would address the intent of this recommendation.

Further, DHS agreed with our third recommendation that I&A define and document the programs and activities its divisions and branches will be expected to implement in order for I&A to collectively meet its state and local information-sharing mission and provide accountability and transparency over its efforts. DHS noted that I&A was in the process of developing a new strategic plan that will include strategic-level measures and implementation plans. DHS added that the plan will establish organizational strategic objectives that I&A—through its divisions and branches—will be expected to achieve, to include information sharing with state and local entities, and will provide the measures by which its success will be gauged. Developing a plan that defines and documents how I&A plans to meet its state and local information-sharing mission—including programs, activities, and priorities—will meet the intent of this recommendation.

Finally, DHS agreed with our fourth recommendation that I&A establish plans and time frames for developing performance measures that gauge the results that I&A's information-sharing efforts have achieved and how they have enhanced homeland security. DHS noted that I&A is in the process of developing a new strategic implementation plan that will include strategic-level measures. DHS added that the plan will provide a basis for gauging, among other things, the results of I&A's information-sharing efforts. We support I&A's intention to develop additional performance measures. However, to fully address the intent of our recommendation, I&A should commit to plans and time frames for developing outcome-based performance measures that gauge the information-sharing results and impacts of I&A's state and local efforts and how these efforts have affected homeland security.

The full text of DHS's written comments is reprinted in appendix VI. DHS also provided technical comments, which we considered and incorporated in this report where appropriate. We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report were Eric Erdman, Assistant Director; David Alexander; Adam Couvillion; Elizabeth Curda; Geoffrey Hamilton; Gary Malavenda; and Linda Miller.

Table 3 presents data on how the Office of Intelligence and Analysis (I&A) categorized the voluntary feedback responses over the past 2 annual periods for which data were gathered. Table 4 below describes the categories of Single Point of Service (SPS) State and Local Support Requests (SLSRs) received and tracked by the Office of Intelligence and Analysis (I&A).
I&A has developed a set of priorities for its state and local customers and External Operations Division—shown in table 5—which it reports it uses as the basis for determining performance measures and for quantifying data collected through the SLSR acceptance and response process. In order to measure its progress toward meeting these priorities, I&A has developed a set of measures, goals, key performance indicators, and metrics for the SPS program, as shown in table 6. The results of data gathered for many of these metrics are presented in tables 7 through 10.

SLSR submissions have increased since the SPS was initiated in May 2008; submissions leveled off in the last two quarters of fiscal year 2009 but increased again in the first quarter of fiscal year 2010, as shown in table 7. I&A attributes the surge in Quarter 2 to SPS marketing at the 2009 National Fusion Center Conference. Regarding the Quarter 3 decline, I&A cited several possible factors, such as credibility concerns by customers following the release of a report on "right wing extremism" that drew criticism from Congress and the media, the extension of production time frames due to a more rigorous report review process, or a natural decline. I&A did not address the subsequent decline in Quarter 4, though it did indicate that the final month of the quarter saw a rebound in submissions due to an outreach program conducted by SPS leadership that month. Regarding the first quarter of fiscal year 2010, I&A attributed the increase to a surge in administrative requests, as it began tracking all administrative-type SLSRs regardless of their significance. Thus, this growth is at least partially attributable to enhanced data collection rather than to increased demand.

As shown in table 8, a majority of SLSRs are submitted from states with embedded I&A intelligence officers at fusion centers, and many of the requests come directly from these officers. In addition, California, Texas, Ohio, and North Carolina—all states with deployed I&A intelligence officers—have consistently been among the states with the highest number of SLSRs. As shown in table 9, the average number of days to completion steadily increased through the first three quarters of fiscal year 2009 but declined in the fourth quarter, and this rate held steady in the first quarter of fiscal year 2010. As shown in table 10, the number of SLSRs that remained open at the end of each quarter has steadily increased. I&A attributes much of this increase to the increased number of Homeland Intelligence Report Production SLSRs, which have an estimated 90-day production time line.

In its first quarter fiscal year 2010 report, I&A reported that it has a number of initiatives in place to improve SLSR response times, which include the following:

Developing an I&A policy to define the roles and responsibilities of the stakeholders.

Updating the performance measures to better reflect the timeliness of workflow processes throughout the SLSR life cycle.

Introducing a standardized request form to ensure customer needs are clearly articulated before an SLSR is submitted.

Assigning individuals to closely communicate and work with I&A branches to reduce the number of open and overdue SLSRs.
In support of the Office of Intelligence and Analysis's (I&A) objective to strengthen the national network of fusion centers, the Department of Homeland Security's (DHS) National Preparedness Directorate and the Department of Justice's (DOJ) Bureau of Justice Assistance—in coordination with the Office of the Director of National Intelligence, the Office of the Program Manager for the Information Sharing Environment, the Federal Bureau of Investigation (FBI), and representatives from the state and local community—partnered in 2007 to develop the Fusion Process Technical Assistance Program. As part of this program, the DHS/DOJ partnership delivers and facilitates a number of publications, training courses, workshops, and other initiatives to fusion centers. Examples of these programs include training on fusion process orientation and development, state and local antiterrorism training workshops, and regional fusion center workshops. I&A's role in this partnership involves, among other things, serving as the subject matter expert to support program development, reviewing and approving materials developed in support of the program, and having its intelligence officers at fusion centers serve as primary contacts for coordination of service deliveries. As of the end of 2009, this program had delivered 184 programs and services to fusion centers and their staff.

One form of technical assistance comes through direct outreach efforts with fusion centers. One example is the National Fusion Center Conference, which takes place annually and provides fusion centers with opportunities to learn about key issues, such as funding and sustainment, achieving baseline capabilities, and privacy and civil liberties protection, among many other issues. These agencies also jointly support regional fusion center conferences and other training programs. In addition, I&A—along with the Federal Emergency Management Agency (FEMA)—has jointly sponsored regional FEMA workshops with the intent of fostering understanding between regional FEMA and fusion center staff regarding their missions, information-sharing systems, and available intelligence products.

Another key area of technical assistance provided to fusion centers involves the development of privacy policies. DHS's Offices of Privacy and Civil Rights and Civil Liberties are working in partnership with the Bureau of Justice Assistance, the Global Justice Information Sharing Initiative, and the Office of the Program Manager for the Information Sharing Environment to assist fusion centers in developing privacy policies with the intent of safeguarding privacy and civil liberties without inhibiting information sharing. In 2007 and 2009, these entities provided Privacy Policy Technical Assistance sessions to fusion centers. As of July 2010, 63 fusion centers had received the Privacy Policy Technical Assistance sessions. In addition, in response to fusion center input, these entities have developed a session called "Discussion on Development, Review, and Dissemination of Fusion Center Products," which focuses on the need for a privacy policy, its implementation, and how to avoid difficulties when developing intelligence products. This partnership has also begun to collect and review the privacy policies of fusion centers. As of July 2010, DHS's Office of Privacy had received a total of 63 draft privacy policies for review, with 11 fusion centers having completely satisfied the privacy policy review and development process.
I&A also supports information sharing with its state and local partners through its involvement with the Interagency Threat Assessment and Coordination Group (ITACG), a group of state, local, tribal, and federal homeland security, law enforcement, and intelligence officers at the National Counterterrorism Center that facilitates the development, production, and dissemination of federally coordinated terrorism-related intelligence reports through existing FBI and DHS channels. The state, local, and tribal analysts in ITACG review these federal reports and provide counsel and subject matter expertise to the entities developing the reports in order to better meet the information needs of state, local, tribal, and private sector entities. Section 521(a) of the 9/11 Commission Act required the Director of National Intelligence, through the Program Manager for the Information Sharing Environment and in coordination with DHS, to coordinate and oversee the creation of ITACG. I&A supports ITACG by chairing and providing other membership on the ITACG Advisory Council, which is tasked with setting policy and developing processes for the integration, analysis, and dissemination of federally coordinated information. The Advisory Council's membership is at least 50 percent state and local. I&A also funds the costs of detailing state, local, and tribal analysts to ITACG. Regarding the ITACG state, local, and tribal detailees' contributions to federal intelligence reports, the Program Manager for the Information Sharing Environment reports that as of November 2009, these detailees had participated in the production of 214 intelligence products. The ITACG detailees have also participated in the development of the Roll Call Release, discussed earlier in this report, in coordination with I&A and FBI. The Program Manager for the Information Sharing Environment reported that from December 2008 (when this product line was created) through November 2009, 26 Roll Call Release documents were published. In addition, the detailees work with the National Counterterrorism Center to develop a daily, Secret-level digest of intelligence that is of interest to state and local entities.

DHS/I&A contributed to the development of the Baseline Capabilities for State and Major Urban Area Fusion Centers, published by DOJ's Global Justice Information Sharing Initiative in September 2008. I&A officials have stated that one of their key responsibilities—particularly for those officers at fusion centers—is to help ensure that fusion centers are taking appropriate steps to meet these baseline capabilities. At the 2010 National Fusion Center Conference, it was announced that I&A and its federal partners had developed an assessment tool that fusion centers can use to determine how they measure against the baseline capabilities and where gaps in meeting the capabilities exist, so that resources can be most effectively targeted. This document stems from the previously developed Fusion Center Guidelines, published by the Global Justice Information Sharing Initiative in August 2006.

In August 2009, DHS entered into an agreement with DOD that grants select fusion center personnel access to DOD's classified information network, the Secure Internet Protocol Router Network. Under this arrangement, properly cleared fusion center officials would be able to access specific terrorism-related information through the Homeland Secure Data Network system.
The Secretary of DHS cited this as "an important step forward in ensuring that first preventers have a complete and accurate picture of terrorism threats."

Section 512 of the 9/11 Commission Act directed DHS to create a Homeland Security Information Sharing Fellows Program. This program would detail state, local, and tribal law enforcement officers and intelligence analysts to DHS in order to (1) promote information sharing between DHS and state, local, and tribal officers and analysts and (2) assist DHS analysts in preparing and disseminating products that are tailored to state, local, and tribal law enforcement officers and intelligence analysts. I&A officials have stated that as of June 2010, there were two state and local fellows in-house, with a third to join by the end of the summer. I&A plans to have fellows serve on 90-day rotations, working with I&A's analytic divisions on product development.

In addition, I&A has deployed Reports Officers to a number of border states (though not necessarily fusion centers), in accordance with DHS priorities to focus on analysis of border security issues. Reports Officers serve in key state and local partner locations (as well as DHS headquarters and select DHS components) to enhance information sharing and integration of information acquisition and reporting efforts. As of July 2010, I&A had deployed Reports Officers to six locations in Southwest Border states, as well as one additional southern state. DHS's Office of the Chief Security Officer grants security clearances to state, local, and tribal personnel.

Table 11 lists the goals and objectives from the Department of Homeland Security (DHS) Office of Intelligence and Analysis (I&A) Strategy.

Establishing goals and measuring performance are essential to successful results-oriented management practices. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. Our body of work on results-oriented management practices has identified key attributes of success. This work indicates that agencies that are successful in achieving goals strive to establish practices and performance systems at all levels of the agency that include the key attributes described in this appendix.

Addresses important dimensions of program performance and balances competing priorities. Performance goals and measures that successfully address important and varied aspects of program performance are key aspects of a results orientation. Federal programs are designed and implemented in dynamic environments where competing program priorities and stakeholders' needs must be balanced continuously and new needs must be addressed. As a result, programs are often forced to strike difficult balances among priorities that reflect competing demands, such as timeliness, service quality, customer satisfaction, program cost, and other stakeholder concerns. Sets of performance goals and measures can provide a balanced perspective of the intended performance of a program's multiple priorities.

Uses intermediate goals and measures to show progress or contribution to intended results. Intermediate goals and measures, such as outputs or intermediate outcomes, can be used to show progress or contribution to intended results. For instance, when it may take years before an agency sees the results of its programs, intermediate goals and measures can provide information on interim results.
Also, when program results could be influenced by external factors, agencies can use intermediate goals and measures to identify the programs' discrete contribution to a specific result.

Shows baseline and trend data for past performance. With baseline and trend data, performance plans can provide a context for drawing conclusions about whether performance goals are reasonable and appropriate. Decision makers can use such information to gauge how a program's anticipated performance level compares with improvements or declines in past performance.

Identifies projected target levels of performance for multiyear goals. Where appropriate, an agency can convey what it expects to achieve in the long term by including multiyear performance goals in its performance plan. Such information can provide congressional and other decision makers with an indication of the incremental progress the agency expects to make in achieving results.

Aligns goals and measures with agency and departmentwide goals. Performance goals and measures should align with an agency's long-term strategic goals and mission as well as with higher-level departmentwide priorities, with the relationship clearly articulated. Such linkage is important in ensuring that agency efforts are properly aligned with goals (and thus contribute to their accomplishment) and in assessing progress toward achieving these goals. Goals and measures also should cascade from the corporate level of the agency to the operational level to provide managers and staff with a road map that shows how their day-to-day activities contribute to achieving agency and departmentwide performance goals. In addition, measures used at the lowest levels of the agency to manage specific programs should relate directly to unit results and upward to the corporate level of the agency.

Assigns accountability for achieving results. We have previously reported that the single most important element of successful management improvement initiatives is the demonstrated commitment of top leaders in developing and directing reform efforts. Top leadership must play a critical role in setting results-oriented goals and quantifiable measures that are cascaded to lower organizational levels and used to develop and reinforce accountability for achieving results, maintain focus on the most pressing issues confronting the organization, and sustain improvement programs and performance, especially during times of leadership transition. One way to reinforce accountability is through the use of employee performance appraisals that reflect an organization's goals.

Provides a comprehensive view of agency performance. For each key business line, performance goals and measures should provide a comprehensive view of performance, including customers' and stakeholders' priorities. Goals and measures should address key performance dimensions such as (1) factors that drive organizational performance, including financial, customer, and internal business processes and workforce learning and growth, and (2) aspects of customer satisfaction, including timeliness, quality, quantity, and cost of services provided. Doing so can allow managers and other stakeholders to assess accomplishments, make decisions, realign processes, and assign accountability without having an excess of data that could obscure rather than clarify performance issues.

Links resource needs to performance.
Performance management can be promoted when performance information is used to (1) identify the resources (e.g., human capital, information technology, and funding) needed to achieve performance goals, (2) measure costs, and (3) inform budget decisions. When resource allocation decisions are linked to performance, decision makers can gain a better understanding of the potential effect of budget increases and decreases on results.

Provides contextual information. Performance reporting systems should include information to help clarify aspects of performance that are difficult to quantify or to provide explanatory information, such as factors that were within or outside the control of the agency. This information is critical to identifying and understanding the factors that contributed to a particular result and can help officials measure, assess, and evaluate the significance of underlying factors that may affect reported performance. In addition, this information can provide context for decision makers to establish funding priorities, adjust performance targets, and assess means and strategies for accomplishing an organization's goals and objectives.
Information sharing among federal, state, and local officials is crucial for preventing acts of terrorism on U.S. soil. The Department of Homeland Security (DHS), through its Office of Intelligence and Analysis (I&A), has lead federal responsibility for such information sharing. GAO was asked to assess (1) the actions I&A has taken to enhance the usefulness of intelligence products it provides to state and local partners, (2) the other services I&A provides to these partners, and (3) the extent to which I&A has defined how it intends to share information with these partners. To conduct this work, GAO reviewed relevant statutes, strategies, best practices, and agency documents; contacted a nongeneralizable sample of 10 fusion centers—where states collaborate with federal agencies to improve information sharing—based on geographic location and other factors; and interviewed I&A officials. This is a public version of a sensitive report that GAO issued in September 2010. Information DHS deemed sensitive has been redacted.

To enhance the usefulness of intelligence products it provides to state and local partners, I&A has initiatives underway to identify these partners' information needs and obtain feedback on the products, but strengthening these efforts could support the development of future products. As of August 2010, I&A had finalized information needs, which are owned and controlled by the states, for 9 of the 50 states. I&A was working with the remaining states to identify their needs, but it had not established mutually agreed-upon milestones for completing this effort, in accordance with program management principles. Working with states to establish such milestones and addressing any barriers to identifying their needs could better assist states in the timely completion of this process. In addition, I&A has begun issuing a new customer feedback survey to recipients of its products and plans to begin analyzing this feedback to determine the value of the products, but it has not developed plans to report the results of its analyses to state and local partners. Reporting the results to these partners, along with the actions it has taken in response, could help I&A demonstrate that the feedback is important and makes a difference, which could encourage state and local partners to provide more feedback and ultimately make I&A's products and services more useful.

In addition to intelligence products, I&A provides a number of other services to its state and local partners, primarily through fusion centers, that have generally been well received by the center officials GAO contacted. For example, I&A has deployed more than 60 intelligence officers to fusion centers nationwide to assist state and local partners in areas such as obtaining relevant intelligence products and leveraging DHS capabilities to support their homeland security missions. I&A also facilitates access to information-sharing networks for disseminating classified and unclassified information, provides training directly to center personnel, and operates a 24-hour service to respond to state and local requests for information and other support.

Historically, I&A has focused its state and local efforts on addressing statutory requirements and responding to I&A leadership priorities, but it has not yet defined how it plans to meet its state and local information-sharing mission by identifying and documenting the specific programs and activities that are most important for executing this mission.
Best practices show that clearly identifying priorities among programs and activities is important for implementing programs and managing results. Further, I&A's current performance measures do not allow I&A to demonstrate the expected outcomes and effectiveness of programs and activities that support state and local partners, as called for in program management principles. I&A officials said they are planning to develop such measures, but had not established time frames for doing so. Defining and documenting how I&A plans to meet its state and local information-sharing mission and establishing time frames for developing additional performance measures could better position I&A to make resource decisions and provide transparency and accountability over its efforts.

GAO recommends that I&A establish milestones for identifying the information needs of state and local partners, report to these partners on how I&A used feedback they provided to enhance intelligence products, identify and document priority programs and activities related to its state and local mission, and establish time frames for developing additional related performance measures. DHS agreed with these recommendations.
Recognizing the need to modernize its IT systems, FBI proposed a major technology upgrade plan to Congress in September 2000. FBI’s Information Technology Upgrade Project, which FBI subsequently renamed Trilogy, was FBI’s largest automated information systems initiative to date. Trilogy consisted of three parts: (1) the Information Presentation Component (IPC) to upgrade FBI’s computer hardware and software, (2) the Transportation Network Component (TNC) to upgrade FBI’s communication network, and (3) the User Application Component (UAC) to upgrade and consolidate FBI’s five most important investigative applications. To expedite the contracting process, FBI entered into an interagency agreement with GSA to support FBI’s use of the FEDSIM Millennia governmentwide acquisition contract for the implementation of Trilogy’s three functional components, IPC, TNC, and UAC. FEDSIM, serving as contracting agency, was to provide all contract administrative services necessary to support the task orders. Because the Trilogy project was so large, DOJ required FBI to use two contractors for the three Trilogy components.

FBI combined the IPC and TNC portions of Trilogy into one task order because both components involved physical infrastructure enhancements. IPC provided for new desktop computers, servers, and commercial off-the-shelf automation software, including Web browser and e-mail applications, to enhance usability for agents. TNC upgraded the complete communication infrastructure, including high-capacity wide-area and local-area networks, authorization security, and encryption of data transmission and storage. The IPC/TNC task order was awarded in May 2001 to DynCorp (now Computer Sciences Corporation (CSC)). The IPC/TNC upgrades would provide the physical infrastructure needed to run the applications developed under UAC, the third Trilogy component. The UAC task order was awarded in June 2001 to Science Applications International Corporation (SAIC). The goal of UAC was to replace FBI’s paper case files with electronic files and improve efficiency. The heart of the UAC portion became the development of the Virtual Case File (VCF) system to replace the obsolete Automated Case Support system, FBI’s primary investigative application that uploads and stores case files electronically.

The above two Trilogy contracts were awarded on a cost-plus-award-fee basis for labor charges, meaning that the contractor’s costs incurred are reimbursed and fees may be awarded to the contractor based on performance. The FAR states that cost-reimbursement type contracts may only be used if appropriate government surveillance during performance will provide reasonable assurance that efficient methods and effective cost controls are used. The aspects of these contracts related to the purchase of equipment were based on fixed-price arrangements, meaning that a set price for the equipment is agreed to up front. In addition to the two primary contracts discussed above, FBI awarded two additional contracts to assist with the technical oversight, monitoring, and integration of the two primary Trilogy contracts. The first of the two additional contracts was awarded in February 2001, also through GSA FEDSIM, to Mitretek for Systems Engineering and Technical Assistance (SETA) services.
Under the SETA contract, Mitretek was required to assist FBI with a wide array of tasks, including program and contract management, fiscal and budgetary oversight, cost estimating, and several other technical aspects of the Trilogy project. The second of the two additional contracts was awarded to SAIC for the integration of the three Trilogy components. In July 2004, the VCF was scaled back to the Initial Operating Capability, and the remaining deliverables were cancelled after (1) the initial deliverable was rejected by FBI and (2) the VCF was determined to be infeasible and cost prohibitive to implement as originally envisioned. After a 90-day limited pilot that ended in March 2005, the VCF was taken offline, and the pilot results were then to be analyzed by FBI to develop requirements for its new electronic information management system initiative. In August 2005, FBI released a solicitation for proposals to develop FBI’s new electronic information management system, referred to as Sentinel. The solicitation was sent to more than 40 eligible companies under a National Institutes of Health governmentwide acquisition contract. Similar to the VCF, the goal of Sentinel is to replace FBI’s legacy case management capabilities with an integrated, paperless file management and workflow system.

According to FBI policy, assets valued at $1,000 or more, as well as certain sensitive items (such as firearms, laptop computers, and central processing units) regardless of cost, are considered “accountable” assets and must be accounted for individually in FBI’s Property Management Application (PMA). PMA is an automated management system that allows FBI to track the cost, location, and history of its accountable assets. PMA includes a variety of data fields to identify each item, including the acquisition date, received date, acquisition cost, last inventory date, bar code number, serial number, cost center for the office where the item is located, description of the item, and other information. Ongoing deficiencies in FBI’s management of property have been identified by DOJ’s OIG and FBI’s independent financial statement auditor. In August 2002, the DOJ OIG issued a report that revealed significant problems with FBI’s management of laptop computers, including findings that FBI did not reconcile its property management data with purchase data from its accounting system, did not have an inventory record for accountable assets not in PMA that were lost or stolen, and could not verify whether the number of items purchased agreed with the number of items recorded in PMA. Additionally, in September 2004, the DOJ OIG reported on weaknesses in FBI’s controls over nonaccountable property at FBI’s Baltimore field office after an employee pleaded guilty to the theft and sale of FBI photography equipment. Annually, since fiscal year 1999, FBI’s independent financial statement auditors have identified internal control weaknesses in the area of property management. They specifically reported that FBI needed to improve its procedures related to the timely and accurate recording, reconciling, and reporting of property and equipment in PMA.

Internal control is a major part of managing any organization. As required by 31 U.S.C. 3512(c), (d), commonly referred to as the Federal Managers’ Financial Integrity Act of 1982, the Comptroller General issues standards for internal control in the federal government.
These standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. According to these standards, internal control comprises the plans, methods, and procedures used to meet missions, goals, and objectives. Internal control is the first line of defense in safeguarding assets and preventing and detecting fraud and errors. Internal control, which is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives and help ensure that actions are taken to address risks. Control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. They include a wide range of diverse activities, such as (1) establishing physical controls over vulnerable assets to reduce the risk of loss or unauthorized use and periodically counting and comparing such assets to control records; (2) ensuring that documentation and records are properly managed and maintained and that transactions are appropriately documented and readily available for examination; (3) assigning accountability for the custody and use of resources and records to help reduce the risk of errors, fraud, misuse, or unauthorized alteration; and (4) implementing management-level reviews at the functional level to ensure that appropriate control activities are being employed, such as reconciliations of summary information to supporting detail.

FBI’s review and approval process for Trilogy contractor invoices, which was carried out by a review team consisting of officials from FBI, GSA, and Mitretek, did not provide an adequate basis to verify that goods and services billed were actually received by FBI or that payments were for allowable costs. This occurred in part because responsibility for the review and approval of invoices was not clearly defined in the Mitretek contract and in the interagency agreements related to Trilogy project oversight. In addition, contractor invoices frequently lacked detailed information required by the contracts, as well as other information that would be needed to facilitate an adequate invoice review process. Despite this, invoices were paid without requests for the additional supporting documentation necessary to determine the validity of the charges. These weaknesses in FBI’s review and approval process made the agency highly vulnerable to payment of unallowable or questionable contractor costs.

While the review and approval process differed for each contractor and type of invoice charge, in general the process carried out by the review team lacked key procedures to reasonably ensure that goods and services billed were actually received by FBI or that the amounts billed and paid were for allowable costs. Internal control guidance requires agencies to establish controls that reasonably ensure, among other things, that funds, property, and other assets are safeguarded against waste, loss, unauthorized use, or misappropriation. Contractor invoices included costs for labor, including related overhead costs; travel; other direct costs (ODC); subcontractor labor; and purchased equipment.
Table 1 provides a summary of total payments made to Trilogy contractors for these categories, as well as total Trilogy costs in each category. Each member of the review team—which included personnel from FBI; GSA, the contracting agency; and Mitretek—was to perform some level of review of the invoices submitted by the contractors for payment. During the project, each of the review team members, at times, worked on-site with the contractors. As is discussed later, the specific roles of each party were not clearly defined, which limited the effectiveness of the invoice review and approval process. Figure 1 illustrates this invoice review and approval process. Our review disclosed serious gaps in the review process for each of the major categories of contractor costs, as follows.

Labor—According to GSA, it typically reviewed labor charges by looking for unusual or excessive hours worked or rates charged and recalculating some amounts to ensure mathematical accuracy. GSA also stated that its personnel generally compared average fully burdened labor rates (labor, overhead, fringe benefits, and general and administrative costs) charged to ceiling rates (maximums) established in the Trilogy contracts. However, GSA was not able to provide us with an explanation for, or evidence of, how it resolved clearly questionable labor charges we identified, including hours billed far in excess of a normal pay period. For example, we identified one individual who charged 371 hours for one 4-week period (an average of 93 hours per week) and 359 hours in the following 5-week period (an average of about 72 hours per week). There was no evidence that GSA had questioned whether these seemingly excessive hours were valid. GSA stated that these types of issues were usually resolved by telephone and that it therefore usually did not maintain any documentation of its inquiries. On-site members of the review team indicated that they generally knew the contractor employees working on the project and reviewed the hours billed for reasonableness. However, the review team did not have a systematic process in place to help ensure that individuals listed on invoices had actually worked on Trilogy for the number of hours being billed or that the job classifications and related billing rates were appropriate. In addition, there was no documented assessment of whether the overall hours being billed for a particular activity were in line with expectations.

Subcontractor Labor—The review team paid contractor invoices for subcontractor labor without any attempt to assess the validity of the charges. The GSA official responsible for paying the invoices stated that the review team relied on the contractors to properly bill for the costs related to their subcontractors and to validate the subcontractor invoices. However, the review team had no process in place to assess whether or not the contractors were properly validating their subcontractor labor charges or to assess the allowability of those charges. In addition, we found that CSC, which billed the bulk (i.e., about $116 million) of the subcontractor labor costs, did not always have sufficient documentation of subcontractor charges to enable CSC, or anyone else, to perform any assessment of the allowability of those costs.
For example, the only supporting documentation CSC could provide us for about $2 million in subcontractor labor charges we selected for review were subcontractor invoices that lacked some of the basic information needed to assess the labor costs, such as the names of the subcontractor employees, hours billed, or individual labor rates.

Travel—These charges were reviewed differently by the review team for SAIC and CSC invoices. For SAIC travel, GSA told us it compared invoiced amounts to travel authorizations and verified the per diem and lodging rates in the authorizations against those prescribed under the Federal Travel Regulation. However, travel authorizations were not always submitted and approved before travel occurred and in some cases were based on actual amounts. The review team told us that, in a few instances over 4 years when amounts billed were higher than expected, they reviewed SAIC travel vouchers or receipts to verify the amounts charged on the travel invoices. However, there was no systematic process to review travel costs billed to the Trilogy project. For CSC travel, because CSC’s travel authorizations did not include details by employee or the estimated cost for each trip and frequently covered several trips, the GSA official who paid the invoices told us she relied on members of the review team that worked on-site to review the travel invoices. These on-site review team officials indicated that their review process was based on their general understanding of who was traveling. However, we determined that no one on the review team obtained travel vouchers or receipts to verify that amounts billed by CSC were a necessary and proper charge to the Trilogy project and were reasonable based on the location and length of travel required.

Other Direct Costs (ODC)—These charges were paid without validation of the actual amounts included in the invoices. The review team relied on contractors to obtain purchase orders for ODC charges. For SAIC ODC invoices, the review team generally tracked actual charges billed on invoices against purchase order amounts. However, there was no review of receipts or other documentation to validate the actual charges on invoices. CSC ODC invoices were paid without matching the charges to a purchase order or documentation of the actual cost incurred. Therefore, the review team had no basis for confidence that CSC ODC charges were approved ahead of time or appropriately billed. CSC ODC charges also included subcontractor ODC. We asked CSC for supporting documentation for selected subcontractor ODC and found that CSC’s only support was subcontractor invoices that included only a brief description of the nature of the charge and the amount. No supporting receipts or other documentation necessary to verify the charges was provided. For example, CSC billed FBI for ODC of $456,211 on an invoice submitted in November 2003. The only description on the invoice for these charges was “other direct costs.” We requested from CSC any documentation it had in its files to support this charge from its subcontractor, CACI Inc. - Federal (CACI). CSC was able to provide an invoice with one line entitled “facilities/materials” and a spreadsheet with a general summary of the charges. Further, the e-mail exchange presented in figure 2 shows that CSC recognized that it did not have enough detail to review the ODC charge, but approved the invoice anyway.
As the exchange shows, the final entry is, “It’s not what we asked for but at this point it doesn’t really matter. Approve it.”

Equipment—Charges for equipment purchased by contractors and billed to FBI were reviewed merely by tracking the total cost of equipment invoices to ensure that the total amount did not exceed the approved amount on purchase orders. However, none of GSA, FBI, or Mitretek performed procedures to ensure that individual equipment items billed by the contractors were actually received before payment. Discussions with the contractors revealed that this was a high-risk area because some of the invoices they submitted were for equipment that had not yet been delivered to FBI. The review team approved and paid these invoices without question. In addition, FBI purchased some IPC/TNC equipment directly from vendors and delivered the equipment to contractor locations, but did not have a mechanism in place to physically verify receipt of that equipment at FBI sites before paying the related invoices. There was also no subsequent verification by the review team that all equipment purchased through contractors and vendors was ultimately received by FBI.

The insufficient invoice review and approval process was at least in part the result of a lack of clarity in the interagency agreement between FBI and GSA FEDSIM, as well as in FBI’s oversight contract with Mitretek. We have identified the management of interagency contracting as a high-risk area, in part because it is not always clear with whom the responsibility lies for critical management functions in the interagency contracting process, including contract oversight. The lack of clarity in roles and responsibilities was evident in our interviews with the review team, where each party indicated that it believed another party was responsible for a more detailed review. While contract management and oversight teams were identified in the interagency agreements, key roles and responsibilities for the review and approval of invoices were not clearly defined. For example, the terms and conditions of the interagency agreement with GSA only vaguely described GSA’s role in contract administration; the agreement did not specify the invoice review and approval steps to be performed. Likewise, the Mitretek contract provided a general description of its oversight duties but did not specifically mention its responsibilities related to the invoice review and approval process. We did note, however, that FBI did not approve an invoice for payment until after it was notified by Mitretek that it had reviewed the invoice. Based on our discussions with the review team, Mitretek would review its own invoices before sending them forward to FBI for payment approval.

The failure to establish an effective review process was compounded by the fact that not all invoices provided the detailed information required by the contracts and other information that would be needed to perform adequate reviews. Trilogy contractors were required to comply with various invoicing provisions of the FAR and the Trilogy contracts, including requirements to provide labor and various overhead rates, travel costs by trip, transaction detail for ODC, and purchase orders for equipment purchases. However, we found that the contractors, particularly CSC, often did not meet these requirements.
For example:
• CSC labor invoices did not include information related to individual labor rates or indicate which overhead rates were applicable to each employee—information needed to verify mathematical accuracy and to determine that the components of the labor charges were valid.
• CSC invoices provided a summary of travel charges by category (airfare, lodging, etc.) but did not provide required information related to an individual traveler’s trip costs. The travel invoices also did not provide cost detail by travel authorization number. Therefore, there was no way to determine that the trips billed were approved in advance or that costs incurred were proper and reasonable based on the location and length of travel.
• CSC and SAIC invoices for ODC provided a summary of charges by category (shipping, office supplies, etc.); however, CSC did not provide required cost detail by transaction. In some cases, the category of charges was not even identified. For example, as shown in figure 3, within the ODC invoice, a subcategory entitled “other direct costs” made up $1.907 million of the invoice’s $1.951 million current billing total. No additional information was provided in the invoice to explain what made up these “other direct costs.”
• For purchased equipment, CSC invoices included a summary sheet—indicating the total price billed, a brief description of items purchased, and the quantity of each item purchased—and a copy of the related “Bill of Material” (BOM). However, they did not individually identify each asset being billed by bar code, serial number, or some other method that would allow verification of assets billed against assets received. SAIC invoices also lacked the detailed information necessary to individually identify assets. This severely impeded FBI’s ability to determine whether it had actually received the assets included on invoices and to subsequently track individual accountable assets on an item-by-item basis.

We also found that Mitretek, a member of FBI’s review team, submitted invoices that did not include detailed information needed to perform adequate reviews. For example, Mitretek’s invoices did not include individual labor rates needed to verify rates charged against salary information or overhead rates needed to recalculate labor costs. As previously noted, Mitretek reviewed its own invoices before sending them forward to FBI for payment approval. Even though contractor invoices, particularly those from CSC, frequently lacked key information needed to review charges, we found through inquiries with the review team and the contractors that invoices were generally paid without requesting additional supporting documentation.

Because of the lack of fundamental internal controls over the process used to pay Trilogy invoices, FBI was highly vulnerable to payment of unallowable contractor charges. In an attempt to determine the validity of FBI’s payments, we used forensic auditing techniques, including data mining and document analysis, to select certain contractor costs and requested supporting documentation from the contractors. We identified about $10.1 million of questionable contractor costs paid by FBI. These included payments for first-class travel and other excessive airfare costs, incorrect billings for overtime hours, potentially excessive labor rates, and other questionable subcontractor costs. The following sections provide additional information on the payments for questionable costs we found.
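The data mining referred to above can be illustrated with a small example. The sketch below is hypothetical: the field names, thresholds, and sample records are assumptions for illustration rather than the actual audit tooling. It shows the kind of automated screen that flags two of the red flags discussed in this report, labor hours far in excess of a normal pay period and airfare well above a benchmark coach fare.

```python
# Hypothetical sketch of simple data-mining screens over invoice detail.
# Thresholds, field layouts, and records are illustrative assumptions only.

LABOR_HOURS_CEILING_PER_WEEK = 60   # weekly average above this gets flagged
AIRFARE_EXCESS_RATIO = 1.5          # fares 50% above the coach benchmark get flagged

labor_charges = [
    # (employee, hours_billed, weeks_in_period)
    ("Employee A", 371, 4),   # about 93 hours/week, flagged
    ("Employee B", 160, 4),   # 40 hours/week, passes
]

airfare_charges = [
    # (trip, fare_billed, benchmark_coach_fare)
    ("PVD-SFO round trip", 2159.00, 1119.00),   # flagged
    ("DCA-BOS round trip", 250.00, 230.00),     # passes
]

def screen_labor(charges):
    """Return labor lines whose average weekly hours exceed the ceiling."""
    return [(emp, hours, hours / weeks)
            for emp, hours, weeks in charges
            if hours / weeks > LABOR_HOURS_CEILING_PER_WEEK]

def screen_airfare(charges):
    """Return airfare lines billed well above the benchmark coach fare."""
    return [(trip, fare, fare - benchmark)
            for trip, fare, benchmark in charges
            if fare > AIRFARE_EXCESS_RATIO * benchmark]

for emp, hours, avg in screen_labor(labor_charges):
    print(f"Question labor: {emp} billed {hours} hours (about {avg:.0f}/week)")
for trip, fare, excess in screen_airfare(airfare_charges):
    print(f"Question airfare: {trip} at ${fare:,.2f} (${excess:,.2f} over benchmark)")
```

Flagged items are only candidates for follow-up; as the sections below describe, each still has to be tested against supporting documentation such as time sheets, travel authorizations, and receipts.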
Given FBI’s poor control environment and the fact that we reviewed only selected FBI payments to Trilogy contractors that we identified with data mining and other forensic auditing techniques, other payments for questionable costs may have been made that we have not identified. During our review of CSC’s supporting documentation for selected travel charges, we found 19 first-class airline tickets, purchased at a total cost of $20,025, many of which exceeded the basic coach-class fares by significant margins. For example, in one case a traveler flew first class round trip between Providence, Rhode Island, and San Francisco, California, for $2,159. We estimated that a coach-class ticket for this same trip would have cost $1,119. In addition, 1 day after returning to Providence, this traveler flew back to San Francisco. The documentation provided by CSC did not explain or justify this first-class travel or unusual travel itinerary. The CSC contract called for airfare to be reimbursed to the extent allowable pursuant to the Joint Travel Regulations (JTR), which state that travelers must use basic economy or coach class unless the use of first-class travel is properly authorized and justified. Because the documentation provided by CSC for the 19 first-class tickets costing $20,025 that we identified did not contain authorizations or justifications, the cost of this travel in excess of a coach-class ticket is potentially unallowable. Table 2 provides specific examples of these potentially unallowable first-class travel costs.

During our review of FBI’s payments for travel costs, we also identified 75 unusually expensive coach-class tickets that were purchased by the contractors for $100,847, which exceeded basic coach-class fares by approximately $49,848. Upon further inquiry with several airlines, we determined that most of these tickets were for “full fare” coach-class tickets. We noted that the airlines used most often by the contractors indicated that it is possible to obtain a free upgrade to first class with the purchase of the more expensive full-fare coach ticket. We found that in some instances, the current price of a full-fare coach ticket was higher than the current price of a first-class ticket. As discussed above, the JTR requires travelers to use basic economy or coach class unless the use of first-class travel is properly authorized and justified. The JTR defines economy class as basic accommodations that include a service level available to all passengers regardless of fare paid. Since full-fare coach tickets allow a traveler to upgrade to first class at no additional cost, full-fare coach class does not appear to be basic accommodations available to all passengers regardless of fare paid. As such, the purchase of full-fare coach-class tickets is a questionable cost. While the contracts incorporated the JTR, we determined that the JTR applies to civilian employees of the Department of Defense and is not considered appropriate “travel regulations” for contractors. The FAR, which would be appropriate for contractors, requires the use of the lowest customary standard, coach, or equivalent airfare and indicates that costs in excess of the lowest standard, coach, or equivalent airfare are unallowable. Had these provisions of the FAR been applied, the excessive cost of these tickets would have been potentially unallowable. We noted 62 full-fare coach tickets billed by CSC for $85,336, compared to an estimated cost of $41,978 for the basic fully refundable coach-class fares.
We also identified 6 full-fare coach tickets billed by SAIC. In addition, we noted 5 trips billed by SAIC for subcontractor travel with excessive airfare costs for which the airfare class was not included in the supporting documentation provided by SAIC. Therefore, we could not determine whether these 5 trips were first class, full-fare coach, or some other class of travel that exceeded basic coach-class fares. These 11 tickets cost $11,610, compared to an estimated cost of $7,897 for the basic fully refundable coach-class fares. We further found 2 coach tickets billed by Mitretek, with excessive airfares, that were upgraded to first class. These 2 tickets cost $3,901, compared to an estimated cost of $1,123 for the basic restricted coach-class fares. In total, the additional cost of $49,848 for the full-fare coach tickets and other excessive airfare is considered questionable. Table 3 provides examples of the excessive airfare travel costs of CSC, SAIC, and Mitretek.

During our review of labor charged by SAIC, we found that SAIC billed the Trilogy project for overtime hours worked by employees that exceeded the hours that would have been charged if SAIC had followed the overtime policy informally agreed to by SAIC and FBI. Our calculations indicate that FBI may have overpaid an estimated $400,000 for these excess overtime charges. SAIC’s task order, awarded in June 2001, stated that if work beyond the standard 40-hour work week was necessary to support the requirements of the task order, the government would not object to SAIC employees working an extended work week (EWW) (hours in excess of 40 per week). For designated EWW periods, exempt staff (professional staff normally not eligible for overtime compensation) would be paid a pro rata share (straight time) of their weekly salary based on the extended hours worked. EWW periods required SAIC management approval and were used when exempt staff were required to work extended hours for short periods of time due to special circumstances, such as accelerated project schedules or circumstances where employees could not dictate their work schedules. The first EWW period started August 31, 2002, and throughout the Trilogy project SAIC management approved 11 EWW periods for employees working on various Trilogy tasks. In March 2003, after the fourth EWW period started, SAIC implemented an EWW policy, agreed to with FBI, that decreased the number of hours that would be billed to FBI. This policy stated that exempt staff would be compensated on an hour-for-hour basis for hours worked beyond 90 in a 2-week pay period; the first 10 hours of overtime would thus be uncompensated. In addition, a ceiling of 120 hours was established, meaning that employees would not be compensated for hours worked in excess of 120 in a pay period. SAIC agreed that it would not bill FBI for this uncompensated overtime. During our review of employee labor billings for the Trilogy project, we found that SAIC employees who charged EWW time after the March 2003 policy took effect frequently charged for all hours worked beyond 80 in a pay period and that the cost of these hours was billed to and paid by FBI. We also noted some instances where employees charged EWW beyond the 120-hour ceiling per pay period, which were also billed.
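The effect of the March 2003 EWW policy can be expressed as a simple calculation. The following is a minimal sketch of the rule as described above, assuming an 80-hour standard 2-week pay period; it is our illustration of the policy, not SAIC’s actual billing logic.

```python
# Illustrative sketch of the March 2003 extended work week (EWW) policy
# described above, assuming an 80-hour standard 2-week pay period.
# This is our reading of the policy, not SAIC's actual billing code.

STANDARD_HOURS = 80   # regular 2-week pay period covered by salary
EWW_FLOOR = 90        # the first 10 overtime hours (80 to 90) are uncompensated
EWW_CEILING = 120     # hours beyond 120 in a pay period are also uncompensated

def billable_eww_hours(hours_worked: float) -> float:
    """EWW hours billable to FBI for one 2-week pay period under the policy."""
    capped = min(hours_worked, EWW_CEILING)
    return max(0.0, capped - EWW_FLOOR)

# An employee working 100 hours should generate 10 billable EWW hours
# (100 - 90), not the 20 hours (100 - 80) that was frequently billed.
for worked in (85, 100, 120, 135):
    print(f"{worked} hours worked -> {billable_eww_hours(worked):.0f} billable EWW hours")
```

Under this reading, every hour billed between 80 and 90 hours in a pay period, and every hour billed beyond 120, would be an overbilling of the kind discussed below.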
We discussed this issue with SAIC management, and they agreed that their billing of EWW costs was not consistent with the policy that was established in March 2003 and indicated that they would research the issue further to determine whether corrections were necessary. Based on our review of the labor charges, it appears that FBI may have overpaid for more than 4,000 hours of EWW labor charges. Using average fully burdened labor rates for employees incorrectly billing EWW, we estimated that FBI may have overpaid EWW costs by approximately $400,000.

During our review of labor charged by CSC/DynCorp, we found that DynCorp Information Systems (DynIS), a subsidiary of DynCorp that billed about $42 million, or 94 percent, of DynCorp’s direct labor, charged actual labor rates that may have exceeded rates that GSA asserts were established ceiling rates pursuant to the task order. CSC asserts that ceiling rates were never established. If ceiling rates were established, we estimated that FBI overpaid CSC by approximately $2.1 million. When DynCorp entered into the GSA FEDSIM Millennia contract, it agreed to ceiling rates that would be charged for its various labor categories, such as clerical and senior technician. The Millennia contract also stated that ceiling rates applicable to subcontractors would be negotiated separately for each task order awarded under the Millennia contract. After entering into the Millennia contract, DynCorp acquired a company that was renamed DynIS. Because DynIS’ labor rates were not considered when DynCorp’s ceiling rates were established under Millennia, DynCorp’s proposal for the Trilogy task order listed DynIS as a subcontractor. In May 2001, GSA issued a Trilogy task order award document to DynCorp that had a section entitled “Ceiling Rates Applicable to DynIS” that included the following statement: “Ceilings are placed on all labor category and indirect rates used to establish the total cost for this task order…These ceiling rates are subject to negotiation pending the results of [Defense Contract Audit Agency] DCAA’s audit.” GSA officials told us they believed that the DynIS labor category hourly rates in DynCorp’s Trilogy proposal represented established labor category ceiling rates. GSA officials stated that they negotiated DynIS labor category ceiling rates with DynCorp. However, CSC stated that labor category ceiling rates were never established because they were never negotiated with GSA. In March 2003, CSC/DynCorp submitted and GSA approved a modification to the task order that, according to GSA, increased labor rates for several categories. However, CSC claims that this modification did not affect the ceiling rates because the ceilings were never established. Based on our review of DynCorp’s labor invoices, we noted that several of DynIS’ rates charged exceeded the labor rates that GSA contended were ceiling rates. For example, DynIS billed over 14,000 hours for work performed during 2001 for senior IT analysts working on the Trilogy project based on an average hourly rate of $106.14. However, if ceiling rates were established, the DynCorp proposal indicated that the Trilogy project would be charged a maximum of $68.73 per hour for a senior IT analyst working in the field or $96.24 per hour for a senior IT analyst working at headquarters. If ceiling rates were established, we estimated that FBI overpaid CSC/DynCorp by approximately $2.1 million for DynIS labor costs.

We identified certain other payments to contractors that were for questionable costs.
These costs were not supported by sufficient documentation to enable an objective third party to determine if each payment was a valid use of government funds. We further identified costs that were questionable as to whether they were necessary. Table 4 summarizes these questionable costs, which totaled about $7.5 million. A discussion of each of these questionable costs is provided below.

Example 1—Subcontractor Labor
CSC did not provide us adequate supporting documentation for almost $2 million of about $3.3 million of subcontractor labor charges we selected to review. The only documentation CSC could provide us for these charges were subcontractor invoices that lacked some of the basic information needed to assess the labor charges, such as the names of the subcontractor employees, hours billed, or individual labor rates. Therefore, CSC could not fully substantiate that the costs for services provided by the subcontractors that were charged to FBI’s Trilogy project were appropriate.

Example 2—Other Direct Costs/Training
CSC hired a subcontractor, CACI, to schedule and conduct training related to the Trilogy project. CACI billed more than $17 million ($13 million for labor and $4 million for facilities, equipment rentals, and other direct costs) to provide FBI agents and employees basic, intermediate, and advanced training in Microsoft Office applications, including Word, Excel, PowerPoint, and Outlook. FBI officials stated that FBI decided to conduct off-site, hands-on training for employees (instead of internal or CD-based training) because of the number of employees who had limited experience using computers and because FBI had insufficient space to set up training labs at its existing facilities. During our review of CSC ODC, we selected $4.7 million of these training charges from CACI and found that CSC was unable to provide us with adequate support for these charges. Subsequently, we requested supporting documentation from CACI for selected charges totaling about $3.5 million of these training costs. Our examination identified the following issues:
• CACI could not adequately support almost $3 million that it paid to one event planning company. Since FBI decided to conduct its training off-site, CACI hired an event planner, which it paid almost $3.2 million to reserve hotel conference rooms, rent computer equipment for training sessions, and set up the conference rooms for the training. The bulk of the $3.2 million related to one purchase order for training at 72 sites over 3 months, which stated that costs could not exceed $2,992,526. This purchase order provided for payment of 50 percent of this amount to the event planner at the time the purchase order was issued (to cover costs that include prepayments for obtaining training facilities) and four equal monthly payments for the remaining balance. CACI provided us with the purchase order, which included a description of the services to be performed by the event planner. CACI also provided us copies of invoices from the event planner that included general descriptions of the services billed. CACI could not provide any further evidence of the actual costs of goods or services that were provided by the event planner, such as hotel invoices for the rental of conference rooms. CACI stated that documentation supporting actual costs of the event planner was not applicable because its agreement with the event planner was “fixed price.” CACI stated that the payment terms in the purchase order required only that CACI pay the event planner a series of payments in fixed amounts.
However, CACI’s assertion that supporting documentation of actual costs was not applicable was not supported by the terms of the purchase order, which included a related statement of work that specifically required documentation to support costs claimed by the event planner. According to the statement of work, the event planner was required to (1) provide data on actual costs incurred twice a month, (2) make every attempt to obtain the best pricing with respect to all costs, and (3) charge CACI only for services rendered, allowing for any cost savings from advance payments to be returned to CACI upon request.
• CACI purchased about 30,000 ink pens and 30,000 highlighters for training sessions, at a cost of $19,705 and $32,314, respectively. The pens were custom made for the Trilogy training program. While there was supporting documentation for these costs and FBI officials stated that they preapproved the purchases as part of their acceptance of the Trilogy Pre-Training Education Plan, we question whether these purchases were necessary.

Example 3—Other Direct Costs/Equipment Disposal
CSC was unable to provide us adequate supporting documentation for $762,262 in equipment disposal costs billed by two subcontractors. The documentation provided consisted of a spreadsheet that summarized costs of the subcontractors, but did not include receipts or other support to prove that these costs were actually incurred.

Example 4—Subcontractor Labor Invoice—Duplicate Payment
Our review of SAIC’s subcontractor labor charges found that FBI was billed twice for the same subcontractor invoice totaling $26,335. SAIC officials agreed that they had double billed and stated that they would make a correction.

FBI did not adequately maintain accountability for computer equipment purchased for the Trilogy project. FBI relied extensively on contractors to account for Trilogy assets while they were being purchased, warehoused, and installed. However, FBI did not establish controls to verify the accuracy and completeness of the contractor records it was relying on, to ensure that only the items approved for purchase were acquired by the contractors, and to ensure that it received all of the items acquired through its contractors. Moreover, once FBI took possession of the Trilogy equipment, it did not establish adequate physical control over the assets. Consequently, we found that FBI could not locate over 1,200 assets purchased with Trilogy funds, which we valued at approximately $7.6 million. In addition, during its physical inventory counts for fiscal years 2003 through 2005, FBI identified over 30 pieces of Trilogy equipment valued at about $167,000 that it reported as having been lost or stolen. Due to the significant weaknesses we identified in FBI’s property controls, the actual amount of lost or stolen equipment could be even higher.

FBI relied on contractors to maintain records related to the purchasing, warehousing, and installation of about 62 percent of the equipment purchased for the Trilogy project. FBI’s primary contractor responsible for delivering computer equipment to FBI sites was CSC. FBI officials told us they met regularly with CSC and its subcontractors to discuss FBI’s equipment needs and a deployment strategy for the delivery of equipment. Based on these meetings, CSC instructed its subcontractors to purchase equipment, which was subsequently shipped to and put under the control of the subcontractors.
Once equipment arrived at the subcontractors’ warehouses, the subcontractors were responsible for affixing bar codes to accountable items—all items valued at $1,000 or more, as well as certain others considered sensitive, that FBI policy requires to be tracked individually. In addition, FBI directly purchased about $19.1 million of equipment for the Trilogy project that was shipped directly to CSC or its subcontractors. When equipment was shipped from subcontractor warehouses to FBI sites, the shipment included two CSC subcontractor-prepared reports. The first report, similar to a bill of lading, included all items shipped, including nonaccountable items such as cables. However, there was no requirement for FBI officials receiving the items to verify that the items included on this report were actually received. The second report listed accountable assets that were delivered, such as desktop computers, scanners, printers, and network equipment, that were available for installation at that location. This report was then used by the subcontractor during the installation of equipment at each FBI location to prepare the “Site Acceptance Listing” documenting equipment that had been accepted and installed at the site. At the completion of the site installation, both FBI and subcontractor officials were required to sign this Site Acceptance Listing.

According to FBI headquarters officials, verification of the subcontractor-prepared Site Acceptance Listings represented a key control over Trilogy equipment, providing assurance that FBI received what it should have. However, based on our inquiries at two field offices we visited, we found that FBI officials who received equipment and signed the Site Acceptance Listing may not have always verified the accuracy and completeness of these lists. An official from the Baltimore field office acknowledged that he signed these lists without verifying that the items included had actually been delivered and installed at his site. In addition, officials from the Newark field office said they felt comfortable that they had received all the items they were supposed to because of their close working relationship with the subcontractor who performed the installation; however, they acknowledged that they did not independently verify the equipment included on the contractor lists that they had signed. FBI did not prepare its own independent lists of ordered, purchased, or paid-for assets, and therefore it had no choice but to rely solely on the contractor lists to account for its Trilogy assets.

Furthermore, when FBI received shipments from contractors, it did not compare purchasing and billing documentation to receiving documentation to verify that all items purchased were received, as required by FBI’s accountable asset manual. According to FBI policy, when shipments are received, a designated property custodian is responsible for ensuring that the items received are the same as those that were ordered and for determining whether a complete or partial shipment was received. However, FBI did not require that these procedures be followed for the Trilogy project because purchasing and billing documentation for the project was not site specific; instead, the program office instructed FBI staff to verify only the number of boxes received and not to open the boxes to verify the assets received until the deployment team arrived. In addition, FBI did not perform an overall reconciliation of total assets ordered and paid for to those received.
Such a reconciliation would have been made difficult by the fact that invoices FBI received from CSC did not include item-specific information—such as bar codes, serial numbers, or shipping location. However, failure to perform such a reconciliation left FBI with no assurance that it had received all of the assets it paid for.

Assets that were delivered to FBI sites by contractors were not entered into FBI’s Property Management Application (PMA) in a timely manner, increasing the risk that assets could be lost or stolen without detection. FBI policy requires property management personnel to identify accountable items and enter them into PMA within 30 days of receipt. However, FBI officials acknowledged that Trilogy equipment had not been entered into PMA within 30 days, as required. We compared installation dates recorded in CSC’s database of assets deployed to the dates assets were recorded in PMA. As shown in table 5, we found that 71.6 percent of the CSC items that were recorded in PMA, representing 84 percent of the dollar value, were entered more than 30 days after receipt, contrary to FBI policy. In addition, 16.9 percent of the assets, representing 37 percent of the dollar value, were entered more than a year after receipt. When an asset is not recorded in the property system, there is no systematic means of identifying where it is located or when it is moved, transferred, or disposed of, and no record of its existence when physical inventories are performed. This severely limits the effectiveness of the physical inventory in detecting missing assets.

In an effort to identify the assets that should have been entered into PMA, FBI attempted in 2005 to create an after-the-fact inventory listing of accountable and nonaccountable assets deployed. Because FBI had not prepared its own independent inventory listing of Trilogy assets ordered and paid for, it used the CSC-prepared list of equipment deployed as its basis for determining accountable assets. According to FBI, this list was supposed to include all CSC-deployed equipment that had been affixed with a bar code. However, FBI’s ability to accurately identify accountable assets was hampered by its loss of control over bar codes. FBI policy identifies the use of bar codes as “the key control” for maintaining individual asset accountability and requires that bar codes be affixed to all accountable assets. Despite the importance of maintaining a reliable bar code system, FBI relied on contractors to affix the bar codes but did not track the bar code numbers given to contractors, the bar code numbers they used, or the bar code numbers returned. Moreover, FBI provided incorrect instructions to contractors, initially directing them to bar code certain types of nonaccountable computer components. An FBI official stated that when creating its after-the-fact listing of accountable and nonaccountable assets from the CSC listing, FBI tried to identify and list as nonaccountable those items that had been mistakenly bar coded. However, we found that FBI’s accountable asset listing still included some nonaccountable assets that had been bar coded in error. Further, we noted that FBI’s listing of nonaccountable assets incorrectly included some accountable items, such as uninterruptible power supplies and network switches. As a result, FBI could not reliably determine the complete universe of Trilogy assets that should have been bar coded and designated as accountable property to be tracked separately by PMA.
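The comparison described next, matching an asset listing to PMA records by bar code and serial number, can be sketched in a few lines. The records, fields, and messages below are hypothetical assumptions for illustration, not FBI’s actual PMA data or the audit’s actual procedures.

```python
# Hypothetical sketch of matching a contractor-prepared deployment listing
# to PMA records by bar code and serial number. Records are illustrative.

deployed = [
    # (bar_code, serial_number, description)
    ("E100001", "SN-AAA", "desktop computer"),
    ("E100002", "SN-BBB", "network switch"),
    ("E100003", "SN-CCC", "server"),
]

pma = {
    # bar_code -> serial number recorded in PMA
    "E100001": "SN-AAA",   # full match
    "E100002": "SN-XYZ",   # bar code matches, serial number does not
    # E100003 absent entirely: a potentially unlocated asset
}

for bar_code, serial, desc in deployed:
    recorded = pma.get(bar_code)
    if recorded is None:
        print(f"{bar_code} ({desc}): not in PMA, investigate as unlocated")
    elif recorded != serial:
        print(f"{bar_code} ({desc}): serial mismatch ({serial} vs {recorded})")
    else:
        print(f"{bar_code} ({desc}): reconciled")
```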
We also compared FBI’s after-the-fact listing of accountable assets identified from the CSC-prepared listing to the asset records in FBI’s PMA. We found that FBI’s listing and/or PMA included several errors and omissions, including:
• accountable assets for which there was no listed bar code or serial number;
• incorrect bar codes (for example, text bar codes or bar codes with too many or too few characters);
• items for which locations were listed as “unknown”;
• assets with the same bar code but different serial numbers and/or descriptions;
• incomplete and inaccurate asset descriptions;
• items that matched to PMA by bar code but not by serial number; and
• items that matched to PMA by serial number but not by bar code.

The FBI official who prepared the accountable asset listing said he gave this listing to each site with instructions to ensure that all of the assets had been entered into PMA in preparation for a 2005 physical inventory count. However, FBI did not follow up to determine whether all of the records in the inventory listing were actually entered into PMA. For site officials using the listing, the lack of complete and accurate information included in the inventory listing may have limited their ability to track some of the assets and ensure they were accounted for in PMA.

FBI policy requires complete physical inventories of all accountable assets at least once every 2 years. Annually, a complete physical inventory is performed of all accountable assets that are also capitalized assets (i.e., those with an acquisition cost of $25,000 or more) and of “sensitive” property (e.g., laptop computers and weapons, which are susceptible to theft). FBI’s most recent biennial inventory of accountable assets occurred in the spring of 2005. To complete its inventory, FBI used scanner technology, directing employees responsible for performing the inventory to scan all items found at FBI locations that contained a bar code. PMA was updated to reflect the items that were located and scanned during the inventory, and PMA generated reports to identify new accountable assets that were not previously entered in the system. However, FBI did not compare the results of its inventory to its listing of accountable assets purchased under Trilogy to ensure that all of these assets were actually located during the inventory. Failing to perform this elemental step undermines the fundamental purpose of conducting physical inventories.

Given that FBI did not ensure that all accountable Trilogy assets that should have been in its possession (i.e., those it paid for) were located during the physical inventory, we undertook several procedures in an attempt to do so. To perform this test work, we used FBI’s inventory listing of CSC-purchased accountable equipment, as well as similar FBI listings of assets FBI purchased directly (government furnished equipment, or GFE) and assets that were purchased by SAIC. Although FBI’s inventory listing of CSC-purchased accountable equipment included inaccurate and incomplete information, as previously discussed, we were able to reconcile the total number of items for selected types of equipment from its listing of accountable CSC-purchased equipment to the number of these assets invoiced by CSC. This provided some assurance that the listing of accountable CSC-deployed equipment purchased by both CSC and FBI for those asset types includes all accountable assets FBI paid for and that should be in FBI’s possession.
This reconciliation was done for selected CSC-purchased accountable assets, which represented approximately 76 percent of the total number of CSC-purchased equipment items, and for all SAIC-purchased assets. Therefore, we used these asset listings to determine whether accountable assets were located during FBI’s most recent physical inventory. We obtained several iterations of PMA listings and inventory reports from FBI and attempted to trace the assets to these reports. Collectively, these listings and reports should have included all accountable Trilogy assets in FBI’s possession at the time of its 2005 inventory. Based on this comparison, we identified 1,205 accountable Trilogy assets, with an estimated value of approximately $7.6 million, that FBI has been unable to locate or otherwise account for. We estimated this value using the lowest per-unit cost based on the Trilogy equipment-pricing sheets that were prepared by FBI and used in recording the cost of the same types of assets in PMA. If we could not identify a price for a certain type of accountable asset in FBI’s equipment-pricing sheets, we identified the lowest price on the accountable and capitalized assets spreadsheet prepared by FBI’s finance division. When the cost was not available on either of these documents, or when the item was unknown, we did not attempt to estimate the asset’s value. As a result, our estimated value of lost or stolen equipment does not include 103 of the 926 CSC-purchased items we identified, such as Paradyne frame savers and Optical HBA drivers, and therefore is understated. Table 6 provides a description and estimated value for the assets for which we could identify unit costs.

As of November 30, 2005, FBI was unable to sufficiently explain why these items were not accounted for in PMA and/or could not provide adequate documentation that the assets had been located. An FBI official stated that some of the assets included in the listing of CSC-purchased equipment would not be expected to be in PMA because some were replaced. For example, according to the official, some of the CSC-purchased switches were replaced due to a heating malfunction. However, FBI did not provide us with documentation related to replaced items, and therefore we could not determine which units, if any, were replaced and/or which units should still be on hand. The FBI official also told us that, even though he attempted to remove all nonaccountable items from the listing of CSC-purchased equipment, some nonaccountable items may still have been included. For example, FBI told us some purchased components that were part of an accountable asset unit may have been bar coded even though the item by itself was not an accountable item. Using FBI guidance on accountable property, we determined that 103, or about 11.1 percent, of the missing 926 CSC-purchased assets may represent nonaccountable units. Because FBI was unable to provide us with location information for these items, we could not definitively determine whether they represent nonaccountable components or separate accountable assets that were not in PMA and could not be located. FBI had no further explanation for why it could not locate the missing assets we identified or whether the missing assets may expose confidential and sensitive information and data to unauthorized users.

In addition to the missing items discussed above, FBI could not initially locate another 25 purchased assets—highly sensitive encryption equipment—in its PMA system.
Subsequently, FBI officials were able to provide the bar codes, locate the encryption equipment, and provide evidence that all of the items were now in its PMA system. The officials stated that the equipment was not originally required to be bar coded or tracked in PMA but that it was tracked several different ways by serial number. The officials also explained that the problem resulted mostly from FBI modifications to the equipment that required revisions to the serial numbers listed in the invoices. Although the equipment was subsequently located after research and inquiries, such highly sensitive equipment needs to be accounted for properly and promptly so that its precise location can be determined at all times.

In addition to the items we found missing, FBI’s property management division reported 37 CSC-purchased Trilogy assets, totaling approximately $167,000, that were determined to be lost or stolen during its physical inventory counts for fiscal years 2003 through 2005. The assets reported as lost or stolen included computers and servers, which may have contained sensitive and confidential information. According to FBI policy, for items in PMA that cannot be located during the inventory, a “Report of Lost or Stolen Property” must be submitted to FBI headquarters. Due to security concerns, FBI did not provide us copies of these reports for the property items that were not located during the 2003, 2004, and 2005 inventories. Therefore, it is unclear what type of security risk, if any, these lost or stolen assets represent.

FBI’s Trilogy IT project spanned 4 years, and its reported costs exceeded $500 million. Our review disclosed that there were serious internal control weaknesses over the process used by FBI and GSA to approve contractor charges related to Trilogy, which made up the vast majority of the total reported project cost. While our review focused specifically on the Trilogy program, the significance of the issues identified during our review may be indicative of more systemic contract and financial management problems at FBI and GSA, in particular when using cost-reimbursable type contracts and interagency contracting vehicles. These weaknesses resulted in the payment of millions of dollars of questionable contractor costs, which may have unnecessarily increased the overall cost of the project. Unless FBI strengthens its controls over contractor payments, its ability to properly control the costs of future projects involving contractors, including its new Sentinel project, will be seriously compromised. Additionally, to the extent that GSA enters into similar interagency agreements, it will continue to be exposed to oversight lapses until it reassesses its procedures. Further, weaknesses in FBI’s controls over the equipment acquired for Trilogy resulted in millions of dollars in missing equipment and call into question FBI’s ability to adequately safeguard both its equipment and the confidential and sensitive information that could be accessed through that equipment.

We are making the following 27 recommendations to the Director of FBI and the Administrator of General Services to (1) facilitate the effective management of interagency contracting, (2) mitigate the risks of paying unallowable costs in connection with cost-reimbursement type contracts, and (3) improve FBI’s accountability for and safeguarding of its computer equipment.
To improve FBI’s controls over its review and approval process for cost- reimbursement type contract invoices, we recommend that the Director of FBI instruct the Chief Financial Officer to establish policies and procedures so that: Future interagency agreements establish clear and well-defined roles and responsibilities for all parties included in the contract administration process, including those involved in the invoice review process, such as contracting officers, technical point of contacts, contracting officer’s technical representatives, and contractor personnel with oversight and administrative roles. Appropriate steps are taken during the invoice review and approval process for every invoice cost category (i.e., labor, travel, other direct costs, equipment, etc.) to verify that the (1) invoices provide the information required in the contract to support the charges, (2) goods and services billed on invoices have been received, and (3) amounts are appropriate and in accordance with contract terms. The resolution of any questionable or unsupported charges on contractor invoices identified during the review process is properly documented. Labor rates, ceiling limits, treatment of overtime hours, and other key terms for cost determination are clearly specified and documented for all contracts, task orders, and related agreements. Future contracts clearly reflect the appropriate Federal Acquisition Regulation travel cost requirements, including the purchase of the lowest standard, coach, or equivalent airfare. An appropriate process is in place to assess the adequacy of contractor’s review and documentation of submitted subcontractor charges before such charges are paid by FBI. In light of the findings in this report, we recommend that the Administrator of General Services instruct the director of FEDSIM to reassess its procedures in connection with (1) interagency contracts and (2) delegated contract administration responsibilities, including the following: Clearly defining the roles and responsibilities of each party in interagency agreements, and particularly those related to reviewing and approving invoices. Assessing the adequacy of its invoice review and approval polices, including specific steps to be performed by each party so that (1) invoices provide the information required in the contract to support the charges, (2) goods and services billed on invoices have been received, (3) amounts are appropriate and in accordance with contract terms, and (4) the resolution of any questionable or unsupported charges on contractor invoices identified during the review process is clearly documented. Clearly documenting labor rates, ceiling limits, treatment of overtime hours, and other key terms for cost determination for all contracts, task orders, and related agreements. Clearly reflecting in future contracts the appropriate Federal Acquisition Regulation travel cost requirements, including the purchase of the lowest standard, coach, or equivalent airfare. Confirming that contractors properly review and support submitted subcontractor charges. To address issues on the Trilogy project that could represent opportunities for recovery of costs, we recommend that the Administrator of General Services, in coordination with the Director of FBI: Confirm SAIC’s informal Extended Work Week policy and work with SAIC to determine and resolve any overpaid amounts. 
- Further investigate whether DynIS' labor rates exceeded ceiling rates and pursue recovery of any amounts determined to have been overpaid.
- Determine whether other contractor costs identified as questionable in this report should be reimbursed to FBI by contractors.
- Consider engaging an independent third party to conduct follow-up audit work on contractor billings, particularly areas of vulnerability identified in this report.

To improve FBI's accountability for purchased assets, we recommend that the Director of FBI instruct the Chief Financial Officer to:

- Establish policies and procedures so that (1) purchase orders are sufficiently detailed to be used to verify receipt of equipment at FBI sites and (2) contractor invoices are formatted to tie directly to purchase orders and facilitate easy identification of equipment received at each FBI site.
- Reinforce existing policies and procedures so that when assets are delivered to FBI sites, they are verified against purchase orders and receiving reports. Copies of these documents should be forwarded to FBI officials responsible for reviewing invoices as support for payment.
- Establish policies and procedures so that invoices are paid only after all verified purchase order and receipt documentation has been received by FBI payment officials and reconciled to the invoice package.
- Establish a policy to require that, upon receipt of property at FBI sites, FBI personnel immediately identify all accountable assets and affix bar codes to them.
- Revise FBI's policies and procedures to require that all bar codes be centrally issued and tracked through periodic reconciliation of bar codes issued to those used and remaining available. Assigned bar codes should also be noted on a copy of the receiving report and forwarded to FBI's Property Management Unit.
- Revise FBI policies and procedures to require that accountable assets be entered into PMA immediately upon receipt rather than within the current 30-day time frame.
- Require officials inputting data into PMA to enter (1) the actual purchase order number related to each accountable equipment item bought, (2) asset descriptions that are consistent with the purchase order description, and (3) the physical location of the property.
- Establish policies and procedures related to the documentation of rejected or returned equipment so that (1) equipment that is rejected immediately upon delivery is noted on the receiving report that is forwarded to FBI officials responsible for invoice payment and (2) equipment that is returned after being accepted at an FBI site (e.g., items returned due to defect) is annotated in PMA, including the serial number and location of any replacement equipment, under the appropriate purchase order number.
- Reassess overall physical inventory procedures so that all accountable assets are properly inventoried and captured in the PMA system and all unlocated assets are promptly investigated.
- Expand the next planned physical inventory to include steps to verify the accuracy of asset identification information included in PMA.
- Establish an internal review mechanism to periodically spot check whether the steps listed above—including verifications of purchase orders and receiving reports against received equipment, immediate identification and bar coding of accountable assets, maintenance of accurate asset listings, prompt entry of assets into PMA, documentation of rejected and returned equipment, and improved bar coding and inventory procedures—are being carried out.
- Investigate all missing, lost, and stolen assets identified in this report to (1) determine whether any confidential or sensitive information and data may be exposed to unauthorized users and (2) identify any patterns related to the equipment (e.g., by location or property custodian) that necessitate a change in FBI policies and procedures, such as assignment of new property custodians or additional training.

In written comments reprinted in appendix III, FBI stated that it concurred with our recommendations and that it has made and continues to make significant structural and procedural changes to address them, taking critical steps to strengthen internal controls. FBI also provided additional information related to Trilogy assets we identified as missing. In written comments reprinted in appendix IV, GSA stated that it accepted our recommendations, did not believe that one of them was needed, and described some of the improvements to its internal controls and other business process changes already implemented. GSA also expressed concern with some of our observations and conclusions related to the invoice review and approval process and our analysis of airfare costs. FBI and GSA also provided technical comments, which we have incorporated as appropriate. In its comments, FBI stated that executive management at FBI has directed a sustained effort to address and correct weaknesses identified in our report and other Trilogy reviews. FBI further stated that attention is being focused on four areas: (1) audit capability, (2) property management, (3) contracting services, and (4) IT investments. If properly implemented, the activities outlined in FBI's letter should help improve FBI's accountability for future IT acquisitions and other contract services. In this regard, vigilant oversight will be needed to ensure controls are correctly designed and operating effectively to protect assets and prevent improper payments. Further, in its comments, FBI stated that more than 44,000 pieces of accountable property were successfully deployed and tracked in FBI's PMA during the Trilogy project. FBI also stated that the 1,404 items we initially reported as missing or improperly documented represented approximately 3 percent of the accountable assets. We question both of these statements. Because of the control weaknesses discussed in our report, FBI does not have a reliable basis to know the number of Trilogy assets it purchased or how many should have been tracked as accountable assets. Further, since we did not test all the assets purchased, more may be missing. FBI also stated that as of January 2006, it had accounted for more than 1,000 of the 1,404 items we reported as missing or improperly documented. During the agency comment period, FBI indicated that it found 237 items we previously identified as missing and provided evidence, not made available during our audit, to sufficiently account for 199 of these items. We adjusted the missing assets listing in our report to reflect 1,205 (1,404 – 199) assets as still missing. In February 2006, FBI informed us that the approximately 800 remaining items noted in its official agency response included (1) accountable assets not in PMA because they were either incorrectly identified as nonaccountable assets or mistakenly omitted, (2) defective accountable assets that were never recorded in PMA and subsequently replaced, and (3) nonaccountable assets or components of accountable assets that were incorrectly bar coded.
We considered these same issues during our audit and attempted to determine their impact. For example, as stated in our report, FBI told us that components of some nonaccountable assets that were part of a larger accountable item may have been mistakenly bar coded. Using FBI guidance on accountable property, we determined that 103, or about 11 percent, of the 926 missing assets purchased by CSC may have represented nonaccountable components. Because FBI could not provide us with the location information, we could not definitively determine whether the items were accountable assets. During the course of our audit, FBI was not able to provide us with any evidence to support its other statements regarding the reasons the assets could not be located. While we are encouraged by FBI's current efforts to account for these assets, its ability to definitively determine their existence has been compromised by the numerous control weaknesses identified in our report. Further, the fact that assets have not been properly accounted for to date means that they have been at risk of loss or misappropriation without detection since being delivered to FBI—in some cases for several years. While GSA said it accepted all of our recommendations, it expressed reservations regarding our recommendation that GSA clearly reflect appropriate FAR travel cost requirements in future contracts. GSA stated in its comment letter that it believed that the requirements outlined in the applicable FAR section 31.205-46 and stipulated in the task orders were more than adequate. In a subsequent conversation, we asked the GSA contracting officer why GSA considered appropriate the language in the CSC and SAIC task orders and the Mitretek contract, which stated that long-distance travel would be reimbursed to the extent allowable pursuant to the JTR. The GSA contracting officer stated that, while not specified in the contract language, the reference to the JTR related only to per diem rates and allowances when determining the reasonableness of travel costs, such as lodging and mileage reimbursements. She further stated that the FAR would apply to all other travel reimbursement determinations. We do not agree that our recommendation is unnecessary. In our view, the references to the JTR create ambiguity. The FAR cost allowability clause 52.216-7 states that when determining allowability, in addition to FAR cost principles, the terms of the contract also apply. Therefore, the reference to allowability under the JTR could have caused confusion among the contractors regarding what long-distance travel costs were allowed, including airfare costs. We continue to believe that the task orders should have more clearly described the applicable travel requirements. Regarding the invoice review and approval process, GSA stated that each member of the review team—FBI, GSA, and Mitretek—played a unique and mutually understood role. In particular, GSA stated that Mitretek's role in the invoice review and approval process was significant and that it was reasonable for GSA to have relied on input from FBI, via Mitretek, in approving invoices for payment. GSA also referred to procedures to preapprove ODC and equipment purchases. Further, GSA stated that it believed that the procedures to process invoices were generally sound and that contractors are required to maintain records to adequately demonstrate that costs claimed have been incurred and are reasonable, allowable, and allocable.
GSA also stated that it will have DCAA audit the contract costs to determine if any costs are unallowable, unreasonable, or unallocable and will use the audit results as a basis to pursue remedies to recoup funds and assess penalties as may be applicable. We disagree with GSA regarding the review team roles and the review process. Based on discussions with members of the review team, our review of supporting documentation, and our assessment of the outcomes of the review process, it is clear that the invoice review and approval process was inadequate. The roles and responsibilities of the review team members were not clearly defined or documented, and this led to confusion about each member's role. Regarding Mitretek's role, Mitretek officials stated that they performed a limited review of only labor invoices. Before relying on others, GSA should have verified its understanding of each member's roles and responsibilities and confirmed that the appropriate functions were being performed. In addition, while there were procedures to preapprove ODC and equipment purchases, the review team did not effectively link the preapproval and the invoice review and approval processes, especially in relation to CSC, in part because CSC invoices lacked the detailed information needed to verify that charges billed were in fact preapproved. Further, while contractors are required to maintain records to adequately support costs, we found that the review team generally did not request additional documentation, such as travel vouchers or subcontractor invoices, to support amounts billed. If the review team had a systematic process in place to review costs, it might have questioned some of the excessive airfare we identified and found, as we did, that CSC lacked documentation to adequately support subcontractor charges. It is a management function and sound business practice to have a process in place to ensure that contractors have such documentation. Having such processes and questioning amounts billed would also allow corrective measures to be implemented if and when problems were found. In addition, while we agree that a DCAA audit of contract costs can provide a detective control to help determine whether contractor costs were proper, reliance on an after-the-fact audit is not an acceptable replacement for the type of real-time monitoring and oversight of contractor costs—preventive controls—that we recommend in this report. Further, a DCAA audit of civilian contractor costs is not automatic and would impose an additional cost on the government. The review team largely operated in an environment of trust without an adequate basis for knowing whether the contractor billings were reasonable and costs claimed were allowable. Effective internal control calls for a sound, ongoing invoice review and approval process as the first line of defense in preventing unallowable costs. Regarding our analysis of travel costs, GSA stated that our conclusions did not account for the ever-changing travel schedules and itineraries necessitated by changes in FBI requirements. GSA also stated that a hypothetical standard coach-class ticket does not provide a benchmark to make a valid price comparison, even if adjusted for inflation, because the airline travel industry has had significant changes with respect to pricing of airline tickets. GSA also stated that it believes it is impossible at this date to look back over 5 years and estimate what may have been a reasonable airfare price.
We disagree. Our analysis and conclusions related to travel did take into account the possible conditions that could justify airfare costs in excess of the lowest customary coach-class fare. The FAR requires that first-class and other excessive airfare costs of the nature we identified be supported by documentation justifying the higher costs. No such documentation was provided to us to justify the excessive costs we identified. To estimate the cost of the coach-class tickets, we assumed that tickets were purchased 3 days in advance (the average for the trips we reviewed) and did not include a Saturday-night stayover. Specifically, we (1) used the Web sites of the airlines used by each traveler, (2) searched for standard fully refundable coach-class tickets with the same destinations, (3) calculated an average cost based on the lowest and highest ticket prices available at the time of our search, and (4) adjusted the average cost for inflation applicable to airfare. We believe that this approach, which closely approximated what travelers were doing at that time, resulted in reasonable estimates of how much the travel should have cost. We also believe that adjusting current fares for inflation applicable to airfare results in a reasonable benchmark against which to compare historical prices, since it takes into account price changes resulting from changes in the airline industry, including the effects of competition. Lastly, we fully agree with GSA that the passage of time makes it difficult to determine historical airfare costs, which is another reason that costs should be reviewed in real time instead of as part of an after-the-fact audit. An after-the-fact analysis is no substitute for the contemporaneous monitoring and oversight that we recommend in this report. More specific discussions are provided following GSA's comments, which are reprinted in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Director of the FBI, the Acting Administrator of GSA, and interested congressional committees. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9508 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix V. The Federal Bureau of Investigation's (FBI) Trilogy project experienced several delays, as shown in figure 4. To determine whether the Federal Bureau of Investigation's (FBI) internal controls provided reasonable assurance that improper payments to contractors would not be made or would be detected in the normal course of business, we used the Standards for Internal Control in the Federal Government, Guide for Evaluating and Testing Controls Over Sensitive Payments, and The Executive Guide on Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations as a basis for assessing FBI's internal control structure over its Trilogy program.
We also reviewed our prior reports, as well as those by the Department of Justice's Office of Inspector General on Trilogy issues; Trilogy contracts and interagency agreements; and contractor invoices and other documentation supporting goods provided and services rendered. In addition, we conducted interviews with officials from FBI, the General Services Administration, the Department of the Interior, and the contractors, and performed walkthroughs to gain an understanding of the processes used to review and approve invoices. To determine whether FBI's payments to contractors were properly supported as a valid use of government funds, we performed data mining, document analysis, and other forensic auditing techniques to select transactions to test. We reviewed documentation maintained by the review team, contractors, or subcontractors to assess the allowability of costs based on Trilogy contract documents and applicable federal regulations, such as the Federal Acquisition Regulation, Federal Travel Regulation, and Joint Travel Regulations. While we identified some payments for questionable costs, our work was not designed to identify all questionable payments or to estimate their extent. The following provides more details on our testing of payments to FBI's contractors—Science Applications International Corporation (SAIC), Computer Sciences Corporation (CSC), and Mitretek—for labor, subcontractor labor, travel, and other direct costs (ODC). To test payments for labor costs, we obtained from SAIC a database of hours charged to the Trilogy project by employee and pay period. Using this database and labor invoice detail, we selected 21 employees based on (1) a high number of hours worked, (2) a high dollar amount billed, or (3) billing in more than one labor category. For these 21 employees, to test the labor rates billed, we compared rates billed to salary information. In addition, for subsets of these 21 employees, we compared the hours billed to hours reported in SAIC's labor database and tested the mathematical accuracy of the labor costs billed. To determine the reasonableness of extended work week (EWW) hours charged to the Trilogy project, we used the database to analyze the total, EWW, and uncompensated hours charged by employee. We also compared the average fully burdened labor rates charged to ceiling rates to determine whether the rates were below the ceilings. To test payments for subcontractor labor costs, we obtained from SAIC a database of all subcontractor labor charges. To determine whether the database was complete, we verified that the database reconciled with SAIC's subcontractor billings. We then selected subcontractor invoices to review based on a high dollar amount billed or unusual billing patterns. We analyzed supporting documentation, such as subcontractor invoices and time sheets, from SAIC for about $17.2 million, or 37 percent, of payments for SAIC subcontractor labor. To test payments for travel costs, using detail included in SAIC's travel invoices and copies of travel authorizations provided by SAIC, we selected transactions to review based on (1) high airfare costs, (2) actual costs that exceeded authorized amounts, and (3) unusual billing patterns. We analyzed supporting documentation, such as travel vouchers, receipts, and subcontractor invoices, from SAIC for about $154,000, or 45 percent, of payments for SAIC travel costs.
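To make the airfare comparisons concrete, the following is a minimal sketch of the benchmark arithmetic described in our response to GSA's comments: averaging the lowest and highest available coach fares for the same route and adjusting the average for inflation. The record layout, tolerance threshold, and dollar figures are hypothetical illustrations, not data from our review.

```python
# Minimal sketch of the airfare benchmark arithmetic. The record layout,
# tolerance threshold, and fares are hypothetical; actual benchmark fares
# were gathered from airline Web sites and adjusted for inflation
# applicable to airfare.

def benchmark_fare(lowest_fare, highest_fare, inflation_factor):
    """Average the lowest and highest coach fares found for the same route,
    then adjust for inflation to approximate the fare at the time of travel."""
    return (lowest_fare + highest_fare) / 2 * inflation_factor

def flag_excessive_airfare(trips, inflation_factor, tolerance=1.5):
    """Select trips whose billed airfare substantially exceeds the benchmark
    and therefore warrants a request for supporting justification."""
    flagged = []
    for traveler, billed, low, high in trips:
        estimate = benchmark_fare(low, high, inflation_factor)
        if billed > tolerance * estimate:
            flagged.append((traveler, billed, round(estimate, 2)))
    return flagged

# Hypothetical example: a $2,400 ticket against coach fares of $300 to $500,
# deflated to the year of travel (a factor below 1 deflates current fares).
print(flag_excessive_airfare([("traveler A", 2400.00, 300.00, 500.00)], 0.9))
```

As noted in our response to GSA's comments, airfare above the lowest customary coach fare is allowable under the FAR only when the applicable conditions are documented and justified.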
To test payments for ODC, using detail included in SAIC's ODC invoices, we selected transactions with unusually large amounts within a category or with an unusual category description. We analyzed supporting documentation, such as invoices or other documentation, from SAIC for about $307,000, or 61 percent, of payments for SAIC ODC. Because CSC was unable to readily provide us transaction-level detail for all labor, travel, and ODC charges, we selected 11 invoices based on the amounts billed and the time periods covered. CSC was able to provide us transaction-level detail for these 11 invoices, which represented $14.7 million, or about 33 percent, of labor costs; $3.1 million, or about 33 percent, of travel costs; and $2.4 million, or about 27 percent, of ODC charges. Using these 11 invoices as our data source, we performed the following tests of CSC labor, travel, and ODC. We recalculated the total labor charged for three labor categories in 7 of the 11 invoices to verify that the invoice amounts were calculated correctly. We also selected 11 employees based on (1) a high number of hours worked, (2) a high dollar amount billed, or (3) billing in more than one labor category. For these 11 employees, we compared the hours billed to time sheets and verified hourly rates by reviewing each employee's salary history. In total, the 11 selected employees billed around $850,000 on the 11 invoices we reviewed. We tested the reliability of the detail provided by comparing the hours and amounts to labor invoices. We compared the average fully burdened rates charged to ceiling rates. We selected travel charges that were high in amount or exhibited unusual billing patterns. We reviewed travel vouchers for these selected charges. Because we identified possible first-class and unusual coach-class travel in these selections, we obtained and reviewed additional supporting documentation for CSC-purchased airline tickets beyond the initial 11 invoices selected for review. We selected ODC transactions with unusually large amounts within a category or in an unusual category (such as computer hardware). Because of anomalies we identified in our initial review, we selected additional transactions to review beyond the initial 11 invoices. In total, we analyzed supporting documentation for about $7.0 million, or about 80 percent, of payments for CSC ODC during the Trilogy project. To test payments for subcontractor labor costs, we obtained from CSC transaction-level detail for 12 of its subcontractors during the Trilogy project. From the transaction-level detail, we selected charges to review based on (1) a high number of hours worked, (2) a high amount billed, and (3) other unusual billing patterns. We obtained and analyzed supporting documentation, such as subcontractor invoices, from CSC for about $3.3 million, or 4 percent, of the $75 million charged by CSC as subcontractor labor costs during the Trilogy project. To test payments for labor costs, we obtained transaction detail for three labor invoices, which represented $1.5 million, or 8 percent, of the payments for Mitretek labor. We tested the mechanical accuracy of the invoice calculations and, for one of the invoices, verified hours billed against time cards and hourly rates charged against salary histories. To test payments for travel costs, we obtained and analyzed the supporting documentation, such as travel vouchers, for all travel costs on two invoices.
These invoices represented $11,211, or about 13 percent, of payments to Mitretek for travel costs. To test payments for ODC, we obtained and analyzed the supporting documentation, such as invoices and receipts, for all ODC costs on two invoices. These invoices represented $139,083, or about 8 percent, of payments to Mitretek for ODC. To determine whether FBI maintained proper accountability for assets purchased with Trilogy project funds, we used our Standards for Internal Control as a basis to assess FBI's control structure over its Trilogy assets. We interviewed FBI, contractor, and subcontractor staff to identify and assess the controls in place over the ordering, purchasing, and receipt of Trilogy equipment. The following provides more details on our testing of Trilogy equipment purchased for FBI by CSC and SAIC, or directly by FBI: To determine whether FBI approved for purchase all assets acquired for the Trilogy project, we obtained FBI consents to purchase, Bills of Material, and invoices and compared the total assets approved to be purchased to assets actually purchased. To determine whether FBI Trilogy accountable assets listed in FBI's Property Management Application (PMA) were recorded in a timely manner, we obtained documentation from FBI and contractors for accountable assets purchased by CSC that identified the bar codes assigned to accountable assets and the date the equipment was received by FBI. We did not perform this test for SAIC-purchased assets because those assets represented only 0.8 percent of the total assets purchased with Trilogy funds. We also did not perform this test for FBI direct purchases since the supporting documentation did not provide bar codes or serial numbers for individual assets. We compared the bar codes on the listings to PMA, which included the date the asset was entered into PMA. To assess the accuracy and completeness of the FBI-prepared listings of CSC- and SAIC-purchased assets, we (1) analyzed the listings to identify any irregularities, such as duplicate bar codes or missing information; (2) obtained the CSC equipment invoices and, for four selected accountable asset types representing about 76 percent of the total CSC assets purchased, compared the total number of pieces billed on the invoices to FBI's listing; and (3) obtained the SAIC listings of Trilogy equipment returned to FBI, SAIC's equipment invoices, and FBI's listing of VCF assets and compared, for each item, the quantity of equipment per the invoices to the SAIC listing and then to FBI's VCF listing. To determine whether FBI had in its possession all accountable assets purchased for it by CSC and SAIC, we compared the complete listing of bar codes from FBI's VCF and CSC listings to PMA to identify any bar codes not recorded in PMA. To test the accuracy of the data included in the PMA accountable asset records, we compared the data for each accountable asset, such as bar code number, serial number, asset description, and asset location, to FBI's listing and followed up on any discrepancies. To identify Trilogy assets that had been reported as lost or stolen by FBI, we obtained a listing of all assets identified as lost or stolen by FBI during its annual inventories for years 2003, 2004, and 2005. We then compared this listing, by bar code, to FBI's CSC and VCF equipment listings to determine which of these assets had been acquired for the Trilogy project.
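A minimal sketch of the bar-code comparison and lowest-cost valuation steps described above follows. The data structures are hypothetical simplifications of the FBI equipment listings, PMA extracts, and Trilogy pricing documents we used, and the prices in the example are invented for illustration.

```python
# Minimal sketch of the bar-code comparison and valuation steps described
# above. The data structures are hypothetical simplifications of the FBI
# equipment listings, PMA extracts, and Trilogy pricing documents.

def unrecorded_barcodes(listing_barcodes, pma_barcodes):
    """Return bar codes on the CSC/VCF equipment listings that never
    appear in PMA."""
    return set(listing_barcodes) - set(pma_barcodes)

def lowest_unit_cost(asset_type, pricing_sheets, finance_listing):
    """Take the lowest price from the Trilogy equipment-pricing sheets;
    fall back to the finance division's accountable and capitalized assets
    spreadsheet; return None if neither document lists a price."""
    prices = pricing_sheets.get(asset_type) or finance_listing.get(asset_type)
    return min(prices) if prices else None

def estimated_missing_value(missing_asset_types, pricing_sheets, finance_listing):
    """Sum the lowest available unit cost over all missing assets;
    unpriced items are skipped, understating the total."""
    total, unpriced = 0.0, 0
    for asset_type in missing_asset_types:
        price = lowest_unit_cost(asset_type, pricing_sheets, finance_listing)
        if price is None:
            unpriced += 1
        else:
            total += price
    return total, unpriced

# Hypothetical example: two assets valued at their lowest unit cost and
# one (a frame saver with no listed price) left out of the estimate.
sheets = {"laptop": [1200.00, 1450.00]}
finance = {"server": [8300.00]}
print(estimated_missing_value(["laptop", "server", "frame saver"], sheets, finance))
```

Skipping unpriced items, as the last function does, is a deliberately conservative choice: it can only understate the estimated value of missing equipment, which is why we describe our $7.6 million estimate as understated.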
The scope of our review covered all assets purchased from the inception of the Trilogy contracts (May 2001) through December 2004 and included Trilogy assets that were either purchased directly by FBI or by one of the two primary Trilogy contractors, CSC and SAIC. We provided FBI a draft of this report and GSA a draft of applicable sections of this report for review and comment. The FBI Finance Division Acting Assistant Director and the GSA Acting Administrator provided written comments, which are reprinted in appendixes III and IV, respectively. FBI and GSA also provided technical comments, which we have incorporated as appropriate. We also discussed with Trilogy contractors any findings that related to them. We performed our work in accordance with generally accepted government auditing standards in Washington, D.C., and at two FBI field sites and various other GSA and contractor locations in Virginia from May 2004 through December 2005. 1. We referred to the Department of Justice Office of Inspector General report only to provide background information related to previously reported issues with the Trilogy project. 2. See the "Agency Comments and Our Evaluation" section. 3. Processing invoices in a timely manner, as envisioned by the Prompt Payment Act, does not lessen the government's responsibility to verify costs billed by contractors. It is conceivable that the essential validation work could have been performed immediately after payment and any adjustments to correct prior billing errors could have been made to future invoices. 4. No documentation of any such inquiries was provided to support the General Services Administration's (GSA) comment. Documenting such inquiries allows a subsequent reviewer to draw similar conclusions and would be beneficial to any subsequent audit, including by the Defense Contract Audit Agency (DCAA). 5. Contrary to GSA's comment, the review team—Federal Bureau of Investigation (FBI), GSA, and Mitretek—approved Computer Sciences Corporation (CSC) invoices that lacked information required by its task order, including employee billing rates and detail for subcontractor labor. We were not provided documentation indicating that any CSC invoice had been rejected. 6. While the review team compared billed labor rates against Millennia ceiling rates for certain labor costs, it did not evaluate labor rates compared to ceiling rates for subcontractor labor, which represented about $163 million of Trilogy costs. Had the review team reviewed labor charges more thoroughly, it might have identified the potential overcharging of labor rates discussed in this report related to DynCorp Information Systems (DynIS). 7. According to the Federal Acquisition Regulation (FAR), it is the contractor's responsibility to maintain supporting documentation for costs billed, including subcontractor labor costs. 8. Based on our discussion with on-site members of the review team, CSC travel vouchers were not obtained to review amounts billed on travel invoices. Had the vouchers been reviewed, the review team would have had a basis for questioning the first-class and excessive airfare costs we identified. 9. The travel administrator's obligation to obtain the best fare does not relieve the government of its responsibility to review travel costs.
In addition, we noted instances where the itinerary from the travel administrator indicated that a full-fare ticket was obtained at the traveler's request, even though the ticket cost more than twice as much as the lowest logical fare that was also noted on the itinerary. 10. The approval process discussed by GSA relates to the travel authorization, which was the request to travel. We found that the review team lacked an adequate process to review travel vouchers, which include the traveler's receipts, to confirm that the authorized trips were taken and that the costs were in accordance with applicable travel regulations. Also see comments 8 and 9. 11. The next sentence of the relevant section of the FAR cited by GSA states, "However, in order for airfare costs in excess of the above standard airfare to be allowable, the applicable condition(s) set forth above must be documented and justified." No such documentation was provided to us for any of the first-class or other excessive airfares we identified. 12. Our report stated that other direct costs (ODC) were paid without validation of the actual amounts included in the invoices and that the review team relied on the contractors to obtain purchase orders for ODC charges. It further stated that neither GSA, FBI, nor Mitretek performed procedures to ensure that equipment billed by the contractors was actually received before payment. 13. CSC ODC invoices lacked sufficient detail to validate amounts billed compared to what was approved, and we were not provided documentation indicating that such information was requested by the review team. Further, the CSC invoices did not include the detail necessary for the review team to specifically identify the items purchased. We also found that some assets were paid for before they were received and that FBI did not perform an overall reconciliation of total assets ordered and paid for to those received. 14. A GSA contracting officer's representative told us that he was aware of the informal extended work week policy agreement but could not provide documentation of the policy. 15. Our report stated that DynIS charged labor rates that may have exceeded the ceiling rates that GSA asserts were established pursuant to the task order. 16. Based on GSA's acceptance of our recommendations on page 1 of its comments, we assume that the intent was to state that "GSA accepts each of GAO's recommendations." Staff members who made key contributions to this report include Steven Haughton (Assistant Director), Marie Ahearn, Brooks Bare, Ed Brown, Marcia Carlsen, Richard Cambosos, Lisa Crye, Tyshawn Davis, Bonnie Derby, Abe Dymond, Lori Ryza, Kara Scott, Brooke Whittaker, and Matt Wood.
The Trilogy project--initiated in 2001--is the Federal Bureau of Investigation's (FBI) largest information technology (IT) upgrade to date. While ultimately successful in providing updated IT infrastructure and systems, Trilogy was not a success with regard to upgrading FBI's investigative applications. Further, the project was plagued with missed milestones and escalating costs, which eventually totaled nearly $537 million. In light of these events, Congress asked GAO to determine whether (1) internal controls provided reasonable assurance that improper payment of unallowable contractor costs would not be made or would be detected in the normal course of business, (2) payments to contractors were properly supported as a valid use of government funds, and (3) FBI maintained proper accountability for assets purchased with Trilogy project funds. FBI's review and approval process for Trilogy contractor invoices, which included a review role for the General Services Administration (GSA) as contracting agency, did not provide an adequate basis to verify that goods and services billed were actually received and that the amounts billed were appropriate, leaving FBI highly vulnerable to payments of unallowable costs. This vulnerability is demonstrated by FBI's payment of about $10.1 million in questionable contractor costs we identified using data mining, document analysis, and other forensic auditing techniques. These costs included first-class travel and other excessive airfare costs, incorrect charges for overtime hours, potentially overcharged labor rates, and charges for which the contractors could not provide adequate supporting documentation to substantiate the costs purportedly incurred. FBI also failed to establish controls to maintain accountability over equipment purchased for the Trilogy project. These control lapses resulted in more than 1,200 missing pieces of equipment valued at approximately $7.6 million that GAO identified as part of its review. In addition, in its own inventory counts, FBI identified 37 pieces of Trilogy equipment valued at approximately $167,000 that had been lost or stolen.
Overall, our analysis of the $33 billion in reported excess commodity disposals in fiscal years 2002 through 2004 showed that $4 billion related to items in new, unused, and excellent condition. Of the $4 billion, we determined that $3.5 billion (88 percent) reflected substantial waste and inefficiency because new, unused, and excellent condition items were being transferred or donated outside of DOD, sold on the Internet for pennies on the dollar, or destroyed rather than being reutilized. As discussed in our report, our analysis of $18.6 billion in fiscal year 2002 and 2003 excess commodity disposal activity identified $2.5 billion in excess items that were reported to be in new, unused, and excellent condition (A condition). Although federal regulations and DOD policy require reutilization of excess property in good condition, to the extent possible, our analysis showed that DOD units reutilized only $295 million (12 percent) of these items. The remaining $2.2 billion (88 percent) of the $2.5 billion in disposals of A-condition excess commodities were not reutilized but instead were transferred, donated, sold, or destroyed. Similarly, our analysis of $14.3 billion in fiscal year 2004 disposal activity identified $1.5 billion in excess commodity items that were reported to be in A condition. Of the $1.5 billion in A-condition excess items, DOD units reutilized $200 million (13 percent) and transferred, donated, sold, or destroyed the remaining $1.3 billion (87 percent). We also found that during fiscal years 2002 and 2003, DOD purchased at least $400 million (over $200 million each year) of identical items instead of reutilizing available excess items in A condition. To illustrate continuing reutilization program waste and inefficiency, we purchased several new and unused excess DOD commodity items that were being purchased by DLA, were currently in use by the military services, or both. Our analysis of transaction data and our tests of controls for inventory accuracy indicate that the magnitude of waste and inefficiency could be much greater because military units improperly downgraded the condition codes of excess items in new, unused, and excellent condition to unserviceable and failed to consistently record the national stock numbers (NSN) needed to identify like items. DRMS is responsible for disposing of unusable items, often referred to as "junk," as well as facilitating the reutilization of usable items. Although the majority of DOD's excess property disposals relate to items in unserviceable condition, DOD also disposed of billions of dollars of serviceable items, including excess commodities in A condition, from fiscal years 2002 through 2004. Our analysis of DRMS data showed that $28.1 billion of the $33 billion in excess DOD commodity disposals from fiscal year 2002 through fiscal year 2004 consisted of items listed in unserviceable condition, including items needing repair, items that were obsolete, and items that were downgraded to scrap. The remaining $4.9 billion in excess commodity disposals consisted of items reported to be in serviceable condition, including $4 billion in excess commodities reported to be in A condition. However, of the $4 billion, DOD units reutilized only $495 million (12 percent) of these items during the 3-year period. The data reliability issues noted above and our interviews, case studies, and statistical sample results indicate that the magnitude of waste and inefficiency associated with disposals of A-condition items could be much greater.
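The purchase-to-excess comparison described above can be sketched in a few lines. The record layouts here are hypothetical, the matching ignores the timing of each purchase, and unrecorded NSNs simply fail to match, which is one reason the $400 million figure is likely understated.

```python
# Minimal sketch of matching DLA purchases against excess A-condition
# inventory by national stock number (NSN). Record layouts are hypothetical,
# and timing is simplified: the actual analysis compared purchases to excess
# items that were available at the same time.

def unnecessary_purchases(purchases, excess_inventory):
    """Flag purchases of items for which identical (same-NSN) excess items
    in A condition were available for reutilization."""
    available = {}  # NSN -> quantity of A-condition excess on hand
    for item in excess_inventory:
        if item["condition"] == "A" and item["nsn"]:  # missing NSNs cannot match
            available[item["nsn"]] = available.get(item["nsn"], 0) + item["qty"]

    flagged = []
    for p in purchases:
        on_hand = available.get(p["nsn"], 0)
        if on_hand:
            matched = min(on_hand, p["qty"])
            flagged.append((p["nsn"], matched, matched * p["unit_cost"]))
            available[p["nsn"]] -= matched
    return flagged

# Hypothetical example: 40 of 50 purchased units could have been filled
# from A-condition excess stock carrying the same NSN.
excess = [{"nsn": "2610-00-123-4567", "condition": "A", "qty": 40}]
buys = [{"nsn": "2610-00-123-4567", "qty": 50, "unit_cost": 50.50}]
print(unnecessary_purchases(buys, excess))  # [('2610-00-123-4567', 40, 2020.0)]
```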
As shown in figure 1, items that were not reutilized by DOD were transferred to federal agencies or special programs, donated to states, sold to the public, or destroyed by demilitarization or through scrap and hazardous materials contractors. We found that the percentage of DOD reutilization of excess property was higher in fiscal year 2002 than in fiscal years 2003 and 2004. According to DRMO officials, reutilization was higher in fiscal year 2002 because excess items were pulled back to support deployment to Afghanistan and Iraq. In fiscal year 2003, procurement to support the war on terrorism began to keep up with the demand for supplies, and reutilization of excess property decreased. DRMS officials attributed the fiscal year 2004 increase in DOD reutilization to the establishment of the Joint Services Nuclear, Biological, and Chemical Equipment Assessment Program (JEAP) to inspect excess military clothing, tents, and other textile items and reissue items in good condition. The increase in disposal activity in fiscal years 2003 and 2004 relates to turn-ins of property used in support of Operation Enduring Freedom and Operation Iraqi Freedom. Table 1 shows disposal activity related to A-condition excess commodities for fiscal years 2002 through 2004. Our analysis of fiscal year 2002 and 2003 DLA commodity purchases and DRMS excess property inventory data identified numerous instances in which the military services ordered and purchased items from DLA at the same time that identical items—items with the same NSN—reported to be in new, unused, and excellent condition were available for reutilization. We found that DOD purchased at least $400 million of identical items during fiscal years 2002 and 2003—over $200 million each year—instead of using available excess A-condition items. The magnitude of unnecessary purchases could be much greater because NSNs needed to identify identical items were not recorded for all purchase and turn-in transactions. For example, we determined that DLA buyers and item managers did not record NSNs for 87 percent (about $4.9 billion) of the nearly $5.7 billion in medical commodity purchases by military units during fiscal years 2002 and 2003. Further, as discussed later, improper downgrading of condition codes to unserviceable could also result in an understatement of the magnitude of unnecessary purchases. While our statistical tests found a few instances of inaccurate serviceable condition codes, most condition code errors related to the improper downgrading of condition to unserviceable. To determine whether the problems identified in our analysis were continuing, we monitored DRMS commodity disposal activity from May 2004 through April 2005. We found that DOD continued to transfer, donate, and sell excess A-condition items instead of reutilizing them. To illustrate these problems, we requisitioned several excess new and unused items at no cost and purchased other new and unused commodities at minimal cost. We based our case study selections on new, unused items that DOD continued to purchase. As discussed in our report, we used the GSA Federal Disposal System, available to all federal agencies, to requisition several new and unused excess DOD commodity items during our audit in fiscal year 2004 and the first half of fiscal year 2005, including a medical instrument chest, two power supply units, and two circuit cards, at no charge. These items had an original DOD acquisition cost of $55,817, and we paid only $5 shipping cost to obtain all of them.
We also purchased, at minimal cost, several excess DOD commodity items in new and unused condition over the Internet at govliquidation.com—the DRMS liquidation contractor's Web site. The items we purchased included tents, boots, three gasoline burners (stove/heating unit), a medical suction apparatus, and bandages and other medical supply items with a total reported acquisition cost of $12,310. We paid a total of $1,466 for these items, about 12 cents on the dollar, including buyer's premium, tax, and shipping cost. From December 2004 through April 2005, we purchased several new, unused excess DOD commodity items, including over 8,000 military badges, medals, and insignias; 8 new, unused Cooper Trendsetter SE tires; and Class A military uniforms. Although these items had a total reported acquisition cost of $11,522, we paid a total of $1,427 for these items, including tax, buyer's premium, and shipping cost. New, unused DOD badges, medals, and insignias. On December 6, 2004, we purchased 8,526 excess DOD badges, medals, and insignias that are used to indicate rank, the unit or program to which a military member or civilian employee is assigned, or service awards. These items had a reported acquisition cost of $9,518. We paid a total of $1,102, including buyer's premium and tax, for these items—about 12 cents on the dollar. Units and program areas designated by the badges and insignias include Army Rangers, Mountain, and Airborne; Air Force Air Traffic Controller; and DOD Scientific Consultant. Rank insignias include Air Force Chief Master Sergeant and Air Force Technical Sergeant; Navy Captain, Midshipman Lieutenant, and Midshipman Lieutenant Commander; and Army Command Sergeant Major and Master Sergeant. The listed condition code of these items ranged from A4 (serviceable, usable condition) to H7 (unserviceable, condemned condition). However, our inspection of the badges and insignias that we purchased showed that none of them had been used, and many of them were in original manufacturer packages. Further, DOD is continuing to purchase and use most of these items. The photograph in figure 2 shows examples of some of the badges, medals, and insignias that we purchased. New, unused excess DOD tires. We purchased eight new, unused Cooper Trendsetter SE 13-inch steel-belted radial tires on February 18, 2005. According to the Army project officer, these tires are used on over-the-road passenger vehicles, and one customer ordered them for use on a forklift. DOD units are continuing to purchase and use these same tires. The most recent purchase of 50 of these tires was made in April 2005. The eight tires had a total reported acquisition value of $404. We paid $113 for the tires, including buyer's premium and tax, and an additional $154 shipping cost. The tires were listed in A4 condition (usable, with some wear). However, we found that the tires still had manufacturer labels on the tread and blue paint over the whitewalls, indicating that they were new and unused. The tires were turned in as excess by the North Island Naval Air Station's Aircraft Intermediate Maintenance Detachment. According to the Army Tank Automotive and Armaments Command Project Officer, the NSN listed on the turn-in document was incorrect. We found that inaccurate item descriptions, including NSNs, prevent items from being selected for reutilization. Figure 3 is a photograph of the excess DOD tires that we purchased over the Internet in February 2005. New, unused Class A military uniforms.
We purchased several Class A military uniforms over the Internet on April 7, 2005. The uniforms were listed as being in H7 (unserviceable, condemned) condition. Although the uniforms that we purchased over the Internet from DOD's liquidation contractor had a listed acquisition cost of $1,600, we paid a total of $58, including buyer's premium and sales tax, to acquire them—about 4 cents on the dollar. After receiving our purchase, we determined that we had in fact purchased 27 new, unused uniform coats; 4 pairs of new, unused uniform trousers; 54 jackets in excellent condition; 45 pairs of trousers in excellent condition; and 5 women's uniform skirts and 1 pair of slacks in excellent condition. DOD is continuing to purchase and issue two of the four types of trousers that we purchased over the Internet. According to the DLA clothing and textiles product manager for dress uniforms, the Army switched from a matte finish gold button to a shiny sta-brite™ gold button on October 1, 2003. Although the Army ordered and paid for the new replacement buttons for existing dress uniforms, it later determined that hiring a contractor to replace the buttons or sending the coats back to the manufacturers for button replacement would be very expensive. The Army decided to use the coats with the older buttons to fill Reserve and Junior Reserve Officer Training Corps (ROTC and JROTC) orders until current supplies are exhausted. However, our monitoring of DOD liquidation sales found that many Class A uniforms with the older buttons are being sold over the Internet for pennies on the dollar instead of being issued to ROTC and JROTC. In addition, we observed the new sta-brite™ buttons being sold over the Internet in May 2005. Figure 4 is a photograph of one of the excess new, unused Class A uniforms with the matte finish buttons that we purchased over the Internet in April 2005. We also purchased an earlier sales lot of the same Class A military uniforms over the Internet on February 16, 2005. Our winning bid was $81 for 166 uniform jackets and trousers, which had a listed acquisition cost of $10,424. However, when we arrived at the Great Lakes sales location near Chicago to pick up the uniforms, DOD liquidation contractor personnel were unable to locate them. Contractor personnel explained that our purchase may have been mistakenly given to another customer. To compensate, we were offered other items available for sale. However, these items were not in A condition. Instead of accepting them, we requested and received a refund. As discussed later, another of our Internet purchases was damaged due to a leaky roof at the Norfolk liquidation sales location. The $3.5 billion in DOD waste and inefficiency that we identified in our analysis of fiscal year 2002 through 2004 excess property disposal activity stemmed from management control breakdowns across DOD. Key factors in the overall DRMS management control environment that contributed to waste and inefficiency in the reutilization program included (1) unreliable excess property inventory data; (2) inadequate DRMS oversight, accountability, physical control, and safeguarding of property; and (3) outdated, nonintegrated excess inventory and supply systems. In addition, for many years our audits of DOD inventory management have reported that continuing unresolved logistics management weaknesses have resulted in DOD purchasing more inventory than it needed.
DOD reutilization program waste and inefficiency is symptomatic of the inventory and supply chain management issues that have been considered high risk by GAO since 1990. Our analysis of fiscal year 2002 and 2003 excess commodity turn-ins showed that $1.4 billion (40 percent) of the $3.5 billion of A-condition excess items consisted of new, unused DLA supply depot inventory. Our analysis of fiscal year 2004 excess commodity turn-ins showed that $1.3 billion (48 percent) of the $2.7 billion of A-condition excess items consisted of new, unused DLA supply depot inventory. Our interviews, case studies, screening visits, and statistical tests of excess commodity inventory led us to conclude that unreliable data are a key cause of the ineffective excess property reutilization program. GAO's internal control standards require assets to be periodically verified against control records. In addition, DRMS policy requires DRMO personnel to verify turn-in information, including item description, quantity, condition code, and demilitarization code, at the time excess property is received and entered into DRMO inventory. However, we found that DRMS and DLA supply depot management have not enforced this requirement. Further, Army, Navy, and Air Force officials told us that unreliable data are a disincentive to reutilization because of the negative impact on their operations. DLA item managers told us that because military units have lost confidence in the reliability of data on excess property reported by DRMS, for the most part they have requested purchases of new items instead of reutilizing excess items. Military users also cited examples of damage to excess items during shipment that rendered the items unusable. In addition, other reutilization users advised us of problems related to differences between the quantities and types of items ordered and those received that could have a negative impact on their operations. Military service officials also told us about the types of problems they have experienced with property acquired from DRMOs. Army, Navy, and Air Force medical officials, in particular, told us that they do not reutilize excess medical items stored at DRMOs because items can become damaged during shipment to and movement within the DRMO warehouses. Other users of excess DOD property, including special program, federal agency, and state officials, gave us numerous examples of problems they encountered with requisitions of excess DOD property. Several officials noted that these problems have caused them to lose confidence in the reutilization process. The following examples are typical of what we were told. An Army official told us that he requisitioned 20 excess padlock sets. When he received the padlocks, the keys were missing. After his second attempt to requisition excess DOD padlocks with keys failed, he threw the padlocks in a dumpster because they were useless to him and it would cost too much to return them to the DRMO. An Army official told us that items may be in new, unused condition when they leave the DRMO but are damaged during shipment. The official cited his experience with an order of thin copper sheets for use in testing electronic equipment. The sheeting was shipped on a pallet that was too small, and other material was stacked on top of it. A Fairchild Air Force Base official told us that the 92nd Logistics Readiness Squadron requisitioned 80 sleeping bags from the Hawaii DRMO but received only 56 of them.
The official told our investigators that the sleeping bags were sealed in heavy-duty plastic bags and were in excellent condition. However, some of the boxes the sleeping bags were shipped in had been damaged by rain and handling by the time he received them. Our statistical tests found significant problems with controls for assuring the accuracy of excess property inventory. Estimated error rates for the five DRMOs we tested ranged from 8 percent at one DRMO to 47 percent at another, and estimated error rates for the five DLA supply depots we tested ranged from 6 percent to 16 percent, including errors related to the physical existence of turn-ins and condition code. Our condition code tests determined whether the condition code was accurately recorded as serviceable or unserviceable. We estimated that errors related to condition code accuracy ranged from 6 percent to 26 percent at the five DRMOs we tested. Overall, we found that DRMO errors were caused by erroneous turn-in documentation prepared by military units and the failure of DRMO personnel to verify turn-ins at the time they were received and correct errors before recording the receipts in excess inventory. Most DLA supply depot errors related to untimely recording of transactions for changes in inventory status and inaccurate quantities. We did not find problems with condition codes at the DLA depots.

An example from our Norfolk DRMO statistical sample illustrates how erroneous inventory data can result in waste and inefficiency. On June 30, 2004, the Navy’s Environmental Health Center in Portsmouth, Virginia, turned in six new, unused Level III biological safety cabinets with a total acquisition cost of $120,000. The Navy unit turned in the Level III cabinets as excess because of erroneous specifications that resulted in ordering cabinets that were too large and cumbersome to meet deployment needs. The Navy unit improperly used a local stock number (LSN) to describe the safety cabinets on the turn-in document and a demilitarization code that indicated there were no restrictions on the disposal of these items. However, Level III safety cabinets are subject to trade security controls, and therefore they are required to be identified by an NSN or other information that accurately describes the item, the end item application, and the applicable demilitarization code. Further, the DOD risk assessment performed in response to a recommendation in our November 2003 report called for Level III biological safety cabinets to be destroyed when no longer needed by DOD. Although Norfolk DRMO personnel advised DRMS officials of the need to correct the turn-in document errors in July 2004, as of the end of our audit in February 2005, the information had not been corrected and the safety cabinets had not been posted to the DRMS reutilization Web page to indicate that they were available for reutilization. Our in-house scientists, who often meet with DOD scientists at the U.S. Army Biological Warfare Research Center at the Dugway Proving Ground, learned that the DOD scientists were planning to purchase a Level III safety cabinet and informed them of the availability of the six Level III safety cabinets at the Norfolk DRMO. The DOD scientists told us that they were unaware that the Navy had declared the safety cabinets excess and said that they could use all six of them. We subsequently confirmed that, as a result of our efforts, the DOD scientists at Dugway had requisitioned the six Level III safety cabinets for reutilization.
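The error rates cited earlier in this section were derived from statistical samples of inventory records. Our full sampling design is not reproduced here; the sketch below, with invented counts, shows in general terms how an error rate and its sampling uncertainty might be estimated from a simple random sample.

```python
import math

def estimate_error_rate(sample_size: int, errors_found: int, z: float = 1.96):
    """Estimate an inventory error rate from a simple random sample.

    Returns the point estimate and a normal-approximation 95 percent
    confidence interval. Illustrative only; the report's statistical
    tests used their own sampling design.
    """
    p = errors_found / sample_size                      # point estimate
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical example: 47 errors in a sample of 100 turn-in records
rate, low, high = estimate_error_rate(100, 47)
print(f"estimated error rate: {rate:.0%} (95% CI: {low:.0%}-{high:.0%})")
```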
We found hundreds of millions of dollars in potential waste and inefficiency associated with the failure to safeguard excess property inventory from loss, theft, and damage. As previously discussed, our statistical tests of excess commodity inventory at five DRMOs and five DLA supply depots identified significant numbers of missing items. Because the DRMOs and DLA supply depots had no documentation to show that these items had been requisitioned or sent to disposal contractors, they cannot provide assurance that these items have not been stolen. According to DRMS data, DRMOs and DLA supply depots reported a total of $466 million in excess property losses related to damage, missing items, theft, and unverified adjustments over a period of 3 years. However, as discussed below, we have indications that this number is not complete. Also, because nearly half of the missing items reported involved military and commercial technology that required control to prevent release to unauthorized parties, the types of missing items were often more significant than the number and dollar value of missing items. Weaknesses in accountability that resulted in lost and stolen property contributed to waste and inefficiency in the excess property reutilization program. As shown in table 2, our analysis of reported information on excess property losses at DRMOs and DLA supply depots found that reported losses for fiscal years 2002 through 2004 totaled $466 million. Because 43 percent of the reported losses related to military technology items that required demilitarization controls, these weaknesses also reflect security risks. GAO Standards for Internal Control in the Federal Government require agencies to establish physical control to secure and safeguard assets, including inventories and equipment, that might be vulnerable to risk of loss or unauthorized use. Our investigations of reported losses found that the failure to verify and accurately document transactions and events at the beginning of the disposal process and to report and investigate losses as they occur obscures or eliminates the audit trail. Weaknesses in accountability leave DOD vulnerable to the risk of theft, fraud, waste, and abuse with little risk of detection.

DRMO losses. Our statistical samples identified missing turn-ins at two of the five DRMOs we tested and missing quantities at all five DRMOs tested, including many items that were in new, unused, and excellent condition. Because DRMO officials did not have documentation to show whether these items had been reutilized, transferred, sold, or destroyed, there is no assurance of whether the missing items reflected bookkeeping errors or theft. Missing items in our statistical samples included turn-ins of 72 chemical and biological protective suits, 21 pairs of chemical and biological protective gloves, 47 wet weather parkas that were subject to demilitarization controls, 7 sleeping bags, a cold weather coat, computer equipment, and various other items. Reported DRMO losses included 76 units of body armor, 75 chemical and biological protective suits (in addition to those identified in our Columbus DRMO sample), 5 guided missile warheads, and hundreds of military cold weather parkas and trousers and camouflage coats and trousers. Three DRMOs—Kaiserslautern, Meade, and Tobyhanna—accounted for $840,147, or about 45 percent, of the nearly $1.9 million in reported fiscal year 2004 losses of military clothing and equipment items requiring demilitarization.
Our follow-up investigations found a pervasive lack of physical accountability over excess inventory, which leaves DOD vulnerable to the risk of theft, fraud, waste, and abuse. In many cases, it is not possible to determine whether discrepancies represent sloppy recordkeeping or the loss or theft of excess property because of the failure to verify turn-in documents and correct errors at the time excess items were received at the DRMOs. In the case of our Columbus DRMO sample, we found that inventory records were not adjusted for missing quantities in our sample. Instead, DRMO personnel recorded the entire amount of the listed quantities as being transferred to either the liquidation sales contractor or the Joint Service Nuclear Biological and Chemical Equipment Assessment Program (JEAP) for inspection and reissue of military clothing and equipment. Our review of transaction data for Columbus DRMO transfers showed that JEAP did not confirm most of the items reported as transferred. For example, JEAP confirmed receiving only 7 of the 17 turn-ins of clothing and textile items. Further, the Columbus DRMO recorded a transaction to show that the 72 chemical and biological protective suits identified as missing during our statistical tests of Columbus DRMO inventory were transferred to JEAP on November 10, 2004. However, our follow-up with JEAP officials found that they have no record of receiving the protective suits. The Columbus DRMO’s apparent manipulation of the inventory data avoided reporting the missing items as losses. Our follow-up investigations of other selected DRMO losses found the following.

An Air Force turn-in of 75 chemical and biological protective suits was received and placed in the Shaw RIPL (a receipt-in-place location under the authority of the Jackson DRMO) warehouse on May 28, 2002, and subsequently disappeared. DRMO personnel told DRMS investigators that the 75 protective suits may have been included in a November 15, 2002, shipment to the Jackson DRMO in South Carolina. However, because DRMO personnel recorded box counts instead of turn-in document numbers and item counts, there is no detailed record of the items that were shipped between the two excess property warehouses.

Twenty units of body armor reported lost at the Meade DRMO had initially been ordered by Israel on November 8, 2000. Our investigators confirmed that the body armor was never picked up for shipment to Israel. According to the loss report, the items were relocated from the shipping area to the demilitarization storage area of the DRMO on May 8, 2002. A loss investigation was initiated by the Area Manager for the Meade DRMO in March 2004. However, because the Meade DRMO contractor had improperly destroyed inventory records after 2 years, attempts to determine the events surrounding the loss were fruitless.

Our investigation of 18 reports on a total of 52 units of body armor missing from the Hood DRMO during fiscal years 2002 and 2003 determined that these items were stored outside in an unsecure area, resulting in the theft of at least 48 units of body armor. A DRMS investigative report noted that items requiring demilitarization had been stored in this area over a 2-year period, even though the security fence had barbed wire that was cut or missing and the high ground level outside the fence provided easy access. According to a DRMO official, a work order for the fence repair had been submitted but the repairs had not been made.
The Naval Operational Logistics Support Center-Ammo, which was responsible for a turn-in of guided missile warheads, the DRMO that received these items, and the Demilitarization Center each recorded a different quantity for the turn-in. However, the quantity discrepancies were not resolved at any point during the turn-in and disposal process. As a result, there is no audit trail to determine whether, where, when, or how the reported loss or a recordkeeping error occurred. For example, the Navy unit reported a turn-in of 24 warheads that had been used in testing but were certified as inert. DRMO personnel counted canisters and loose components and determined there were 32 warheads. The Anniston Demilitarization Center reported that a total of 27 warheads were received for destruction.

DLA supply depot losses. Our statistical samples showed missing items at four of the five DLA supply depots that we tested. Because depot officials did not have documentation showing that these items had been reutilized or sold, there is no assurance that the missing items did not relate to theft. Missing items in our DLA depot statistical samples included several sensitive items, such as classified radio frequency amplifiers and circuit boards, aircraft parts, and computer equipment that required trade security or demilitarization controls. We obtained DRMS data on DLA supply depot reports of excess property losses, including missing and damaged property and unverified adjustments. We investigated reported losses of selected aircraft parts at the two DLA supply depots—Oklahoma City and Warner Robins—that reported the largest amount of depot losses. DLA Directive 5025.30, DLA One Book, includes a section on Inventory Adjustment Research (dated October 21, 2004), which sets inventory accuracy goals for DLA supply depots and requires causative research—an in-depth investigation—of adjustments for selected items and suspected fraud, waste, and abuse to determine why they occurred. A Financial Liability Investigation of Property Loss is required if the adjustment meets specific criteria, including (1) gains or losses of classified or sensitive material; (2) an adjustment in excess of $2,500 for pilferable material; and (3) a loss where there is a suspicion of fraud, theft, or negligence. However, we found that DLA depot personnel did not thoroughly investigate most adjustments related to the reported losses of sensitive items with demilitarization controls that we selected for investigation. Supply depot officials told us that they assumed the losses represented inventory recordkeeping errors, even though causative research results were inconclusive.

In addition to reported losses, we found significant instances of property damage at DRMS liquidation contractor sales locations. Because the terms and conditions of liquidation sales specify that all property is sold “as is” and assign all risk of loss to buyers, the buyers have no recourse when property is damaged after being sold or is not in the advertised condition. As a result, customers who have lost money on bids related to damaged and unusable items might not bid again, or they may scale back the amount of their bids in the future, affecting both the volume of excess DOD items liquidated and sales proceeds. On October 7, 2004, we purchased numerous usable items in original manufacturer packaging, including 35 boxes of bandages, 31 boxes of gauze sponges and surgical sponges, 12 boxes of latex gloves, and 2 boxes of tracheostomy care sets.
We paid a total of $167, including buyer’s premium, tax, and transportation cost, for these items, which had a reported total acquisition cost of $3,290. However, these items had become damaged due to rain and a leaky roof at the Norfolk, Virginia, liquidation sales location. The property damage that we observed at liquidation contractor sales locations was primarily the result of DRMS management decisions to send excess DLA supply depot property to two national liquidation sales locations without assuring that its contractor had sufficient human capital resources and warehouse capacity to process, properly store, and sell the volume of property received. For example, excess DOD property sent to the Huntsville, Alabama, liquidation sales location was stored outside, unprotected from weather, including sun, wind, rain, and hurricanes, during the summer and fall of 2004. The liquidation contractor’s failure to record these items in sales inventory at the time they were received, combined with lost and illegible property labels due to weather damage, resulted in a significant loss of accountability for many of these items.

Inefficient, nonintegrated excess inventory and supply management systems lack the controls necessary to prevent waste and inefficiency in the reutilization program. For example, because the DRMS Automated Inventory System (DAISY) and DLA’s Standard Automated Materiel Management System (SAMMS) are outdated and nonintegrated, they do not share information necessary to (1) identify and alert DLA item managers of excess property that is available to fill supply orders and (2) prevent purchases of new items when A-condition excess items are available for reutilization. We have continued to report that long-standing weaknesses with DLA’s inventory systems related to outdated, nonintegrated legacy systems and processes result in DOD and military units not knowing how many items they have and where these items are located. DLA has acknowledged serious deficiencies in its automated inventory management systems. Although DLA has an effort under way to replace SAMMS with the Business Systems Modernization (BSM) and DRMS has a Reutilization Modernization Program (RMP) under way to upgrade DAISY, so far these have been separate, uncoordinated efforts, and they do not adequately address identified process deficiencies. While the systems improvement efforts are intended to integrate supply and excess inventory systems to support the reutilization program, they are not focused on resolving long-standing problems related to unreliable condition code data and incomplete data on NSNs. The accuracy of these two data elements is critical to the ability to identify like items that are available for reutilization at the time purchases are made.

To effectively address problems with reutilization program waste and inefficiency, DRMS and DLA will need to exercise strong leadership and accountability to improve the reliability of excess property data; establish effective oversight and physical inventory control; and develop effective integrated systems and processes for identifying and reutilizing excess property. In addition, the military services will need to provide accurate information on excess property turn-in documentation, particularly data on condition codes and item descriptions, including the NSNs that are key to identifying items for reutilization.
Improved management of DOD’s excess property and a strong reutilization program would help save taxpayers hundreds of millions of dollars annually. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. We would be pleased to answer any questions that you may have. For more information regarding this testimony, please contact Gregory D. Kutz at (202) 512-9505 or [email protected], or Keith A. Rhodes at (202) 512-6412 or [email protected]. Individuals making key contributions to this testimony included Mario Artesiano, Stephen P. Donahue, Gayle L. Fischer, Jason Kelly, Richard C. Newbold, Ramon Rodriguez, and John Ryan. Numerous other individuals contributed to our audit and investigation and are listed in our companion report. Technical expertise was provided by Sushil K. Sharma, PhD, DrPH. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO was asked to assess the overall economy and efficiency of the Department of Defense (DOD) program for excess property reutilization (reuse). Specifically, GAO was asked to determine (1) whether and to what extent the program included waste and inefficiency and (2) root causes of any waste and inefficiency. GAO was also asked to provide detailed examples of waste and inefficiency and the related causes. GAO's methodology included an assessment of controls, analysis of DOD excess inventory data, statistical sampling at selected sites, and detailed case studies of many items. DOD does not have management controls in place to assure that excess inventory is reutilized to the maximum extent possible. Of the $33 billion in excess commodity disposals in fiscal years 2002 through 2004, $4 billion was reported to be in new, unused, and excellent condition. DOD units reutilized only $495 million (12 percent) of these items. The remaining $3.5 billion (88 percent) includes significant waste and inefficiency because new, unused, and excellent condition items were transferred and donated outside of DOD, sold for pennies on the dollar, or destroyed. DOD units continued to buy many of these same items. GAO identified at least $400 million in fiscal year 2002 and 2003 commodity purchases for which identical new, unused, and excellent condition items were available for reutilization. GAO also identified hundreds of millions of dollars in reported lost, damaged, or stolen excess property, including sensitive military technology items, which contributed to reutilization program waste and inefficiency. Further, excess property improperly stored outdoors for several months was damaged by wind, rain, and hurricanes. GAO ordered and purchased at little or no cost several new and unused excess commodities that DOD continued to buy and utilize, including tents, boots, power supplies, circuit cards, and medical supplies. GAO paid a total of $2,898, including tax and shipping cost, for these items, which had an original DOD acquisition cost of $79,649. Root causes for reutilization program waste and inefficiency included (1) unreliable excess property inventory data; (2) inadequate oversight and physical inventory control; and (3) outdated, nonintegrated excess inventory and supply management systems. Procurement of inventory in excess of requirements also was a significant contributing factor. Improved management of DOD's excess property could save taxpayers at least hundreds of millions of dollars annually.
When selecting contractors, the FAR generally requires agencies to consider past performance as a factor in most competitive procurements. During source selection, contracting officials often rely on various sources of past performance information, such as the prospective contractor’s performance on prior government or industry contracts for efforts similar to the government’s requirements, and the past performance information housed in the government-wide PPIRS database. The FAR generally requires agencies to document contractor performance for contracts and orders above the simplified acquisition threshold (FAR § 42.1502(a) and (b)). Currently, the dollar threshold for simplified acquisitions, with limited exceptions, is $150,000 (FAR § 2.101). The FAR has separate assessment thresholds of $650,000 and $30,000 for construction and architect-engineer services contracts, respectively (FAR § 42.1502(e) and (f)). Evaluations are entered through the Contractor Performance Assessment Reporting System (CPARS); managed by the Navy, the system incorporates processes and procedures for drafting and finalizing evaluations, which are described in the CPARS Guide. In May 2010, OFPP designated CPARS as the single government-wide system for entering evaluations, and by October 2010 all agencies had transitioned to using CPARS. In completing past performance evaluations, the assessing official rates the contractor on various elements such as quality of the product or service, schedule, cost control, management, and small business utilization. For each applicable rating element, the assessing official determines a rating based on definitions from the CPARS Guide that generally relate to how well the contractor met the contract requirements and responded to problems. In addition, for each rating element, a narrative is to provide support for the rating assigned. Once draft evaluations have been completed by the assessing official, the contractor is notified that the evaluation is available for review and comment through CPARS. After receiving and reviewing a contractor’s comments and any additional information, the assessing official may revise the evaluation and supporting narrative. If there is disagreement with the evaluation, the reviewing official—generally a government official at a level above the contracting officer—will review and finalize the evaluation. Section 806 of the National Defense Authorization Act (NDAA) for Fiscal Year 2012 required DOD to develop a strategy to ensure that evaluations in past performance databases used for making source selection decisions are complete, timely, and accurate. Its contractor comment process was also to be revised so that contractor past performance evaluations are posted to the databases used for source selection decisions no more than 14 days after the performance information is provided to the contractor. In June 2013, we reported on the status of DOD’s actions to improve the quality and timeliness of past performance information and implement provisions of the NDAA for Fiscal Year 2012. OFPP’s strategy to improve past performance information and respond to section 853 of the NDAA for Fiscal Year 2013 is to increase oversight of contractor performance evaluations, develop government-wide past performance guidance, and revise the FAR. Since 2009, OFPP has taken a number of actions, in conjunction with other organizations, to improve the amount and quality of past performance information available, including emphasizing reporting requirements; assessing the level of compliance and quality of evaluations; developing a compliance tracking tool; setting agency performance targets; consolidating systems used to enter past performance information; and developing government-wide past performance guidance.
In addition, OFPP worked with the FAR Council to revise the FAR to implement provisions of the NDAAs for Fiscal Years 2012 and 2013 related to assigning responsibility and accountability, implementing standards for complete evaluations, and ensuring submissions are consistent with award fee evaluations. Revisions by OFPP and the FAR Council to the timing of the contractor comment process in accordance with the acts became effective in July 2014. In 2009, the Deputy Administrator of OFPP issued a memorandum to Chief Acquisition Officers and Senior Procurement Executives emphasizing changes to the FAR, such as the requirements to submit contractor performance evaluations into PPIRS and to identify the agency officials who must prepare such evaluations. In addition, the memo outlined actions that agency officials must take to help implement these practices. OFPP also announced plans to conduct regular compliance assessments and quality reviews to ensure that agencies submit timely performance information to PPIRS on required contracts and provide clear, comprehensive, and constructive information useful for making future contract award decisions. In early 2011, an OFPP review highlighted the need to improve the quantity and quality of information in PPIRS. To see how well agencies managed their efforts to improve submission of past performance evaluations, OFPP assessed the compliance with reporting requirements and the quality of evaluations for the ten agencies that do the most contracting. OFPP found that agencies generally did not meet the requirement to evaluate contractor performance. OFPP’s comparison of data from the Federal Procurement Data System-Next Generation and PPIRS indicated that past performance evaluations were completed for only a small percentage of contracts requiring an evaluation, especially in civilian agencies. To assess the quality of the evaluations, OFPP reviewed a sample to see how well various rating elements were addressed. OFPP found that the evaluations generally lacked sufficient information—such as details about how the contractor exceeded expectations or corrected poor performance—to support the rating, or did not include a rating for all performance areas. To improve the collection of contractor past performance information, agencies were asked to review their past performance reporting guidance to ensure it contained key characteristics and to improve management controls by using several strategies to improve compliance and increase the quality of the evaluations. To increase management oversight of contractor performance evaluations, OFPP worked with the PPIRS program office to develop a compliance tracking tool within PPIRS for measuring and managing agency reporting efforts. This tool was made available to all agencies in early 2011. The compliance tool allows managers to monitor compliance at the department, agency, or contracting office level. In addition, the tool allows agency officials to identify the compliance of specific contracts or orders that meet the reporting criteria. In March 2013, OFPP issued a policy memo to establish annual past performance reporting compliance targets for Chief Financial Officer (CFO) Act agencies. In establishing these targets, OFPP reviewed the compliance rates of agencies and found that the level of compliance varied widely.
In order to make the compliance targets realistic and achievable, OFPP set differing fiscal year 2013 and 2014 targets by agency based on the agency’s level of compliance at the end of fiscal year 2012, with the expectation that all agencies will reach full compliance by the end of fiscal year 2015, as shown in table 1. To assist CFO Act agencies in meeting these annual targets, the policy memo required that all of them establish their past performance reporting baselines and set aggressive quarterly targets that reflect a strategy for meeting the annual performance targets. The memo also highlighted training opportunities for agencies’ acquisition workforces on documenting contractor performance. OFPP has also sought to improve contractor performance information and implement provisions of the NDAA for Fiscal Year 2013 by working with the General Services Administration’s Integrated Award Environment, the CPARS program office, and the FAR Council to consolidate systems for entering past performance information and to develop government-wide past performance guidance, enhance FAR requirements, and change the contractor comment process. To standardize the past performance documentation process, the OFPP Administrator identified CPARS as the government-wide system for collecting contractor performance information, and by October 2010 agencies using other systems had transitioned to CPARS. We previously reported that the various systems used to collect past performance information and the lack of standardized evaluation factors and rating scales limited the usefulness of the information in PPIRS. Because the CPARS Policy Guide was specific to DOD, OFPP worked with the Integrated Award Environment, the CPARS program office, and an interagency working group to update the guide. A government-wide CPARS Guide was released in November 2012. In addition, OFPP worked with the FAR Council to revise the FAR in September 2013 to enhance various elements of documenting contractor performance and implement provisions of the NDAA for Fiscal Year 2013 and the NDAA for Fiscal Year 2012. The FAR was revised and the CPARS Guide was updated to implement these acts as follows:

Standards for timeliness: The FAR does not include a timeframe for completing evaluations. However, the CPARS Guide includes a standard that evaluations should be completed within 120 days after the end of the evaluation period.

Standards for completeness: Evaluation factors for each assessment must include, at a minimum: quality of product or service; cost control (where applicable); schedule/timeliness; management or business relations; small business subcontracting (where applicable); and other (for example, late payments or nonpayment to subcontractors or tax delinquency).

Assigning responsibility for completeness of evaluations: The requirement that agency procedures identify roles and responsibilities was expanded to include preparing and reviewing evaluations. Also, the FAR now provides a default that the contracting officer is responsible for completing evaluations if agency procedures do not specify that role.

Management accountability: Agencies are required to evaluate compliance and assign responsibility and management accountability for the completeness of performance submissions.

Ensuring past performance submissions are consistent with award fee evaluations: Award and incentive fee evaluations and adjectival ratings are to be included as part of the past performance evaluations.
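The completeness standards above lend themselves to a simple structural check: every applicable rating element must carry both a rating and a supporting narrative. The sketch below illustrates the idea; the field names are hypothetical and do not reflect the actual CPARS data model.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative record mirroring the FAR minimum evaluation factors
# described above; field names are hypothetical, not the CPARS schema.
@dataclass
class PerformanceEvaluation:
    contract_number: str
    quality_of_product_or_service: str        # e.g., "Satisfactory"
    schedule_timeliness: str
    management_or_business_relations: str
    cost_control: Optional[str] = None        # where applicable
    small_business_subcontracting: Optional[str] = None  # where applicable
    other: Optional[str] = None               # e.g., tax delinquency
    narratives: dict = field(default_factory=dict)  # rating element -> support

    def is_complete(self) -> bool:
        """Every assigned rating must be supported by a narrative."""
        rated = {
            name for name in (
                "quality_of_product_or_service", "schedule_timeliness",
                "management_or_business_relations", "cost_control",
                "small_business_subcontracting", "other",
            ) if getattr(self, name) is not None
        }
        return rated <= set(self.narratives)

# Hypothetical usage
record = PerformanceEvaluation(
    contract_number="HYPOTHETICAL-0001",
    quality_of_product_or_service="Satisfactory",
    schedule_timeliness="Exceptional",
    management_or_business_relations="Satisfactory",
    narratives={
        "quality_of_product_or_service": "Met all requirements.",
        "schedule_timeliness": "Delivered early.",
        "management_or_business_relations": "Responsive to issues.",
    },
)
print(record.is_complete())   # True: every rating has a narrative
```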
OFPP, in conjunction with the FAR Council, recently implemented provisions of the acts related to changing the timing for obtaining contractor comments. Previously, contractors were allowed a minimum of 30 days to provide comments, rebuttals, or additional information. On May 30, 2014, the final rule changing the contractor comment process was issued with an effective date of July 1, 2014. The rule provides that contractors will have no more than 14 days from the date of notification of the availability of the evaluation to provide comments, rebuttals, or other information before the evaluation is posted to PPIRS, where it is available government-wide for source selection purposes for 3 years after the contract performance completion date. If a contractor does not respond by this deadline, its comments, as well as any agency review of the comments, will be added to the evaluation after it has moved into PPIRS. Although agencies have generally improved their level of compliance with the reporting requirements over the last year, that rate varies greatly by agency and most have not met the targets set by OFPP. As shown in table 2, all of the top 10 agencies, based on the number of contracts or orders with an evaluation due in PPIRS, showed improvement in reporting compliance from 2013 to 2014, but the compliance rate varied from 13 percent to 83 percent as of April 2014. According to OFPP’s annual reporting performance targets, as shown in table 1, all CFO Act agencies should have been at least 65 percent compliant by the end of fiscal year 2013, but only two of the top 10 agencies were above 65 percent compliance as of April 2014. According to an OFPP official, some agencies placed greater emphasis on improving timely reporting and have increased their management oversight, issued guidance, diligently monitored compliance, and frequently conducted internal meetings with management and accountable staff about reporting activity and responsibilities. However, the official noted that some agencies report workforce shortages, work priorities, and time constraints as hindering better compliance. Some contracting officers also reported that they had difficulty obtaining timely feedback from other parts of the acquisition workforce so that they could complete the evaluation, due to shifting workloads, retirements, and relocations. The OFPP official stated the office plans to continue its efforts to improve past performance information and strengthen reporting compliance by taking the following actions:

collaborating with agency senior procurement executives to increase management oversight and leadership;

working with the FAR Council on the development of additional regulatory guidance, as necessary, to standardize reporting practices and improve agency consideration of past performance information;

directing the Federal Acquisition Institute to develop useful training aids to ensure agencies know how to consider performance information prior to contract award and rate a contractor's performance during the post-award process;

overseeing the General Services Administration’s Integrated Award Environment on system enhancements to ensure agencies have a practical reporting tool with useful performance metrics to manage and monitor not only reporting compliance but also quality reporting of performance information; and

conducting outreach with internal and external stakeholders to garner their thoughts on ways to improve past performance information reporting.

We are not making recommendations in this report.
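Compliance, as measured here, is essentially the share of contracts or orders with a required evaluation that actually have one posted. A minimal sketch, with invented counts, of checking agencies against the fiscal year 2013 floor of 65 percent:

```python
def compliance_rate(evaluations_completed: int, evaluations_due: int) -> float:
    """Share of required evaluations actually posted to PPIRS."""
    if evaluations_due == 0:
        return 1.0
    return evaluations_completed / evaluations_due

# Hypothetical agency figures checked against OFPP's fiscal year 2013
# floor of 65 percent compliance (counts are invented for illustration).
targets_met = {
    agency: compliance_rate(done, due) >= 0.65
    for agency, (done, due) in {
        "Agency A": (830, 1000),   # 83 percent -> meets target
        "Agency B": (130, 1000),   # 13 percent -> misses target
    }.items()
}
print(targets_met)   # {'Agency A': True, 'Agency B': False}
```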
We requested comments on a draft of this report from OFPP and DOD. On July 24, 2014, an OFPP Procurement Policy Analyst provided comments by e-mail. OFPP concurred with the findings of the draft report and provided technical comments, which we incorporated as appropriate. DOD also provided technical comments by e-mail, which we incorporated as appropriate. We are sending this report to appropriate congressional committees, the Director of the Office of Management and Budget, the Secretary of Defense, and other interested parties. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. LaTonya Miller, Assistant Director; Julia Kennon; Robert Swierczek; Bradley Terry; and Alyssa Weir made key contributions to this report.

Over the past five years, OFPP has issued additional policy guidance to strengthen agency use of past performance information and improve agency reporting compliance and documentation.

Making Better Use of Contractor Performance Information (July 10, 2014): Enhances agencies’ use of performance information when making source selection decisions on high-risk programs, major acquisitions, and other complex contract actions by directing agencies to conduct additional research and outreach to make more informed decisions, including obtaining relevant and recent information about a contractor’s performance beyond what is in the Past Performance Information Retrieval System (PPIRS). http://www.whitehouse.gov/sites/default/files/omb/procurement/memo/making-better-use-of-contractor-performance-information.pdf.

Improving the Collection and Use of Information about Contractor Performance and Integrity (March 6, 2013): Requested agencies to (1) establish a baseline for reporting compliance, (2) set aggressive performance targets that agencies can use to monitor and measure reporting compliance, and (3) ensure the workforce is trained to properly report and use this information to improve the collection and use of performance and integrity information. http://www.whitehouse.gov/sites/default/files/omb/procurement/memo/improving-the-collection-and-use-of-information-about-contractor-performance-and-integrity.pdf.

Improving Contractor Past Performance Assessments: Summary of the Office of Federal Procurement Policy's Review, and Strategies for Improvement (January 21, 2011): Included OFPP’s initial assessment of agencies’ reporting of contractor performance information and additional steps and strategies for improving the collection of past performance information. http://www.whitehouse.gov/sites/default/files/omb/procurement/contract_perf/PastPerformanceMemo-21-Jan-2011.pdf.

Improving the Use of Contractor Performance Information (July 29, 2009): Described new FAR requirements to strengthen the use of contractor performance information, included agency management responsibilities to support robust implementation, and established OFPP’s review process for evaluating agencies’ reporting of contractor performance information. http://www.whitehouse.gov/sites/default/files/omb/assets/procurement/improving_use_of_contractor_perf_info.pdf.
The FAR Council, chaired by the Administrator of OFPP, has recently amended the FAR to strengthen the collection of contractor past performance information, including:

FAR Case 2012-028, Contractor Comment Period, Past Performance Evaluations: Implements section 806(c) of the National Defense Authorization Act (NDAA) for Fiscal Year 2012, Pub. L. No. 112-81 (2011), and section 853(c) of the NDAA for Fiscal Year 2013, Pub. L. No. 112-239. The final rule provides that contractors will have no more than 14 days from the date of notification of the availability of the evaluation to provide comments, rebuttals, or other information before the evaluation is posted to PPIRS. The final rule was published in the Federal Register on May 30, 2014, at 79 Fed. Reg. 31,197, and was effective July 1, 2014.

FAR Case 2012-009, Documenting Contractor Performance: Implements parts of section 806 of the NDAA for Fiscal Year 2012, Pub. L. No. 112-81, and establishes standards of completeness for past performance evaluations, strengthens assignment of responsibility and management accountability for submitting assessments, and requires that past performance submissions include incentive/award fee information, where appropriate. This rule also incorporates certain requirements from section 853 of the NDAA for Fiscal Year 2013, Pub. L. No. 112-239. This final rule was published in the Federal Register on August 1, 2013, at 78 Fed. Reg. 46,783, and was effective on September 3, 2013.

FAR Case 2008-016, Termination for Default Reporting: Establishes procedures for contracting officers to enter contractor information, such as terminations for cause or default and defective cost or pricing data, into PPIRS and the Federal Awardee Performance and Integrity Information System module within PPIRS. This final rule was published in the Federal Register on September 29, 2010, at 75 Fed. Reg. 60,258, and was effective October 29, 2010.

FAR Case 2008-027, Federal Awardee Performance and Integrity Information System: Amends the FAR to implement the Federal Awardee Performance and Integrity Information System. The system is designed to significantly enhance the Government’s ability to evaluate the business ethics and quality of prospective contractors competing for Federal contracts and to protect taxpayers from doing business with contractors that are not responsible sources. This final rule was published in the Federal Register on March 23, 2010, at 75 Fed. Reg. 14,059, and was effective April 22, 2010.
Having complete, timely, and accurate information on contractor performance allows officials responsible for awarding new federal contracts to make informed decisions. Agencies generally are required to document contractor performance on contracts or orders exceeding certain dollar thresholds. Section 853 of the National Defense Authorization Act for Fiscal Year 2013 required the development of a strategy to ensure that timely, accurate, and complete information on contractor performance is included in past performance databases. The act also required a change to the timeframes allowed for contractors to provide comments, rebuttals, or additional information pertaining to past performance information. The act required GAO to report on the actions taken in response to these requirements. For this report, GAO identified (1) the OFPP strategy to improve the number and quality of contractor past performance evaluations and implement provisions of the act, and (2) changes in the compliance rates for required performance evaluations from April 2013 to April 2014 for selected agencies. GAO reviewed OFPP memos and reports, government-wide guidance, and recent changes to the Federal Acquisition Regulation and interviewed an OFPP official. GAO also reviewed past performance reporting compliance data for 2013 and 2014. GAO is not making any recommendations. OFPP concurred with GAO's findings. The Office of Federal Procurement Policy's (OFPP) strategy to improve the reporting of past performance information relies on increased oversight and enhancements to guidance and acquisition regulations. Since 2009, OFPP has taken several actions to increase the number and quality of past performance submissions available to source selection officials, including: emphasizing reporting requirements through memos to agency officials; assessing and reporting on the level of compliance and quality of evaluations; directing the development of a compliance tracking tool; setting performance targets for certain agencies; directing the consolidation of systems for entering past performance information; and developing government-wide past performance guidance. To implement provisions of the act, OFPP and the Federal Acquisition Regulatory Council (FAR Council) worked to enhance requirements for assigning responsibility and accountability; implement standards for complete evaluations; and ensure submissions are consistent with award fee evaluations. Recently, OFPP and the FAR Council revised the timelines for the contractor comment process in accordance with the 2013 statutory requirement. Although agencies generally have improved their level of compliance with past performance reporting requirements, the rate of compliance varies widely by agency and most have not met OFPP targets. For the top 10 agencies, based on the number of contracts requiring an evaluation, the compliance rate ranged from 13 to 83 percent as of April 2014. According to an OFPP official, some agencies placed greater emphasis on documenting contractor performance, but workforce shortages and work priorities may hinder better compliance. The official said that OFPP plans to continue its oversight and provide additional training and guidance.
To help manage its multi-billion dollar acquisition investments, DHS has established policies and processes for acquisition management, test and evaluation, and resource allocation. The department uses these policies and processes to deliver systems that are intended to close critical capability gaps, helping enable DHS to execute its missions and achieve its goals. DHS policies and processes for managing its major acquisition programs are primarily set forth in Acquisition Management Directive (MD) 102-01 and DHS Instruction Manual 102-01-001, Acquisition Management Instruction/Guidebook. DHS issued the initial version of this directive in November 2008 in an effort to establish an acquisition management system that effectively provides required capability to operators in support of the department’s missions. DHS’s Under Secretary for Management (USM) is currently designated as the department’s Chief Acquisition Officer and, as such, is responsible for managing the implementation of the department’s acquisition policies. DHS’s USM serves as the decision authority for the department’s largest acquisition programs: those with life-cycle cost estimates (LCCE) of $1 billion or greater. Component Acquisition Executives—the most senior acquisition management officials within each of DHS’s component agencies—may be delegated decision authority for programs with cost estimates of at least $300 million but less than $1 billion. Table 1 identifies how DHS has categorized the 26 major acquisition programs we review in this report, and table 7 in appendix II specifically identifies the programs within each level. DHS acquisition policy establishes that a major acquisition program’s decision authority shall review the program at a series of five predetermined Acquisition Decision Events (ADEs) to assess whether the major program is ready to proceed through the acquisition life-cycle phases. Depending on the program, these ADEs can occur within months of each other or be spread over several years. Figure 1 depicts the acquisition life cycle established in DHS acquisition policy. An important aspect of an ADE is the decision authority’s review and approval of key acquisition documents. See table 2 for a description of the types of key acquisition documents requiring department-level approval before a program moves to the next acquisition phase. DHS acquisition policy establishes that the Acquisition Program Baseline (APB) is the agreement between program, component, and department-level officials establishing how systems will perform, when they will be delivered, and what they will cost. Specifically, the APB establishes a program’s schedule, costs, and key performance parameters (KPPs). DHS defines KPPs as a program’s most important and non-negotiable requirements that a system must meet to fulfill its fundamental purpose. For example, a KPP for an aircraft may be airspeed, and a KPP for a surveillance system may be detection range. The APB schedule, costs, and KPPs are defined in terms of an objective value and a minimum threshold value. According to DHS policy, if a program fails to meet any schedule, cost, or performance threshold approved in the APB, it is considered to be in breach. Programs in breach are required to notify their acquisition decision authority and develop a remediation plan that outlines a time frame for the program to return to its APB parameters, re-baseline—that is, establish new schedule, cost, or performance goals—or have a DHS-led program review that results in recommendations for a revised baseline.
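The breach rule described above is mechanical: a program is in breach the moment any one APB threshold is missed. The following sketch illustrates that logic; the parameter names and values are invented for illustration and do not reflect DHS's actual data model.

```python
from dataclasses import dataclass

# Minimal sketch of the APB breach rule described above: a program is in
# breach if any schedule, cost, or performance threshold is not met.
@dataclass
class APBParameter:
    name: str
    threshold: float          # minimum acceptable (or maximum, for cost)
    actual: float
    higher_is_better: bool    # True for KPPs, False for cost or schedule

    def breached(self) -> bool:
        if self.higher_is_better:
            return self.actual < self.threshold
        return self.actual > self.threshold

def in_breach(parameters: list[APBParameter]) -> bool:
    """A single failed threshold puts the whole program in breach."""
    return any(p.breached() for p in parameters)

# Invented example program
program = [
    APBParameter("acquisition cost ($M)", threshold=1200.0, actual=1250.0,
                 higher_is_better=False),                  # cost overrun
    APBParameter("detection range (km)", threshold=10.0, actual=12.0,
                 higher_is_better=True),                   # KPP met
]
print(in_breach(program))   # True -> remediation plan, re-baseline, or review
```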
In addition to the acquisition decision authority, other bodies and senior officials support DHS’s acquisition management function: The Acquisition Review Board (ARB) reviews major acquisition programs for proper management, oversight, accountability, and alignment with the department’s strategic functions at ADEs and other meetings as needed. The ARB is chaired by the acquisition decision authority or a designee and consists of individuals who manage DHS’s mission objectives, resources, and contracts. The Office of Program Accountability and Risk Management (PARM) is responsible for DHS’s overall acquisition governance process, supports the ARB, and reports directly to the USM. PARM develops and updates program management policies and practices, reviews major programs, provides guidance for workforce planning activities, provides support to program managers, and collects program performance data. Component agencies, such as U.S. Customs and Border Protection (CBP), the Transportation Security Administration (TSA), and the U.S. Coast Guard (USCG), sponsor specific acquisition programs. The 26 programs we review in this report are sponsored by eight component agencies. Component Acquisition Executives within the components are responsible for overseeing the execution of their respective portfolios. Program management offices, also within the components, are responsible for planning and executing DHS’s individual programs. They are expected to do so within the cost, schedule, and performance parameters established in their APBs. If they cannot do so, programs are considered to be in breach and must take specific steps, as noted above. Figure 2 depicts the relationship between acquisition managers at the department, component, and program levels.

In May 2009, DHS established policies and processes for testing the capabilities delivered by the department’s major acquisition programs. The primary purpose of test and evaluation is to provide timely, accurate information to managers, decision makers, and other stakeholders to reduce programmatic, financial, schedule, and performance risk. We provide an overview of each of the 26 programs’ test activities in the individual program assessments, presented in appendix I. DHS testing policy assigns specific responsibilities to particular individuals and entities throughout the department: Program managers have overall responsibility for planning and executing their programs’ testing strategies. They are responsible for scheduling and funding test activities and delivering systems for testing. They are also responsible for controlling developmental testing. Programs use developmental testing to assist in the development and maturation of products, product elements, or manufacturing or support processes. Developmental testing includes engineering-type tests used to verify that design risks are minimized, substantiate achievement of contract technical performance, and certify readiness for operational testing. Operational test agents (OTAs) are responsible for planning, conducting, and reporting on operational testing, which is intended to identify whether a system can meet its KPPs and provide the acquisition decision authority with an evaluation of the operational effectiveness and suitability of a system in a realistic environment. Operational effectiveness refers to the overall ability of a system to provide desired capability when used by representative personnel.
Operational suitability refers to the degree to which a system can be placed in field use and sustained satisfactorily. The OTAs may be organic to the component, another government agency, or a contractor, but they must be independent of the developer in order to present credible, objective, and unbiased conclusions. For example, the U.S. Navy Commander, Operational Test and Evaluation Force is the OTA for the USCG National Security Cutter (NSC) program. The Director, Office of Test and Evaluation (DOT&E) is responsible for approving major acquisition programs’ OTAs, operational test plans, and Test and Evaluation Master Plans (TEMP). A program’s TEMP must describe the developmental and operational testing needed to determine technical performance and operational effectiveness and suitability. As appropriate, DOT&E is also responsible for participating in operational test readiness reviews, observing operational tests, reviewing OTAs’ reports, and assessing the reports. Prior to a program’s ADE 3, DOT&E provides the program’s acquisition decision authority with a letter of assessment that includes an appraisal of the program’s operational test, a concurrence or non-concurrence with the OTA’s evaluation, and any further independent analysis. As an acquisition program proceeds through its life cycle, the testing emphasis moves gradually from developmental testing to operational testing. See figure 3.

DHS has established a planning, programming, budgeting, and execution (PPBE) process to allocate resources to acquisition programs and other entities throughout the department. DHS’s PPBE process produces the multi-year funding plans presented in the Future Years Homeland Security Program (FYHSP), a database that contains, among other things, 5-year funding plans for DHS’s major acquisition programs. DHS guidance states that the 5-year plans in the FYHSP should allow the department to achieve its goals more efficiently than an incremental approach based on 1-year plans. DHS guidance also states that the FYHSP articulates how the department will achieve its strategic goals within fiscal constraints. According to DHS guidance, at the outset of the annual PPBE process, the department’s Office of Policy and Chief Financial Officer (CFO) should provide planning and fiscal guidance, respectively, to the department’s component agencies. In accordance with this guidance, the components should submit 5-year funding plans to the CFO; these plans are subsequently reviewed by DHS’s senior leaders, including the DHS Secretary and Deputy Secretary. DHS’s senior leaders are expected to modify the plans in accordance with their priorities and assessments, and they document their decisions in formal resource allocation decision memorandums. DHS submits the revised funding plans to the Office of Management and Budget, which uses them to inform the President’s annual budget request—a document sent to Congress requesting new budget authority for federal programs, among other things. In some cases, the funding appropriated to certain accounts in a given fiscal year can be carried over to subsequent fiscal years. Figure 4 depicts DHS’s annual PPBE process. Federal law requires DHS to submit an annual FYHSP report to Congress at or about the same time as the President’s budget request. This report presents the 5-year funding plans in the FYHSP database at that time. Within DHS’s Office of the CFO, the Office of Program Analysis and Evaluation is responsible for establishing policies for the PPBE process and overseeing the development of the FYHSP.
In this role, the Office of Program Analysis and Evaluation reviews the components’ 5-year funding plans, advises DHS’s senior leaders on resource allocation issues, maintains the FYHSP database, and submits the annual FYHSP report to Congress.

For the first time since we began our annual assessments of DHS’s major acquisition programs, all of the programs included in our review had a department-approved baseline. This allowed us to analyze schedule and cost changes across the portfolio of the 26 programs we assessed, which provides a foundation for measuring DHS’s acquisition performance going forward. From January 2016 to January 2017, 17 of the 26 programs we assessed were on track to meet their schedule and cost goals, including 2 that experienced either a schedule acceleration or cost decrease. However, 7 of these 17 programs established their goals for the first time since our last review, and 9 others had previously revised their goals. The remaining 9 of the 26 programs experienced schedule slips, including 4 that also experienced cost growth. The change in schedule for a key program acquisition milestone in 2016 ranged from a 21-month acceleration to a 75-month delay, which resulted in an average increase of 6 months across the portfolio. Additionally, although 1 program had a drop in costs, overall the total acquisition cost across the portfolio increased by $988 million—or 1.6 percent—and the total LCCE across the portfolio increased by nearly $1.6 billion—or 0.8 percent. The overall schedule and cost changes were largely driven by increases experienced by a few programs. For example, the full operational capability (FOC) date for TSA’s Technology Infrastructure Modernization (TIM) program slipped by more than 6 years when the program revised its acquisition strategy—significantly delaying the delivery of some services to end users. Table 3 summarizes our findings and highlights those programs with schedule or cost increases. We present more detailed information after the table and in the individual assessments in appendix I.

From January 2016 to January 2017, 17 programs were on track to meet their schedule and cost goals. Eight of the 17 programs were on track against their initial schedule and cost goals; that is, the schedules and cost estimates in the baseline DHS leadership initially approved after the department’s acquisition policy went into effect in November 2008. The other 9 programs had re-baselined prior to January 2016 and were on track against revised schedules and cost estimates that reflected past schedule slips, cost growth, or both. However, most of the programs on track in 2016 identified risks that may lead to schedule slips or cost growth in the future. Of the 8 programs on track against the schedules and cost goals in their initial baselines, only 1 program received DHS approval of its initial baseline prior to December 2015. Six of the remaining programs had operated for several years without a DHS-approved baseline, which, in addition to decreasing oversight, increased the risk of end users not getting required capabilities on time or at cost. For example, DHS leadership approved the initial APB for CBP’s Non-Intrusive Inspection (NII) Systems Program in January 2016, which was more than 13 years after the program deployed initial capabilities to end users. This means that, even though capabilities were delivered to end users, the program had not followed the department’s November 2008 acquisition policy.
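As a back-of-the-envelope check, the portfolio-wide percentages reported above imply the approximate size of the portfolio baselines. The sketch below reproduces that arithmetic; the implied totals are inferences from the figures above, not published DHS numbers.

```python
# Back-of-the-envelope check of the portfolio-wide figures cited above.
# The implied baselines are derived from the reported growth amounts and
# percentages; they are approximations, not published DHS numbers.
acq_cost_growth = 988e6          # reported acquisition cost increase
acq_growth_pct = 0.016           # reported 1.6 percent
lcce_growth = 1.6e9              # reported LCCE increase
lcce_growth_pct = 0.008          # reported 0.8 percent

implied_acq_base = acq_cost_growth / acq_growth_pct     # ~ $62 billion
implied_lcce_base = lcce_growth / lcce_growth_pct       # ~ $200 billion
print(f"implied portfolio acquisition cost: ${implied_acq_base/1e9:.0f}B")
print(f"implied portfolio LCCE:             ${implied_lcce_base/1e9:.0f}B")
```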
Since the NII Systems Program's initial APB was approved, the program's acquisition cost estimate decreased by $190 million and its LCCE decreased by $315 million. Program officials attributed these decreases to reduced NII system purchase and maintenance costs and to the replacement of some NII systems that were costly to maintain. DHS leadership also recently approved the initial APB for a newer program—the National Protection and Programs Directorate's (NPPD) Homeland Advanced Recognition Technology (HART)—in April 2016 when it entered the Obtain phase. Only 1 program that we found on track against its initial baseline in 2015—the Science and Technology Directorate's (S&T) National Bio and Agro-Defense Facility (NBAF)—remained on track against that baseline in 2016.

For context, because many baselines had been approved only recently, we also assessed the extent to which programs that were on track in 2016 had previously experienced problems. We found that 9 of these programs had previously experienced schedule slips, cost growth, or both. Specifically, all 9 of these programs had milestones that slipped an average of 4.5 years, for a variety of reasons. In addition, 6 of these 9 programs also experienced cost growth prior to 2016; in total, acquisition costs increased by $5 billion and LCCEs increased by nearly $17 billion. Examples of programs with no changes during 2016, but that had experienced past schedule slips and cost growth, follow.

CBP's Integrated Fixed Towers (IFT) program's FOC date previously slipped 5 years, which officials attributed to delays in awarding contracts and to funding shortfalls.

From September 2010 to September 2014, NPPD's Next Generation Networks Priority Services (NGN-PS) program's acquisition cost increased by $447 million and its LCCE increased by $386 million when officials accounted for capabilities delivered under the voice phase's second increment. From September 2014 to August 2015, the program's acquisition costs subsequently decreased by $153 million based on a refinement of the estimate, but the LCCE increased by an additional $100 million when officials included all sustainment costs funded by a separate program—NPPD's Priority Telecommunications Services program, which assumes responsibility for sustaining NGN-PS capabilities once they become operational—at the direction of DHS headquarters.

On the other hand, 2 USCG programs—the Medium Range Surveillance (MRS) Aircraft and Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR)—that experienced past problems reported positive changes in 2016. In August 2016, DHS approved a revised APB for the MRS program that establishes initial schedule and cost goals for the restructured program. Specifically, the department capped the number of HC-144A aircraft at the 18 already procured and accounted for the transfer of 14 C-27J aircraft from the U.S. Air Force, as directed by Congress in fiscal year 2014. Prior to this restructuring, the MRS program's FOC date slipped from September 2020 to September 2025 when the USCG reduced the number of HC-144A aircraft it planned to procure annually in response to funding constraints. In addition, the program's LCCE increased by $16.4 billion when the USCG accounted for costs over this additional 5-year period, among other things.
For C4ISR, USCG officials stated they now plan to complete the transition away from using contractor-owned proprietary software by the end of calendar year 2017, which is 21 months earlier than the date in the program's revised APB. However, even if completed by the new date, this transition would still occur more than 5 years later than the C4ISR program initially planned.

Officials from most of the 17 programs on track in 2016 identified risks that could cause schedule slips, cost growth, or both in the future. These risks include testing issues, funding gaps, and technical challenges, among other factors. For example, NPPD's Continuous Diagnostics & Mitigation (CDM) program is in the process of re-baselining to address implementation challenges discovered in 2016, which officials anticipate will increase the program's cost and lead to potential schedule slips for future capabilities. In addition, the USCG's Long Range Surveillance Aircraft program is currently on track to meet its schedule and cost goals, but it experienced significant cost increases and schedule slips from 2009 to 2012, which USCG officials primarily attributed to the decision to procure additional HC-130J aircraft. Officials have said that the USCG would need to acquire one to two HC-130J aircraft per year in order to meet the program's FOC date of March 2027. If the remaining aircraft are not delivered at this rate, the program's schedule could slip further. USCG officials said the delivery rate is dependent on the amount of funding the program receives, as the USCG has historically received HC-130Js without including them in its budget requests.

From January 2016 to January 2017, 9 of the 26 programs we assessed experienced schedule slips, 4 of which also experienced cost growth. The extent of these changes constituted breaches of schedule goals, cost goals, or both for 6 of the 9 programs. For these 9 programs, the average schedule slip of 1.6 years was largely driven by changes in TSA's TIM program. As for cost growth, increases of $1.2 billion and $1.9 billion for acquisition and life-cycle costs, respectively, were also essentially driven by one program, TSA's Electronic Baggage Screening Program (EBSP). More details follow.

During 2016, 9 of the 26 programs in our review had at least one major acquisition milestone that slipped for various reasons. Across these programs, the average schedule slip was 1.6 years, but that average was significantly driven by a more than 6-year delay in TSA's TIM program, which revised its acquisition strategy. Figure 5 identifies the 9 programs that experienced schedule slips and the extent to which their major milestones slipped in 2016, as well as—for additional context—in prior years. While there are various reasons for the schedule delays, the effect is that end users may not have gotten needed capabilities when they originally anticipated. We identified several reasons why these key milestones slipped, including the following:

New strategies or requirements: For example, TSA's TIM program re-baselined in September 2016 to reflect a new acquisition strategy that is intended to address past program execution challenges that led to the program breaching its initial APB in 2014. TIM's new strategy also includes integration with the Transportation Vetting System and support for additional programs, such as TSA PreCheck.
Additionally, TSA’s Passenger Screening Program (PSP) declared an APB schedule breach in January 2016 because of delays in incorporating new cybersecurity requirements in the Credential Authentication Technology system prior to completing operational testing. Technical challenges: For example, the USCG’s H-65 conversion/sustainment program declared a schedule breach in November 2016 after experiencing significant delays in developing a portion of the avionics upgrades for the H-65, which officials primarily attributed to an underestimation of the technical effort necessary to meet requirements. As a result, the avionics initial production decision has been delayed until September 2018, nearly 5 years later than initially planned. We elaborate on the reasons for all 9 programs’ schedule slips in the individual assessments in appendix I. During 2016, 4 of the 26 programs in our review experienced growth in both their acquisition cost estimates and LCCEs. In total, acquisition cost estimates increased by a total of $1.2 billion and LCCEs increased by a total of $1.9 billion, which reflects an approximately 8 percent increase in both estimates when calculated across these 4 programs. The cost growth is almost entirely driven by increases to TSA’s EBSP cost thresholds to account for risk in its new estimate that reflects anticipated funding shortfalls and planning for program succession. Table 4 identifies the 4 programs with cost growth and the extent to which their estimates increased in 2016. We identified a number of reasons why cost estimates increased in 2016, including the following: Revised acquisition strategy: For example, DHS leadership approved new APBs for TSA’s EBSP and TIM programs in May 2016 and September 2016, respectively, which increased the programs’ cost thresholds over their previous estimates to better account for potential programmatic risks. EBSP updated its cost estimate in July 2015 in response to funding constraints and plans for a new acquisition program to succeed EBSP in fiscal year 2028. In addition, the TIM program’s cost estimates changed from its September 2015 estimate when it adopted its new acquisition strategy, as noted above. Specifically, TIM’s acquisition cost estimate increased and LCCE decreased. However, the establishment of new APB cost thresholds in September 2016 that accounted for implementation risks associated with the program’s new strategy resulted in an overall increase in both estimates. More realistic cost estimates: For example, officials from CBP’s Automated Commercial Environment (ACE) program said the program’s initial cost estimate underestimated the number and size of the required development teams and included expected savings from moving to a cloud environment. In addition, officials from the Immigration and Customs Enforcement’s (ICE) TECS Modernization program attributed their program’s acquisition increase to including actuals for a contract awarded in 2016. We elaborate on the reasons for all 4 programs’ cost growth in the individual assessments in appendix I. Some DHS programs continue to face funding challenges, which increases the likelihood that they will cost more and take longer to deliver capabilities to end users than expected. We found that 18 of the 26 programs we assessed in this review are projected to experience life- cycle funding gaps exceeding 10 percent through fiscal year 2021. 
This is 8 more programs than we found in our prior review, even though DHS has continued to take steps to improve the affordability of its major acquisition programs. In March 2016, we found that 10 of the 25 programs we assessed had projected funding gaps over the comparable 6-year period. Similar to last year, we compared the programs' funding plans—documented in the FYHSP report to Congress—to the programs' yearly LCCEs in order to identify any projected funding gaps for fiscal year 2016 through fiscal year 2021. We also identified the funding from previous years that programs brought into fiscal year 2016—known as carryover funding—to determine the extent to which that carryover could offset any funding gaps (a simplified sketch of this comparison appears below). Based on this analysis, we found various reasons for programs' projected funding gaps, such as unfunded activities, new requirements, or portions of programs' annual costs being funded by organizations outside the programs.

In addition, the USCG's cost estimates include operations and maintenance (O&M) costs—which usually represent a majority of program costs—but its funding plans do not. We first identified this FYHSP reporting inconsistency in April 2015 and recommended that DHS account for the O&M funding the USCG plans to allocate to each of its acquisition programs in future reports. DHS concurred with the recommendation, but the USCG has yet to take action. USCG officials said they cannot resolve this issue until the USCG updates its financial management system and transitions to DHS's common appropriations account structure, which they anticipate will occur in fiscal year 2020. Similarly, DHS officials told us that the next FYHSP report, which will be the first to include CBP's Multi-Role Enforcement Aircraft (MEA) and Medium Lift Helicopter (UH-60) as distinct programs, will also not include funding allocated to cover these programs' O&M costs because these costs are funded through a separate, central account for all of CBP's air and marine assets. As a result of these reporting issues, any calculated projected funding gap would likely be overstated for the 9 USCG and CBP programs we assessed.

Aside from these specific O&M issues, program officials identified strategies to mitigate projected funding gaps, such as the following:

Using alternative funding sources: For example, TSA's TIM program anticipates receiving fees from vetting programs that will cover the program's anticipated funding shortfall;

Program tradeoffs: For example, officials from three CBP programs noted that they planned to address their projected funding gaps with actions such as performing only minimum maintenance, prioritizing upgrades against operational needs, and extending service life; and

Increased funding allocation: For example, NPPD identified that DHS plans to program additional funding to the HART program from fiscal year 2017 through 2021.

However, officials from 7 programs said that projected funding gaps could cause future program execution challenges, such as schedule slips or cost growth. For example, officials from S&T's NBAF program said that although they were working with the component to mitigate a $38 million funding gap, affordability challenges could cause delays in the operational stand-up of the facility. We elaborate on programs' projected funding gaps in the individual program assessments in appendix I.
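To make the gap calculation concrete, the following is a minimal sketch, in Python, of the comparison described above: a program's planned funding (plus any carryover) is compared with the sum of its yearly LCCEs, and the program is flagged when the projected gap exceeds 10 percent. The function name, program data, and dollar figures are hypothetical illustrations and are not drawn from GAO's or DHS's actual analysis tools.

```python
# Minimal sketch of the funding-gap comparison described above.
# All figures are hypothetical; amounts are in millions of dollars.

def funding_gap(funding_plan, lcce, carryover=0.0, threshold=0.10):
    """Compare planned funding (plus carryover) to estimated costs.

    funding_plan and lcce are lists of annual amounts covering the same
    fiscal years (here, FY2016-FY2021). Returns the projected gap as a
    fraction of total estimated cost and whether it exceeds the threshold.
    """
    total_funding = sum(funding_plan) + carryover
    total_cost = sum(lcce)
    gap = max(total_cost - total_funding, 0.0)
    gap_fraction = gap / total_cost if total_cost else 0.0
    return gap_fraction, gap_fraction > threshold

# Hypothetical program: planned funding falls short of estimated costs.
plan = [120, 125, 130, 130, 135, 140]   # FYHSP funding plan, FY16-FY21
cost = [150, 155, 160, 160, 165, 170]   # annual LCCE, FY16-FY21
frac, flagged = funding_gap(plan, cost, carryover=40)
print(f"Projected gap: {frac:.1%}; exceeds 10 percent threshold: {flagged}")
```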
DHS officials recognize the need to address program affordability and, since our last review, have continued to take actions through the department's acquisition management and annual budget development processes to do so. For example, in March 2016, we found that DHS had initiated a process to assess and address affordability trade-offs based on a June 2014 requirement that components certify programs' affordability prior to ADEs. We also made several recommendations at that time to enhance DHS leadership's efforts to improve the affordability of the department's major acquisition portfolio. For example, we recommended that components ensure their affordability certifications include details such as cost estimates, funding streams, and the monetary value of proposed tradeoffs. We also recommended that DHS review the affordability of 11 programs that had not had an ADE since DHS's new funding certification requirements went into effect, and consider holding ARBs to discuss the affordability of these programs, as necessary. DHS concurred with both recommendations and now requires components to provide explicit details on affordability prior to ARBs, as necessary, as well as to submit more detailed information as a part of the annual budget process. For example, to develop the President's fiscal year 2018 budget request, DHS required major acquisition programs to submit detailed data on program affordability, such as identifying all funding sources, a comparison to the program's most recent cost estimate, and the impact of any funding gaps on program schedule, cost, or performance. As a result, officials said that they were able to address any potential funding gaps for major acquisition programs through this process and determined that no programs required an ARB specifically to discuss affordability in response to our March 2016 recommendation.

In the near term, DHS officials said that they plan to publish programs' annual acquisition cost estimates and any projected acquisition funding gap in the FYHSP report for fiscal years 2018-2022, which had not yet been submitted to Congress at the time of our review. They do not, however, currently plan to present annual LCCE gaps, as we previously recommended, citing a lack of reliable information. While presenting acquisition cost estimates and any projected funding gaps is important, we continue to believe that DHS should also reflect annual LCCEs and any overall funding gaps—including O&M data, not just acquisition—in its future FYHSP reports. Adding this information would provide Congress with valuable insight into DHS's total funding needs and clarify the potential funding gaps for major acquisition programs. DHS officials acknowledged the importance of communicating overall program funding gaps in the FYHSP, including O&M data. They said that DHS's efforts to implement a common appropriations account structure across the department should help them present this information in the future. We continue to monitor DHS's actions to address program affordability and, at the request of Congress, have initiated a review to assess the extent to which DHS has accounted for programs' O&M costs and funding.

Fourteen of the 26 programs we reviewed deployed capabilities prior to meeting all of their department-approved KPPs—the most important requirements that a system must meet to fulfill its purpose. As a result, DHS faces increased risk of fielding capabilities that do not work as intended.
In some cases, it may be appropriate for programs to deploy capabilities prior to meeting their KPPs, such as systems that develop and test their capabilities incrementally. However, DHS's acquisition policy requires programs to conduct operational testing, which is intended to demonstrate program performance, prior to receiving approval to pursue full-rate production or to transition into sustainment. Program officials identified multiple reasons that KPPs have not been met; for example, some programs had not yet tested the KPPs, and some KPPs were poorly defined. We found that DHS's acquisition policy requires programs to establish an initial baseline—including defined KPPs—prior to gaining full knowledge about the program's technical requirements. This timing is counter to acquisition best practices and may cause programs to experience cost growth, schedule slips, and inconsistent performance if requirements are not firmly established at the time the baseline is set.

Fourteen of the 26 programs we reviewed have deployed capabilities prior to meeting all of their department-approved KPPs. All but 3 of these 14 programs have conducted some type of operational testing. Programs evaluate KPPs during operational testing, which is intended to help DHS determine how well a system will provide the desired capability before the system is fully deployed. DHS's acquisition policy requires programs to conduct operational testing prior to receiving ADE 3 approval—the point where programs are authorized to pursue full-rate production or to transition into sustainment—but the policy also allows programs to initiate limited deployments of capabilities to support operational testing under certain circumstances. In some cases, programs deploy and test capabilities incrementally—an approach commonly used by information technology (IT) programs. For example, NPPD's CDM program plans to provide sensors and tools for strengthening the cybersecurity of the federal government's computer networks through a series of phases, each of which has its own KPPs that will be deployed and tested separately.

Of the 26 programs we assessed, 9 have met all of their KPPs and 3 are still relatively early in the acquisition life cycle and have not yet deployed or operationally tested any capabilities. Table 5 identifies all 26 programs we assessed, whether they have deployed or operationally assessed or tested capabilities, and their progress in meeting department-approved KPPs as of January 2017.

DHS officials identified several reasons why programs have deployed capabilities but not met all of their department-approved KPPs. For example, programs had not yet tested the KPPs or failed to meet the KPPs when they were tested. The reasons programs identified for unmet KPPs are presented in figure 6, along with the number of programs that identified each. Examples for each category of reasons are presented below.

The program has not yet tested the KPP. For example, the USCG's C4ISR program no longer plans to independently conduct operational testing against its KPPs and will instead test C4ISR systems in conjunction with the USCG aircraft and vessels on which they are installed. However, the C4ISR system's KPPs were not specifically assessed during prior HC-144, Fast Response Cutter (FRC), and NSC tests. Future testing will focus only on the ability of the C4ISR system to meet the NSC's KPPs during the NSC's follow-on operational testing in fiscal years 2017 and 2018.
This follow-on testing, however, will test only one of the C4ISR system's six KPPs.

The program failed to meet KPPs during testing, or testing was not adequate to determine KPP status. For example, the U.S. Citizenship and Immigration Services' (USCIS) Transformation program conducted an operational assessment on a sub-set of deployed capabilities from March 2015 to August 2015. This assessment evaluated seven of the program's KPPs, and the program failed to meet one of them—the reliability KPP—because of the frequency of system failures. In another example, the Federal Emergency Management Agency's (FEMA) Logistics Supply Chain Management System (LSCMS) program conducted operational testing throughout calendar year 2013, but DOT&E concluded that this testing was not adequate to determine whether the program had met its KPPs. This program subsequently met two of its seven KPPs through a performance test of a software release, and plans to conduct additional operational testing in March 2018 once it completes development of additional capabilities.

The KPPs are not ready to be tested because the required technology or system capabilities are not yet available, or because capabilities are being deployed and tested incrementally. For example, the USCG's MRS program cannot demonstrate the C-27J's seven KPPs until it installs an entire mission system on the aircraft. Additionally, the program will not be able to demonstrate two of these KPPs—the detection and interoperability KPPs—identified in the joint operational requirements document with CBP for the C-27J aircraft, because the mission system technology needed is not yet commercially available for this aircraft. In April 2016, the USCG received approval to defer these capabilities until the technology required to meet the detection KPP becomes commercially available. DHS has also directed the program to revisit requirements and, if appropriate, to begin updating them prior to the program's next acquisition milestone. In another example, NPPD's National Cybersecurity Protection System (NCPS) officials told us that the program has not yet met the five KPPs related to its Block 2.2 capabilities because these capabilities are still early in the development phase and are not yet ready to be tested. The NCPS program has met a majority of its KPPs for capabilities that have previously been deployed and tested.

The KPP is poorly defined. For example, the USCG's NSC program indicated challenges in meeting three of its KPPs related to cutter-boat deployment in rough seas because the USCG and its OTA have different interpretations of the cutter-boat requirements. In January 2016, we recommended that the NSC program office clarify the KPPs for the cutter boats; the USCG concurred. As of January 2017, the USCG was working on a resolution.

While we have previously found that DHS's acquisition policy is sound, at a more granular level we found an area for improvement. The policy requires programs to obtain department-level approval for initial APBs—including KPPs, schedules, and cost goals—at ADE 2A, that is, prior to gaining full knowledge about the program's technical requirements. This sequence is not consistent with acquisition best practices. GAO's acquisition best practices state that programs should pursue a knowledge-based acquisition approach that ensures a program's needs are matched with available resources—such as technical and engineering knowledge, time, and funding—prior to starting product development.
While these initial APBs include KPPs that identify operational requirements defined by the user prior to ADE 2A, programs have not yet decomposed those KPPs into specific technical requirements or conducted key engineering reviews to develop critical knowledge about whether the proposed solution meets the user's needs. That work occurs after the baseline is approved and programs are officially initiated. Key engineering reviews that should be conducted prior to establishing program baselines include the following:

System definition review: establishes a functional baseline, which identifies what the system is to perform.

Preliminary design review: assesses the preliminary design of the system and determines whether the program is prepared to start detailed design and test development.

A third review, called the critical design review, is appropriately conducted after program initiation, which is consistent with acquisition best practices. This is a key engineering review that demonstrates whether the system's final design is sufficiently complete to begin production. Figure 7 compares GAO's acquisition best practices to DHS's acquisition and systems engineering life-cycle phases. As shown, the system definition and preliminary design reviews fall to the left of program initiation according to best practices, but to the right of program initiation within DHS's acquisition life cycle.

By initiating programs without a well-developed understanding of system needs, DHS increases the likelihood that programs will change their user-defined KPPs, costs, or schedules after establishing their baselines. Such changes can be viewed as a natural occurrence as requirements are better defined. For example, officials from NPPD's HART program told us that the cost and schedule goals in the program's approved APB may change once they award the initial contract and receive the contractor's technical solution for meeting the program's already-established KPPs. In addition, we found in March 2016 that several programs had changed KPPs at least once since DHS's current acquisition policy went into effect in 2008, and that KPP changes were associated with schedule slips and cost growth. We also found that 9 of the 12 programs that changed KPPs attributed those changes to poorly defined or unattainable requirements, and officials from 12 programs said that they may change KPPs in the future. Since March 2016, at least one additional program—TSA's TIM program—made changes to its KPPs, and we anticipate that more programs will need to make changes to KPPs in the future to better reflect system requirements. For example, officials from ICE's TECS Modernization program said that they will not be able to demonstrate the program's concurrent user KPP because the minimum goal far exceeds the current number of system users.

DHS leadership previously acknowledged that the department has had difficulty defining KPPs, and senior DHS officials told us in December 2016 that they are continuing efforts to help programs define KPPs more effectively. However, officials also noted that there is a lack of systems engineering capability within the department, which is an ongoing challenge. Officials further agreed there is room to refine the acquisition processes and told us that they are working with S&T to better align systems engineering efforts with the acquisition life cycle.
For example, DHS officials said that they are working to adapt the acquisition processes for agile development—the department's preferred development approach for IT programs—which is currently being piloted by some DHS major acquisition programs. While DHS's efforts may allow for increased S&T involvement in the acquisition process, placing requirements definition and key engineering reviews earlier in the acquisition life cycle could yield better outcomes regardless of the development approach programs pursue. Without also matching programs' technical requirements and resources at the time KPPs are defined, DHS increases the risk that programs will continue to experience execution challenges, including cost growth, schedule slips, and inconsistent performance, as requirements change after programs are initiated. By accumulating more knowledge before programs establish baselines and begin development, per acquisition best practices, DHS can place major programs in a better position to succeed, which ultimately means an increased likelihood of end users obtaining the capabilities they need within expected costs and time frames.

In 2016, DHS made positive strides to strengthen its management of major acquisition programs. For example, DHS established new processes for assessing programs' staffing needs and monitoring major acquisition program progress. These processes are promising, but it is too soon to tell whether they will contribute to positive outcomes because DHS is still determining how to implement them and use them to support more forward-looking planning decisions. In addition, DHS revised the instruction for implementing the department's acquisition policy to reflect changes made since the previous version was issued—some of which reflect past GAO recommendations. The new instruction also includes changes to the documentation approvals needed before programs advance through the acquisition life cycle and to DHS's breach policy. Our analysis indicates that DHS made progress in implementing these documentation requirements more consistently in 2016 than we have found in the past. For example, DHS leadership generally approved all the required key acquisition documentation prior to approving programs to proceed through the acquisition process. However, DHS leadership could better document its rationale for decisions made at ADEs to increase the insight that the department and external stakeholders have into acquisition management decisions. Further, we found that no programs in our review had reported performance breaches and that DHS's policy does not clearly define at what point not meeting KPPs constitutes a performance breach. Without insight into potential performance issues identified through breaches, DHS is at risk of fielding capabilities that do not work as intended.

DHS has established new processes that could improve acquisition management by addressing longstanding issues related to acquisition workforce shortfalls and program execution challenges we have identified in the past. Specifically, DHS revised its process for assessing major acquisition program staffing needs and established a process to monitor major acquisition program progress across a variety of factors and categories DHS deemed important for successful program execution. However, it is too early to tell what impact these efforts will have on program outcomes because DHS is still developing implementation plans for these new processes.
We have highlighted DHS acquisition management issues in our high-risk updates since 2005—most recently in February 2017—and identified five outcomes that could strengthen DHS's management of its acquisitions. One of these outcomes is that DHS assess and address whether sufficient numbers of trained acquisition personnel are in place at the department and component levels. In addition, we previously found that staffing shortfalls can affect a program's ability to execute its plans and may introduce risks leading to schedule slips, cost growth, or both in the future. For example, in March 2016, we found that staffing shortfalls limited the ability of NPPD's NCPS program to perform testing, oversee contractors, and manage finances.

In response, DHS's Office of Program Accountability and Risk Management (PARM) initiated a process for assessing the staffing needs of its major acquisition programs in fiscal year 2014 and conducted a second assessment in fiscal year 2015. PARM collected key information such as the total staffing needed—including positions identified as critical—actual staffing levels, and mitigation strategies to fill any vacancies, among other items. However, these assessments collected retrospective information on whether programs were sufficiently staffed in those fiscal years and did not capture current or future staffing needs. In addition, some of the fiscal year 2015 staffing assessments were not approved until January 2017, limiting their usefulness given that the data were over a year old.

In June 2016, the department began tracking only critical position vacancies rather than assessing all acquisition-related positions. PARM officials said they made this change to capture staffing data in a timely manner, document progress in filling key staffing gaps, and help the department mitigate remaining gaps. Consequently, some programs were assessed as being sufficiently staffed because they had few or no critical position vacancies, even though these programs identified shortfalls in their total staffing needs. For example, NPPD's CDM program reported a total staffing need of 51 full-time positions, 19 of which were considered critical. NPPD also reported that CDM had only 1 vacancy among its 19 critical positions. However, CDM officials told us they had only 31 of the 51 staff they needed in total, which represents a 39 percent shortfall overall (illustrated in the sketch below). We present more information on programs' staffing profiles in the individual program assessments in appendix I.

After we raised questions in October 2016 about whether this approach would limit the department's insight into programs' total staffing needs, PARM revisited its decision to track only critical position vacancies and revised its approach for future staffing assessments. In December 2016, DHS approved a new staffing instruction that will require major acquisition programs to submit and annually update staffing plans identifying total staffing needs, but also to track critical position vacancies quarterly, among other things. According to PARM officials, the agency is developing guidance and templates intended to bring clarity to the new policy and limit potential inconsistencies in interpretation across the programs, such as what positions programs determine to be critical. In addition, the new staffing instruction requires programs to develop a multi-year staffing plan that identifies future staffing needs.
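The difference between the two measures discussed above, tracking only critical-position vacancies versus assessing a program's total staffing need, can be shown with a short sketch. This is an illustrative Python example, not DHS's assessment methodology; the function names are our own, and the figures mirror the CDM example reported above.

```python
# Illustrative contrast between the two staffing measures discussed above.
# Figures mirror the CDM example; function names are hypothetical.

def critical_vacancy_rate(critical_needed, critical_filled):
    """Share of critical positions that are vacant."""
    return (critical_needed - critical_filled) / critical_needed

def total_shortfall_rate(total_needed, total_onboard):
    """Shortfall against the program's total staffing need."""
    return (total_needed - total_onboard) / total_needed

# CDM reported 19 critical positions with 1 vacancy, but had only 31 of
# the 51 total staff it needed.
print(f"Critical-position vacancy rate: {critical_vacancy_rate(19, 18):.0%}")  # 5%
print(f"Total staffing shortfall:       {total_shortfall_rate(51, 31):.0%}")   # 39%
```

As the sketch shows, a program can appear nearly fully staffed on the critical-vacancy measure while still carrying a substantial shortfall against its total need, which is why the revised instruction collects both.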
PARM officials told us that they plan to pilot the new staffing assessment process in 2017 and hope to complete the first assessment in time to inform the department's fiscal year 2019 budget request. If implemented as intended, the new staffing assessment process would improve PARM's insight into major acquisition program staffing needs and assist the department in developing mitigation strategies to address current staffing gaps and in planning for future staffing needs.

In October 2016, DHS established the Acquisition Program Health Assessment (APHA), a process intended to monitor major acquisition programs' progress. PARM initiated efforts to develop the APHA in February 2015 after DHS's Deputy Under Secretary for Management (USM) directed it to lead development of a holistic, objective, repeatable process for evaluating the department's major acquisition programs and reducing duplicative reports. PARM established a working group with representatives from all ARB stakeholder organizations—such as the CFO, Chief Information Officer (CIO), Chief Procurement Officer, DOT&E, and the Joint Requirements Council (JRC)—and each of DHS's operational components; this working group developed a weighted assessment methodology. The APHA assessment methodology consists of a number of factors within several categories, such as program management, financial management, contract management, performance, and human capital, that DHS deemed important for successful program execution. Each factor was defined and is rated by the stakeholder with primary responsibility for that area within the department. For example, DOT&E defines and rates programs on the factor related to operational testing, whereas the CFO defines and rates programs on the factor related to LCCEs. The factor ratings are then used to develop category ratings, which, in turn, feed into a program's single overall APHA score.

DHS is still working on the APHA's implementation, and it will take time to determine whether the process will be an effective acquisition management tool. According to PARM officials, they plan to use the APHA results to inform DHS leadership about major acquisition programs through monthly briefings and quarterly reports, as well as reports to external stakeholders. For example, the APHA will inform a section of the department's annual Comprehensive Acquisition Status Report to the Senate and House appropriations committees starting in fiscal year 2017 and will provide the score that the DHS CIO reports for each major acquisition program on the Office of Management and Budget's IT Dashboard. However, senior DHS officials noted that while the department has made progress in developing the APHA, they still have work to do to refine and strengthen the process, such as determining what constitutes a good APHA score and turning it into a leading, rather than lagging, indicator of program health. DHS officials have shared information on the department's efforts to establish the APHA process with us, and we will continue to review DHS's efforts to evolve and implement the APHA process moving forward.

In March 2016, DHS revised the acquisition policy instruction for implementing MD-102, the department's acquisition management directive, to provide guidance for successful program planning, management, and execution. Some of the revisions reflect changes DHS previously made in response to past GAO recommendations, and the new instruction also includes changes to the documentation that programs are required to get approved before advancing through the acquisition life cycle.
The revisions also set forth the process programs must follow if they experience a breach. DHS has made progress in implementing these documentation requirements more consistently than we have found in the past, but DHS leadership could better document its rationale for key acquisition decisions to increase department and external stakeholder insight into acquisition management decisions.

Over the past 3 years, DHS has made changes that reflect prior GAO recommendations to clarify roles and responsibilities and provide better oversight, which are now included in its revised acquisition policy instruction. For example:

Clarifying roles and responsibilities. In March 2015, we found that DHS's acquisition policy did not clearly differentiate the roles and responsibilities of DHS's PARM and the Enterprise Business Management Office in the Office of the CIO, which has the primary responsibility for ensuring IT investments align with DHS's missions and objectives. We recommended that DHS clarify the roles and responsibilities of PARM and other DHS oversight organizations to improve coordination, limit overlap of responsibilities, and reduce duplicative efforts at the component level. In April 2015, DHS's Acting Deputy USM issued an acquisition decision memorandum to clarify the respective acquisition responsibilities of PARM, the Office of the CIO, and other members of DHS's ARB, and in March 2016, DHS revised its policy instruction to reflect these changes.

Re-establishing the JRC. In November 2008, we found that DHS had not effectively implemented or adhered to its review process for major acquisitions and recommended that DHS reinstate the JRC to review and approve acquisition requirements and assess potential duplication of effort. In June 2014, the Secretary of Homeland Security directed the creation of a joint requirements process, led by a component-composed and component-chaired JRC, and in March 2016, DHS revised its policy instruction to reflect the addition of the JRC as an acquisition oversight body. Among other responsibilities, the JRC is to provide requirements-related advice and validate key acquisition documentation to prioritize requirements and inform DHS investment decisions, such as the joint operational requirements document between the USCG and CBP for a common aircraft mission system. In October 2016, we found that the re-establishment of the JRC after many years without such an active body is a positive demonstration of senior-level commitment to improving the DHS-wide capabilities and requirements processes and has the potential to help DHS reduce duplication and make cost-effective investments across its portfolio over time. However, the JRC is still developing a process to prioritize requirements to inform budget decisions.

DHS's March 2016 revision to the acquisition policy instruction also included changes to the acquisition documentation required to inform ADEs, but DHS leadership did not always document its rationale for key acquisition decisions. In September 2012, we found that, in most instances, DHS leadership had allowed programs to proceed with acquisition activities without obtaining department-level approval of key acquisition documentation—such as APBs, LCCEs, and operational requirements documents—as required by its acquisition policy.
As a result, we recommended DHS ensure all programs obtain department-level approval for key acquisition documentation before approving their movement through the acquisition life cycle, to mitigate risks of execution challenges such as cost growth and schedule slips. DHS concurred with this recommendation, and we have continued to monitor the agency's progress in addressing it through our annual assessments and high-risk updates. Key changes to the acquisition documentation required to inform ADEs include the following:

ADE 2A: DHS now requires programs to obtain department-level approval for program study plans for performing analysis of alternatives and to receive technical assessments conducted by S&T and the CIO at this decision point.

ADE 2C: DHS now requires programs to update and obtain department-level approval for several documents at ADE 2C, including, but not limited to, current APBs, LCCEs, and TEMPs. The previous instruction had no formal documentation requirements for this decision point.

We reviewed acquisition decision memorandums—the department's official repository for key acquisition management decisions—issued in calendar year 2016 and identified 14 major acquisition programs that received ADE approval in 2016. Half of these programs had ADEs before DHS revised the acquisition policy instruction in March 2016, while the other half had ADEs after March 2016. We reviewed the documentation for each program against the requirements in place at the time of its ADE and found that DHS leadership had generally approved the required key acquisition documentation—including APBs, LCCEs, and operational requirements documents—for all 14 programs according to those requirements. However, DHS had not approved some of the required documentation for 4 programs—CBP's Tactical Communications (TACCOM) Modernization and UH-60, NPPD's HART, and TSA's TIM.

CBP's TACCOM program did not have a department-approved Acquisition Plan when leadership granted it ADE 3 approval in January 2016. CBP officials told us that the Acquisition Plan did not complete the approval process prior to the ADE 3 because of conflicting guidance delivered to the program regarding the content of the plan. However, these officials stated that the program subsequently updated the Acquisition Plan and submitted it for department approval, which they expect to receive by early calendar year 2017.

CBP's UH-60 program did not have a department-approved Integrated Logistics Support Plan, TEMP, or Systems Engineering Life Cycle Tailoring Plan when DHS leadership granted it ADE 2B approval in January 2016. DHS leadership required the program to update its Integrated Logistics Support Plan and, as of December 2016, program officials said they had submitted a draft for signature. Program officials also told us that DOT&E considered a TEMP unnecessary because the program completed operational testing in 2012 and DHS leadership only required that the program conduct minimal flight checks on future aircraft. Program officials acknowledged they had no Systems Engineering Life Cycle Tailoring Plan for the UH-60 program, and noted that the systems engineering reviews for the reconfigured aircraft are being performed by the U.S. Army.

NPPD's HART program received ADE 2A approval in May 2016, but did not receive DHS approval for all of the documentation newly required under the March 2016 acquisition policy instruction revision.
Specifically, the program received a technical assessment from S&T but not from DHS's CIO, as was required. Program officials noted that they were not aware of the requirement for a CIO technical assessment, but that DHS's CIO did review HART's documentation and is a part of the program's source selection evaluation team.

TSA's TIM program received a combined ADE 2A/2B approval in October 2016, but did not receive approval for the Analysis of Alternatives Study Plan, as required. However, TIM did receive DHS approval of its new technical approach, which was developed in close collaboration with DHS's CIO and subject matter experts from S&T, among other organizations, prior to its ADE approval. A senior DHS official stated that TIM's new technical approach satisfied the Analysis of Alternatives Study Plan requirement based on the activities completed. In all four cases, there is no acquisition decision memorandum granting these programs approval to deviate from the documentation requirements, as outlined in DHS policy.

While DHS made progress implementing its documentation requirements in 2016, DHS leadership made some decisions that were inconsistent with DHS's acquisition policy for programs that did have all the required documentation approved. For example, DHS leadership granted CBP's Land Border Integration (LBI) and NII programs ADE 3 approval while also requiring CBP to identify a final year for each program. As a result, DHS approved the programs to transition into sustainment based on approved LCCEs that did not account for each program's full costs, which is inconsistent with both the current and past versions of DHS's acquisition policy instruction. Senior DHS officials said that they had the knowledge to support ADE 3 approval for the programs because the approved LCCEs for both LBI and NII covered at least one cycle of technology replacement past each program's FOC date and because they had discussed plans for follow-on capabilities at each program's ADE. Officials from both programs said they will update their programs' LCCEs in 2017 to reflect all costs through each program's identified end year. Senior DHS officials acknowledged that the department could better document these decisions and leadership's rationale in acquisition decision memorandums.

In other cases, we found that DHS leadership took steps to ensure programs complied with its acquisition policy. For example, CBP's ACE program requested permission to waive the requirement to complete all operational testing prior to FOC, but DHS leadership denied that request. In addition, DHS leadership withheld ADE 1 approval for the USCG's Motor Lifeboat program until the program's mission needs documentation received JRC validation and was submitted to DHS for approval, as required.

Federal internal control standards state that, to achieve objectives and respond to risks, agencies should clearly document and communicate significant events in a manner that allows for effective oversight and examination. DHS's acquisition policy instruction indicates that acquisition decision memorandums document acquisition decisions, direction, guidance, and any assigned actions. However, the policy instruction does not specify that leadership's rationale for those actions be included in the memorandums. DHS leadership's decisions to approve programs to proceed through the acquisition process without meeting all acquisition policy instruction requirements may be reasonable in any given case.
For example, it can take months to obtain department-level approval for key acquisition documentation, and it may take time for DHS to build the capacity to conduct the new S&T and CIO assessments and implement the policy across the department. However, unless the rationale for these decisions is documented and communicated through acquisition decision memorandums, effective oversight of, and insight into, approval decisions for internal and external stakeholders is limited.

DHS's March 2016 revised acquisition policy instruction also includes changes to the department's breach policy, which applies to programs that fail to meet any cost, schedule, or performance threshold in their approved APBs. However, the policy instruction does not specifically discuss how to determine whether a performance breach has occurred, and we found that no programs had reported a performance breach.

Among other changes, DHS's revision requires programs to notify department- and component-level leadership via formal memorandum within 30 calendar days of an identified breach (cost, schedule, or performance). The revision also removed the requirement that programs submit breach remediation plans to DHS leadership within 30 days of this notification and take certain corrective actions—such as returning to their APB parameters, re-baselining, or having a DHS-led program review that results in recommendations for a revised baseline—within 90 days of the breach occurrence. Under the revised instruction, programs are now directed to work with the Component Acquisition Executive to determine an appropriate timeframe in which to complete remediation planning after submitting a breach notification, and to take corrective actions within the timeframe established by DHS as documented in an acquisition decision memorandum approving the program's remediation plan. In general, programs continue to execute planned activities while conducting breach remediation planning efforts, unless otherwise directed by DHS leadership.

In calendar year 2016, 10 major acquisition programs—including 6 that we reviewed in more depth—submitted schedule or cost breach notification memorandums to component and DHS leadership. Three of the programs declared the breaches before DHS revised the acquisition policy instruction, while the rest declared breaches afterwards. These programs took varying lengths of time to submit remediation plans, and DHS approved the remediation plans for all of them. Table 6 depicts the status (as of February 2017) of the 10 programs that reported a cost or schedule breach in 2016.

As a part of its review process, DHS requested that at least two programs make revisions to their remediation plans before they were approved. For example, DHS issued an acquisition decision memorandum in December 2016 disapproving the USCIS Transformation program's remediation plan and directing that USCIS stop planning and development of new capabilities and update its breach remediation plan, among other things. DHS subsequently approved a revised breach remediation plan for the Transformation program in February 2017. In addition, TSA submitted three versions of its combined breach remediation plan for both PSP and the Security Technology Integrated Program over the span of about 5 months before DHS leadership ultimately approved the final plan in January 2017. DHS issued an acquisition decision memorandum in July 2016 directing TSA to make significant changes to its initial breach remediation plan submitted in May 2016.
PARM officials confirmed they received TSA's revised breach remediation plan for these programs in August 2016, but requested additional changes, which were reflected in a final version submitted in October 2016. According to these officials, the requested changes were made during a meeting with the program managers and were not documented in an acquisition decision memorandum. They added that PARM is in communication with the component and program as they develop their remediation plans, and also updates DHS leadership on programs' breach status on a monthly basis; however, officials noted that the communication between DHS and the program is informal and not always documented through acquisition decision memorandums unless DHS leadership has significant concerns about the breach. We will continue to monitor DHS's implementation of its updated breach policy, including documentation of the department's communication with programs during their breach remediation planning efforts.

We also found that the revised acquisition policy instruction is not clear as to how programs are to determine when a performance breach has occurred. No program in our review had reported a performance breach despite 14 programs not meeting KPPs, including 3 programs to which DHS had granted ADE 3 approval. Some program officials we spoke to said that they did not report a performance breach to DHS headquarters because the programs planned to meet all KPPs during future test events. Senior DHS officials told us programs typically experience a cost or schedule breach prior to a performance breach, and that they consider the performance breach policy to apply towards the end of a program's acquisition life cycle, such as after it begins operational testing. In addition, senior DHS officials said they frequently discuss program performance at ARBs and prior to granting programs ADE 3 approval. However, DHS's revised acquisition policy instruction states that the breach policy applies from the time a program's initial APB is approved at ADE 2A through FOC, and it does not specify at what point during this timeframe programs should have met KPPs.

Moreover, while some programs may experience schedule or cost breaches earlier in the acquisition life cycle, these breaches, or the actions programs take to remediate them, may not be related to performance issues. For example, CBP's IFT program experienced a schedule breach in November 2012 due to delays in the initial contract award process and anticipated funding shortfalls. DHS leadership removed IFT from breach status in December 2015—one month after the program's OTA conducted a limited user test on equipment deployed on the Arizona border. Based on the test data, the OTA was unable to determine if the system met its identification range KPP. The program has not declared a performance breach because the IFT program manager did not concur with several of the test results due to testing limitations. DHS granted the program ADE 3 approval in 2013, prior to this testing, which means the program has the authority to continue fielding equipment that may not work as intended.

In June 2014, we found that the USCG's acquisition guidance did not clearly specify the conditions—particularly the timing—that would constitute a performance breach, and that DHS approved two USCG programs—FRC and MRS's HC-144A—to enter full-rate production without having demonstrated all of their KPPs.
We recommended that the USCG revise its acquisition guidance to specify when performance standards should be met and to clarify the performance data used to determine whether a performance breach has occurred. The USCG concurred with our recommendations and updated its component-level policy in May 2015 to define a performance breach occurrence, specify when performance standards should be met (such as in formal follow-on operational testing), and outline the actions a program must take following a breach to resolve the performance shortfall. However, DHS's department-level policy does not contain similar guidance.

Until DHS clarifies its acquisition policy instruction, it may be difficult for programs to determine when, or by what measure, a breach of their KPPs has occurred and, therefore, when to notify DHS of the occurrence. By allowing programs to continue re-testing capabilities that have failed to meet KPPs without submitting performance breach notifications and remediation plans, DHS lacks the insight into the root causes of system failures needed to address performance issues that may also affect a program's schedule and cost estimates moving forward. In addition, programs could potentially continue to field capabilities that do not fully meet KPPs, or test and re-test indefinitely in an attempt to meet a KPP—scenarios in which end users do not get the capabilities they need, or do not get them in the timeframes in which they need them.

Since we began reviewing DHS's portfolio of major acquisitions in 2015, the department has strengthened its ability to track the progress of its major acquisitions. Significantly, this year, for the first time, all programs in our review had approved baselines against which DHS can measure program performance—an effort that has taken almost 8 years since DHS first established this requirement. Nevertheless, DHS continues to face challenges in managing its portfolio, and this progress does not negate the fact that many programs continue to cost more, take longer than expected, or struggle to meet moving performance targets. Improving information for DHS leadership to ensure that a program's needs are matched with available resources—performance and technical requirements, time, and funding—prior to approving programs to begin development could reduce the risk that programs will continue to face execution challenges, put programs in a better position to succeed, and help ensure the department is making wise investment decisions with its limited resources.

DHS has made a concerted effort to refine its policies to reflect a more disciplined management approach and to adhere more closely to its acquisition policy. This policy also affords acquisition decision makers a certain amount of flexibility. As DHS leadership exercises this flexibility in its oversight of acquisition programs, however, it is important that visibility is maintained into whether programs are meeting established requirements, that reasonable deviations are well documented, and that feedback directly affecting a program's ability to be successful—such as remediating a breach of its goals—is consistently communicated to programs through formal channels. Doing so will enable better management of DHS's major acquisition portfolio as a whole by retaining organizational knowledge and providing useful insight for DHS decision makers and external stakeholders.
Additionally, as mature programs continue to fall short of performance goals, it is not clear at what point programs need to acknowledge to DHS that performance problems constitute a breach. As a result, DHS may be missing opportunities for oversight and correction of performance issues, and is at risk of fielding systems that may not work as intended.

To mitigate the risk of poor acquisition outcomes and strengthen the department's investment decisions, we recommend the Secretary of Homeland Security direct the Under Secretary for Management to take the following three actions. Update the acquisition policy to:

Require that major acquisition programs' technical requirements are well defined and key technical reviews are conducted prior to approving programs to initiate product development and establishing APBs, in accordance with acquisition best practices.

Specify that acquisition decision memorandums clearly document the rationale of decisions made by DHS leadership, such as, but not limited to, the reasons for allowing programs to deviate from the requirement to obtain department approval for certain documents at ADEs and the results of considerations or trade-offs.

Specify at what point minimum standards for KPPs should be met, and clarify the performance data that should be used to assess whether or not a performance breach has occurred.

We provided a draft of this product to DHS for review and comment. In its written comments, reproduced in appendix III, DHS concurred with all three of our recommendations. In response to our first recommendation, DHS provided an estimated completion date for a study on how to better align the department's systems engineering and acquisition life cycles with GAO's acquisition best practices. In response to our other two recommendations, DHS requested that we consider them closed based on recent actions taken. Specifically, the department stated that it has begun expanding the information documented in programs' acquisition decision memorandums to include enhanced background information and plans to include the status of acquisition documentation in the future. In addition, the department has updated the handbook for PARM's component leads to include guidance on (1) including the information noted above when writing acquisition decision memorandums and (2) determining programs to be in performance breach if they have not met a KPP prior to ADE 3. While these are positive steps toward addressing the intent of our recommendations, we continue to believe that DHS should update its acquisition policy to ensure that these changes are clearly communicated and implemented consistently throughout the department. DHS also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This appendix presents individual assessments for each of the 26 programs we reviewed. Each of these assessments is two pages long and presents information current as of January 2017.
They include several standard elements, including an image provided by the program office, a brief program description, and a summary of the program's progress in meeting its key performance parameters. Each assessment also includes the following four figures:

Projected Funding vs. Estimated Costs. This figure generally compares the funding plan presented in the Future Years Homeland Security Program report to Congress for fiscal years 2017-2021 to the program's current annual total cost estimate based on its department-approved life-cycle cost estimate. We use this funding plan because the data are approved by the Department of Homeland Security (DHS) and the Office of Management and Budget, and were submitted to Congress to inform the fiscal year 2017 budget process. As a result, the data do not account for other potential funding sources, such as carryover, cost-sharing agreements with other organizations, or fees. In addition, the program's current annual cost estimate accounts for total costs attributable to the program, regardless of funding source.

Program Office Staffing Profile. This figure is generally based on the staffing assessments conducted by the Office of Program Accountability and Risk Management, which identify the number of staff a program needs (measured in full time equivalents), including how many positions are considered critical (measured in the number of people), and how many staff the program actually has. This figure and any discussion of programs' efforts to address identified staffing gaps or critical vacancies do not reflect the January 2017 presidential memorandum freezing the hiring of federal civilian employees.

Schedule Changes over Time. This figure consists of two timelines. The first timeline is generally based on the initial Acquisition Program Baseline (APB) DHS leadership approved after the department's current acquisition policy went into effect in November 2008. Because these APBs were approved at different times, the first as-of date varies across programs. The second timeline identifies when that program expected to reach its major milestones as of January 2017. The second timeline also identifies any new major milestones that were introduced after the initial APB was approved, such as the date a new increment was scheduled to achieve initial operational capability, or the date the program was re-baselined.

Cost Estimate Changes over Time. This figure generally compares the program's cost estimate in the initial APB approved after DHS's current acquisition policy went into effect to the program's expected costs as of January 2017. This figure also identifies how much funding had been appropriated to the program through fiscal year 2016 and how it compares to future funding needs.

These four figures are generally based on DHS headquarters-approved documentation and data, as identified above. However, in some cases, the figures are based on data the program office provided when it commented on a draft of the assessment if, for example, the data were more accurate or current. Each program assessment also consists of a number of other sections depending on issues specific to each program. These sections may include: Program Governance, Acquisition Strategy, Program Execution, Test Activities, and Other Issues. Lastly, each program's assessment also presents comments provided by the program office and identifies whether the program provided technical comments, and presents GAO's response to these comments, as necessary.
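Where an assessment reports a projected funding gap, the underlying comparison is simple arithmetic: subtract each fiscal year's planned funding from that year's estimated cost, then sum the differences across the 5-year window. A minimal sketch of that calculation, using hypothetical figures rather than any program's actual data:

```python
# Minimal sketch of the "Projected Funding vs. Estimated Costs" comparison.
# All dollar figures below are hypothetical, not any program's actual data.
funding_plan = {2017: 140, 2018: 150, 2019: 155, 2020: 160, 2021: 165}   # $M per year
cost_estimate = {2017: 155, 2018: 160, 2019: 170, 2020: 170, 2021: 170}  # $M per year

# Yearly shortfall; a negative value would indicate a funding surplus that year.
gaps = {fy: cost_estimate[fy] - funding_plan[fy] for fy in funding_plan}
total_gap = sum(gaps.values())

for fy, gap in gaps.items():
    print(f"FY{fy}: gap of ${gap} million")
print(f"Total projected FY2017-2021 gap: ${total_gap} million")
```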
Automated Commercial Environment (ACE)
Customs and Border Protection (CBP)

The ACE program is developing software that will electronically collect and process information submitted by the international trade community. ACE is intended to provide private and public sector stakeholders access to this information, and enhance the government's ability to determine whether cargo should be admitted into the United States. The ACE program aims to increase the efficiency of operations at U.S. ports by eliminating manual and duplicative trade processes, and enabling faster decision making. CBP deployed ACE's initial release in February 2003, but struggled to develop capability for several years. Department of Homeland Security (DHS) leadership directed CBP to halt new development in August 2010, and did not authorize CBP to restart development until it re-baselined the program in August 2013. GAO previously reported on CBP's ACE program in March 2016 (GAO-16-338SP) and has an ongoing review to assess ACE's implementation.

Staff needed: 196 full time equivalents (FTE)

CBP officials previously told GAO that three of ACE's four key performance parameters (KPP) were tested and successfully demonstrated for deployed functionality in May 2015, including the KPP for system availability. However, CBP officials subsequently reported that the ACE program did not meet its availability KPP in June 2016 when ACE became mandatory for all manifest processing and system traffic increased. Officials expect the availability KPP to temporarily decline again in January 2017 when ACE is fully deployed. ACE will not be able to demonstrate that it can meet its final KPP for full system performance in an operational environment until the program completes testing, which is now planned for April 2017.

The program faces a projected funding gap, driven in part by the cost of decommissioning the legacy program that ACE is replacing. In December 2016, CBP reported that the program is able to offset the projected gap in fiscal year 2017 with carryover funds and that both CBP and DHS had realigned additional funding to address the projected gap in the remaining years. In August 2016, CBP officials told GAO that the ACE program is developing a fee-for-service concept that could potentially be used for system enhancements, among other things.

When DHS leadership re-baselined ACE's cost, schedule, and performance parameters in August 2013, the program adopted an agile software development methodology to accelerate software creation and increase flexibility in the development process. ACE's agile method is defined by a series of 2-week "sprints," during which software is designed, developed, integrated, and tested. Six ACE sprints constitute a program increment. The program currently consists of 13 increments, which are to be completed over a 3-year period. At the end of each sprint, software developers demonstrate new capabilities to ACE end users to obtain feedback and confirm that the new capabilities meet requirements. The ACE program office serves as the system integrator, overseeing 15 agile development teams. Because the agile teams demonstrate capabilities after each sprint, ACE program officials said they have opportunities to closely monitor contractor performance and mitigate risks through real-time management decisions.
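As a consistency check on the cadence described above (2-week sprints, six sprints per increment, 13 increments), the arithmetic works out to almost exactly the stated 3-year period. The cadence figures come from the assessment text; the code itself is only an illustration:

```python
# Consistency check on ACE's reported agile cadence.
SPRINT_WEEKS = 2          # each sprint is 2 weeks
SPRINTS_PER_INCREMENT = 6  # six sprints constitute a program increment
INCREMENTS = 13            # current program plan

total_weeks = SPRINT_WEEKS * SPRINTS_PER_INCREMENT * INCREMENTS
print(f"{total_weeks} weeks = about {total_weeks / 52:.1f} years")  # 156 weeks = 3.0 years
```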
In April 2016, DHS's Director, Office of Test and Evaluation (DOT&E) approved a new Test and Evaluation Master Plan (TEMP) that reflected a more flexible testing approach. CBP officials previously told GAO that they determined it would be more feasible to test ACE's KPPs in batches as capabilities were deployed, rather than all at once as directed in its initial TEMP. The program conducted its first operational test in June 2015, but delayed a subsequent operational test from April 2016 to July 2016 to allow all stakeholders more time to transition to ACE prior to testing. In December 2016, program officials said they plan to conduct follow-on testing for this event in February and March 2017—after ACE's final deployment in January 2017—and do not anticipate receiving final test results until May 2017.

In November 2016, DHS's Under Secretary for Management (USM) re-baselined the ACE program, removing it from breach status after the program experienced schedule slips and cost growth. In June 2016, CBP officials notified DHS leadership that the program would not complete several key events as planned, and that its costs would increase beyond its approved thresholds. The program reported that its external stakeholders raised concerns about meeting the mandatory transition date to ACE. In response, the program delayed completion of two key milestones: (1) decommissioning of the legacy entry system slipped from March 2016 to July 2016 and (2) development of ACE functionality slipped from May 2016 to September 2016. According to CBP officials, ACE functionality will be fully developed and in use by January 2017. The delays affected subsequent milestones, including completion of operational test and evaluation, which slipped from September 2016 to April 2017, and full operational capability (FOC), which slipped from November 2016 to September 2017.

Despite these delays, CBP's initial re-baseline draft did not delay the program's Acquisition Decision Event (ADE) 3 from its initial date of November 2016, which could have allowed the program to transition into sustainment without test and evaluation results confirming successful performance of ACE's full capabilities, as required by DHS's acquisition policy. However, the revised baseline approved by DHS's USM ultimately delayed ADE 3 to June 2017, after operational testing is scheduled to be complete. The program's final operational test event, which is the first time the program will be able to test the functionality and performance of the entire ACE system, was delayed from September 2016 to April 2017. CBP officials said they requested permission to waive the requirement to complete all operational testing prior to FOC. DHS leadership denied the request and, as reflected in the program's new baseline, ACE will complete operational testing prior to FOC.

The ACE program reported one critical vacancy for a Director of Testing and Evaluation. In August 2016, CBP officials told GAO that existing staff have covered the workload for this critical vacancy and the position will no longer be required in the near future.

Program Office Comments

The availability KPP is measured over any continuous 365-day period for a fully deployed system and reported to DHS monthly. Although this KPP dipped slightly below its threshold in June 2016, which is typical after a deployment or mandatory use date, there is no indication that the availability KPP will not be met once the system is fully deployed. Program officials noted that they would declare a performance breach prior to full system deployment if they determine there is no chance of achieving a KPP. Program officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.
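As a rough illustration of the measurement the program office describes, the sketch below computes availability over every continuous 12-month window of monthly data. The monthly figures, the monthly (rather than daily) granularity, and the 99.0 percent threshold are all assumptions for illustration; the report does not state ACE's actual threshold value.

```python
# Hedged illustration: rolling availability over every continuous 12-month
# window, approximating an "any continuous 365-day period" measurement.
# The monthly availability figures and the 99.0% threshold are hypothetical.
monthly_availability = [99.9, 99.8, 99.9, 99.9, 99.7, 98.2,   # dip after a mandatory-use date
                        99.6, 99.8, 99.9, 99.9, 99.8, 99.9,
                        99.9, 99.8]
THRESHOLD = 99.0  # hypothetical KPP threshold, percent
WINDOW = 12       # months, approximating 365 days

for start in range(len(monthly_availability) - WINDOW + 1):
    window = monthly_availability[start:start + WINDOW]
    avg = sum(window) / WINDOW  # simple average; a real KPP might weight by uptime hours
    status = "meets" if avg >= THRESHOLD else "breaches"
    print(f"Months {start + 1}-{start + WINDOW}: {avg:.2f}% ({status} threshold)")
```

A single month's dip (98.2 in the sample data) need not pull a full 365-day window below the threshold, which is consistent with the program office's point that a post-deployment dip does not by itself indicate the KPP will be missed.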
Integrated Fixed Towers (IFT)
Customs and Border Protection (CBP)

The Department of Homeland Security (DHS) established the IFT program in March 2012 to address the capability gap left when the Secretary of Homeland Security canceled the Secure Border Initiative Network (SBInet) program. CBP plans to deliver approximately 53 fixed surveillance tower units equipped with ground surveillance radar, infrared cameras, and communications systems linking the towers to command and control centers. CBP plans to deploy these units across six areas of responsibility (AoR) in Arizona to help the Border Patrol detect and track illegal entries in remote areas. DHS leadership re-baselined the program in December 2015, approximately 3 years after CBP determined the program could not meet its initial schedule goals. GAO previously reported on CBP's IFT program in March 2016 (GAO-16-338SP) and has an ongoing review to assess IFT's deployment along the Arizona border.

Staffing gap: 4 full time equivalents (FTE)

CBP officials previously told GAO that IFT met all 3 of its key performance parameters (KPP) during a July 2015 systems acceptance test in the Nogales AoR. These KPPs establish a minimum acceptable range for detection and identification, and the percentage of time the system must operate as intended. In April 2016, however, testers found that IFT only met 2 of its 3 KPPs and experienced 5 operational deficiencies during a November 2015 limited user test conducted in the same AoR. IFT did not meet its KPP for identification range. IFT and Border Patrol leadership did not concur with several of the test results and reported deficiencies, but DHS's Director, Office of Test and Evaluation (DOT&E) did not formally assess the test results.

In January 2011, the Secretary of Homeland Security canceled CBP's SBInet program in response to cost, schedule, and performance problems involving the acquisition of new surveillance technologies. When CBP initiated the IFT program, it decided to purchase a non-developmental system, and it required that prospective contractors demonstrate their systems prior to CBP awarding a contract. The program awarded the contract to EFW, Inc. in February 2014, but this award was protested. GAO sustained the protest, and CBP had to re-evaluate the offerors' proposals before it again decided to award the contract to EFW, Inc. As a result, EFW, Inc. did not initiate work at the deployment sites until fiscal year 2015. The contract is valued at $145 million and covers the entire system acquisition cost for the six AoRs, as well as 7 years of operations and maintenance.

CBP officials said IFT's estimated costs appear to exceed the program's funding plan over the next 5 years. These officials added that they are in the process of updating IFT's cost estimate to account for changes in the order of AoR deployments, but that the program will carry over nearly $34 million in funding from fiscal year 2016 to help address any remaining gap.

According to CBP officials, the number of IFT units deployed to a single AoR is subject to change based on assessments by the Border Patrol. In April 2013, Border Patrol directed CBP to reduce the number of planned IFT units from 50 to 38 and reduce the AoRs from six to five. In January 2015, Border Patrol directed CBP to increase the AoRs back to six, but instructed CBP to replace 15 existing fixed tower systems deployed under the SBInet program, rather than expanding IFT capabilities to a new AoR as originally planned.
In March 2016, Border Patrol certified to the congressional appropriations committees that 7 of the 53 IFT units deployed to the first AoR in Nogales met the program's operational requirements—a prerequisite for CBP's deployment of additional IFT units. As of January 2017, CBP officials said they had initiated the deployment of 15 additional IFT units to two other AoRs, and planned to deliver the remaining 31 IFT units across the other three AoRs.

The DOT&E-approved Test and Evaluation Master Plan (TEMP) established that CBP would conduct a limited user test to validate operational requirements and determine how the IFT system contributes to CBP's mission. The program's operational test agent (OTA) completed a limited user test at the Nogales AoR in November 2015. This test was delayed 2 months because, according to program officials, CBP delayed systems acceptance so the contractor could address problems identified with IFT's cameras and operator interfaces during a July 2015 test. In April 2016, the OTA identified 5 operational deficiencies and recommended the program take 11 actions to improve IFT system operations. For example, the OTA found that the camera did not provide sufficient video quality and the IFT system did not enable the operator to consistently identify possible entries. In June 2016, IFT's program manager issued a memorandum identifying his concerns with the OTA's report and non-concurrence with 4 of the 5 deficiencies and 1 of the 11 actions. Border Patrol leadership subsequently concurred with the IFT program manager's position. DOT&E reviewed the OTA's test results, but decided not to conduct a formal assessment because DHS leadership had already authorized full deployment. In November 2016, a DOT&E official who observed the limited user test told GAO that he had concerns with how the test data were collected and did not believe the test results were useful in assessing IFT's operational effectiveness, suitability, cybersecurity, or contribution to CBP's mission.

Program officials told GAO that Border Patrol is responsible for 5 of the 11 actions, and that they are working with the contractor to address the remaining 6 actions identified during the limited user test, such as updating the cameras to improve video quality. In January 2017, the IFT program manager said the program plans to conduct further testing and is working closely with DOT&E to determine the scope and timing of future test events.

In January 2016, CBP reported that the IFT program had a staffing gap of four full time equivalents. In August 2016, program officials said they did not have problems executing current IFT installations, but said they will encounter challenges if CBP initiates subsequent AoR deployments simultaneously.

Program Office Comments

CBP officials non-concurred with GAO's assessment that the IFT system failed a KPP in any phase of the program testing. There is no evidence in the limited user test report or other documentation showing IFT did not meet a KPP, specifically the KPP for identification range. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

GAO Response

Based on the limited user test data, the OTA was unable to determine if the system met the identification range KPP.

Land Border Integration (LBI)
Customs and Border Protection (CBP)

The LBI program delivers license plate readers (LPR), radio frequency identification readers, and other technologies to 122 land border locations.
The program’s goal is to facilitate legitimate trade and travel while enhancing border security. LBI systems are intended to enhance the processing of pedestrians, inbound and outbound vehicles at land border crossings, as well as Border Patrol checkpoints. LBI leverages technology delivered through a previous CBP acquisition program designated the Western Hemisphere Travel Initiative (WHTI), which sought to enhance inbound vehicle processing. GAO previously reported on CBP’s LBI program in March 2016 (GAO-16-338SP). Staff needed: 33.55 full time equivalents (FTE) In September 2016, CBP officials reported that the program had met its key performance parameter (KPP) for the checkpoint LPR system. LBI previously relaxed the KPP threshold for the checkpoint LPR system in November 2015 after determining the original requirement was unrealistic and did not account for challenges in the checkpoint operating environment. To achieve the revised KPP, the program also replaced underperforming LPR technology at 28 locations. Program officials previously told GAO that the other LBI systems met their respective KPPs during testing conducted in 2009, 2012, and 2015. DHS’s Under Secretary for Management (USM) authorized CBP to transition from WHTI to LBI in May 2011. At that time, the USM transferred the inbound capabilities of WHTI to LBI, authorized a limited deployment of LBI’s outbound, pedestrian, and checkpoint capabilities, and informed CBP that he planned to delegate acquisition decision authority for future LBI deployments to CBP’s Component Acquisition Executive. However, according to CBP officials, the USM never delegated this authority. Nonetheless, program officials reported that CBP expanded the deployment of LBI’s outbound, pedestrian, and checkpoint capabilities without requesting formal authorization from DHS leadership. CBP proceeded with these deployments even though the USM had not approved an LBI Acquisition Program Baseline (APB) establishing the program’s cost, schedule, and performance parameters. The program previously deferred some planned deployments due to funding constraints. In December 2014, program officials told GAO that LBI’s cost estimates had decreased significantly from the nearly $2 billion estimated in August 2014. CBP officials said they originally planned to execute the program through three phases, which would allow CBP to enhance LBI systems over time, and expand the deployment of certain technologies to additional land border crossings. However, program officials stated that subsequent funding constraints forced CBP to defer some planned LBI deployments. CBP prioritized subsequent deployments by identifying land border crossings that would benefit the most from new technologies. LBI officials also explained they no longer planned to deploy Border Patrol checkpoint systems along the northern border, and have purchased less expensive, less efficient equipment to reduce costs. In January 2016, the USM approved the program’s APB. Later that month, DHS leadership granted the program Acquisition Decision Event 3 approval, and simultaneously required that CBP identify a final year for the program. CBP officials subsequently identified fiscal year 2027 as the program’s end date. However, LBI’s approved life-cycle cost estimate (LCCE) includes planned costs only through fiscal year 2021—6 years short of the program’s final year. 
Nevertheless, DHS approved the program to transition into sustainment without an understanding of the program's full costs, as required by its acquisition policy.

Program Execution

LBI achieved full operational capability (FOC) for its remaining systems in 2016—more than 3 years later than officials originally reported. Program officials previously told GAO that all of LBI's systems had achieved FOC by the end of August 2013. However, in August 2016, program officials reported that none of the systems had achieved FOC until June 2015, when the pedestrian systems reached this milestone. According to program officials, the remaining systems reached FOC over the next 15 months, with the inbound, outbound, and checkpoint systems achieving this milestone in September 2015, June 2016, and September 2016, respectively. LBI's approved APB of January 2016 reflects these changes to the program's FOC dates.

In 2016, CBP continued to monitor the performance of the checkpoint LPR system against its KPP, as directed by DHS's USM. In September 2016, CBP reported this system had met its KPP. The program concluded formal testing prior to January 2016. DHS's Director, Office of Test and Evaluation (DOT&E) approved LBI's Test and Evaluation Master Plan in November 2011, and the program conducted operational testing in January 2012. CBP officials reported that LBI systems met all of their KPPs during the 2012 operational test with the exception of the checkpoint LPR system. However, DOT&E did not validate the test results because, as discussed above, the program did not request formal authorization from DHS leadership to expand LBI's deployment. From July to September 2015, CBP conducted an operational assessment of LBI's deployed outbound systems and declared them operationally effective and suitable. In November 2015, DOT&E validated these results.

Other Issues

In January 2016, CBP reported the program needed 7.4 more full time equivalents. In August 2016, CBP officials said the program recently hired three new staff, and the remaining staffing gap has had minimal effect on operations.

From January 2016 to January 2017, LBI's cost estimates remained stable. However, as noted above, the program's LCCE only reflects costs through fiscal year 2021 and does not account for additional quantities, operations and maintenance, or upgrade costs through the program's end date of fiscal year 2027. In August 2016, CBP officials told GAO they plan to update the program's LCCE in fiscal year 2017, at which point they will extend the estimate through the program's end date of fiscal year 2027.

From fiscal years 2017 to 2021, LBI's yearly cost estimates appear to exceed the program's funding plan by $52 million. LBI officials reported this funding gap is largely driven by the need to refresh deployed technology. The program plans to mitigate the funding gap by prioritizing upgrades against operational needs, conducting preventive maintenance, and remotely monitoring and correcting system issues, among other things. Program officials stated that upgrades to LBI's inbound systems are most likely to be affected by future funding constraints, as the program has already updated checkpoint and outbound systems.

Program Office Comments

The LBI program formally achieved Produce/Deploy/Support Phase in January 2016. The program satisfied the outstanding checkpoint KPP with the refresh of underperforming LPR technology.
In September 2016 the program awarded a new primary technology support contract with one base and four option years, without protest. LBI will coordinate future activities with the CBP Component Acquisition Executive to ensure compliance with acquisition requirements. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Medium Lift Helicopter (UH-60)
Customs and Border Protection (CBP)

The UH-60 is a medium lift helicopter that CBP uses for law enforcement and border security operations, air and mobility support and transport, search and rescue operations, and other missions. CBP's UH-60 fleet consists of 20 aircraft acquired from the U.S. Army in three different models. CBP previously acquired 4 aircraft in the modern UH-60M model and converted 6 of its 16 older UH-60A aircraft into more capable UH-60L models as a part of its Strategic Air and Marine Program (StAMP). In July 2016, Department of Homeland Security (DHS) leadership designated the UH-60 as a separate and distinct level 1 acquisition program. The UH-60 program is currently focused on converting the remaining 10 UH-60A aircraft. GAO previously reported on the UH-60 aircraft as a part of StAMP in March 2016 (GAO-16-338SP).

Staffing gap: 3 full time equivalents (FTE)

CBP determined that the converted UH-60L and UH-60M aircraft met all five of their key performance parameters (KPP) through operational testing conducted in fiscal years 2012 and 2014. These KPPs establish requirements for communications and specific mission capabilities, including interdiction, air mobility, special operations, and search and rescue. However, DHS's Director, Office of Test and Evaluation (DOT&E) did not validate these results because the UH-60 was not considered a major acquisition when the tests were conducted.

CBP has obtained all 20 UH-60 aircraft through agreements with the U.S. Army. CBP received the 16 UH-60A aircraft through a loan agreement in January 2004. In March 2008, CBP entered into an inter-agency agreement with the Army to convert the UH-60A into UH-60L models to extend the aircraft's service life by an estimated 20 years, as well as to purchase and modify the 4 new UH-60M aircraft. CBP completed acceptance of the UH-60M aircraft in 2012.

The UH-60's operations and maintenance (O&M) funding is not reflected in the Future Years Homeland Security Program report to Congress—the first report to include UH-60 as a distinct major acquisition program—because, according to CBP officials, O&M for the aircraft is funded separately. This issue limits insight into the program's funding needs and may obscure the size of future funding gaps.

In November 2014, CBP proposed changing its acquisition strategy for converting its UH-60A aircraft when it learned the Army planned to divest several HH-60L aircraft, which could more easily be reconfigured into UH-60L aircraft for CBP missions. Specifically, CBP proposed concluding its UH-60A conversions of the 6 aircraft it had initiated and trading the remaining 10 aircraft for the Army's newer HH-60L. Although the Army would still have to reconfigure the HH-60L aircraft to meet CBP's needs, CBP officials anticipated this effort could reduce the program's costs by an estimated $70 million, accelerate its schedule, and result in newer aircraft since the Army's HH-60L airframes had fewer operating hours than CBP's existing UH-60A aircraft.
At that time, DHS's Under Secretary for Management (USM) directed CBP to further study its proposed approach in consultation with DHS's Aviation Governance Board, and authorized CBP to initiate the transfer of a single HH-60L aircraft for developing a prototype to validate and verify its reconfiguration. In January 2016, DHS's USM approved CBP's revised acquisition strategy based on the Aviation Governance Board's determination that the proposed plan carries less risk and will result in overall cost savings. The USM also approved the UH-60 program's initial Acquisition Program Baseline (APB) at that time, which established schedule, cost, and performance parameters for the program's revised acquisition strategy.

Test Activities

CBP conducted operational testing of the UH-60L and UH-60M aircraft in fiscal year 2012. CBP testers assessed the UH-60L as operationally effective and suitable in July 2012, but assessed the UH-60M as operationally suitable and marginally effective in April 2012 because it could not meet endurance requirements, among other things. CBP completed modifications on the UH-60M to address identified issues and conducted additional operational testing in March 2014. In April 2014, CBP testers assessed the UH-60M retrofits as operationally effective and suitable. However, DOT&E did not validate CBP's test results for either aircraft variant because the UH-60 was not considered a major acquisition when the tests were conducted.

In January 2016, DHS's USM directed the program to conduct acceptance functional flight checks on at least one reconfigured HH-60L aircraft prior to receiving approval to proceed with the remaining transfer and conversions. The program plans to flight check the reconfigured HH-60L prototype in July 2017, but officials said they do not plan to conduct further operational testing because the HH-60L has minimal differences from the UH-60L aircraft previously tested. However, not demonstrating the reconfigured HH-60L in an operational environment may increase the risk that the aircraft will not perform as intended or be reliable once fielded.

In September 2016, officials told GAO that CBP designated a program manager to lead each former StAMP acquisition program—including the UH-60—but that it maintained a consolidated program office where the same staff from StAMP continue to support all remaining acquisitions. Officials explained that this matrixed approach works well because they are able to leverage each team member's particular subject matter expertise. Officials added that the program's prior staffing challenges decreased significantly once they completed UH-60's required acquisition documentation, and officials did not anticipate future staffing issues.

Program Office Comments

CBP is committed to accurate reporting of all of its programs and would like to clarify any misunderstanding in terms of program affordability. O&M of the UH-60 is funded separately, thus is not reflected in the acquisition funding. This assessment reflects only the acquisition funding plan. Additionally, CBP disagrees with GAO's use of a 2007 draft APB. The program should be assessed according to the APB signed in January 2016 that was provided. CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

GAO Response

The draft 2007 APB provides perspective on the history of the program; however, GAO did assess UH-60 against its January 2016 APB, as shown in the figures above.
Multi-Role Enforcement Aircraft (MEA)
Customs and Border Protection (CBP)

The MEA is a fixed-wing, multi-engine aircraft that replaces CBP's aging fleets of C-12, PA-42, and BE-20 aircraft. The MEA can be configured to perform multiple missions, including marine, air, and land interdiction; logistical support; and law enforcement technical collection (LETC). The current MEA configuration is equipped with marine search radar and an electro-optical/infrared sensor to support maritime and land surveillance and airborne tracking missions. CBP previously acquired MEA aircraft as a part of its Strategic Air and Marine Program (StAMP). In July 2016, Department of Homeland Security (DHS) leadership designated the MEA as a separate and distinct level 1 acquisition program. CBP plans to acquire 16 MEA aircraft in the current configuration and, as of January 2017, 12 had been delivered. GAO previously reported on the MEA aircraft as a part of StAMP in March 2016 (GAO-16-338SP).

Staffing gap: 3 full time equivalents (FTE)

The MEA program has met all five of its key performance parameters (KPP). In March 2016, DHS's Director, Office of Test and Evaluation (DOT&E) determined that the MEA program continued to meet four of its KPPs. Specifically, the MEA met two KPPs related to the aircraft's marine interdiction capabilities and two KPPs related to air mobility. DOT&E previously determined the MEA had met its fifth KPP for communications during initial operational test and evaluation (IOT&E). Going forward, the program plans to establish additional KPPs for air and land interdiction, LETC operations, and suitability for future MEA configurations.

CBP initially planned to procure 50 MEAs and awarded the first production contract in September 2009. However, the aircraft did not perform well during testing. In October 2014, DHS leadership said CBP could not procure or accept transfer of additional MEA without approval. CBP procured 12 aircraft under the initial contract, which expired in March 2015. In August 2015, DHS's Under Secretary for Management (USM) authorized CBP to procure 4 additional MEAs for a total of 16 and directed CBP to work with the Joint Requirements Council (JRC) to determine the appropriate quantity and configuration for future MEA procurements. In September 2016, CBP awarded an indefinite delivery, indefinite quantity contract for 1 base year with four 1-year options, and issued a delivery order for MEAs 13 and 14. Program officials plan to exercise the first option year in fiscal year 2017 to procure MEAs 15 and 16, and the remaining option years once CBP receives approval for additional quantities.

In April 2016, CBP developed a report that described capability gaps in multiple mission areas and proposed future MEA quantities and configurations. In September 2016, the JRC Chair endorsed CBP's findings, but stated additional analysis was necessary for the JRC to fully validate them and recommended CBP develop a number of acquisition documents, including an operational requirements document.

As with the UH-60, the MEA's operations and maintenance (O&M) funding is not reflected in the program's funding plan because, according to CBP officials, O&M for the aircraft is funded separately. This issue limits insight into the program's funding needs and may obscure the size of future funding gaps.

In March 2016, DHS's DOT&E determined that the MEA was effective and had resolved issues found during prior testing. DOT&E had assessed the program's IOT&E results in 2013, and concluded that additional testing was needed to assess the MEA's air interdiction capabilities.
DOT&E also said CBP needed to take 28 specific actions as soon as possible to improve MEA performance and that CBP should prioritize those that affect flight safety. CBP officials previously told GAO that they began addressing flight safety issues in January 2014. In July 2015, the program's operational test agent (OTA) conducted an operational assessment and found that CBP had addressed 24 of the 28 actions. However, the OTA also made 15 additional recommendations to improve the aircraft's operational effectiveness and suitability, and offered 14 additional findings to improve the effectiveness of the MEA's new mission system. DOT&E concurred with the OTA's findings, and subsequently determined that the remaining 4 actions had no operational impact or had been addressed by CBP. DOT&E recommended the program develop a plan to address the OTA's recommendations, and consider the OTA's additional findings to improve the mission system. In September 2016, CBP officials told GAO the program plans to conduct additional testing when MEA 14 is delivered by September 2017.

CBP is replacing the mission system processor on the MEA with a system used by the U.S. Navy and the U.S. Coast Guard. The new processor is intended to enhance operator interface and sensor management, as well as replace obsolete equipment. CBP tested a prototype of the processor in July 2015. According to program officials, MEAs 13-16 will be delivered with the new mission system, and CBP will begin retrofitting previously delivered MEAs in fiscal year 2017.

CBP officials said the program is on track to meet the goals in its initial Acquisition Program Baseline (APB), which DHS's USM approved in January 2016. This APB established schedule, cost, and performance parameters for the program's approved quantity of 16 MEAs. The program achieved initial operational capability in June 2011 upon delivery and acceptance of the first aircraft. The program plans to achieve full operational capability (FOC) by December 2018 upon delivery and acceptance of MEA 16. However, this is later than CBP previously planned. For example, a draft 2007 APB—which was never department approved—reported that the program planned to achieve FOC by September 2016. CBP plans to revise its APB if it receives approval to acquire future aircraft, which may delay FOC further and increase costs.

Other Issues

In September 2016, officials told GAO that CBP designated a program manager to lead each former StAMP acquisition program—including the MEA—but that it maintained a consolidated program office where the same staff from StAMP continue to support all remaining acquisitions. Officials explained that this matrixed approach works well because they are able to leverage each team member's particular subject matter expertise. Officials added that the program's prior staffing challenges decreased significantly once they completed MEA's required acquisition documentation, and officials did not anticipate future staffing issues.

Program Office Comments

CBP is committed to accurate reporting of all of its programs and would like to clarify any misunderstanding in terms of program affordability. O&M of the MEA is funded separately, thus is not reflected in the acquisition funding. This assessment reflects only the acquisition funding plan. Additionally, CBP disagrees with GAO's use of a 2007 draft APB. The program should be assessed according to the APB signed in January 2016 that was provided.
CBP officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

GAO Response

The draft 2007 APB provides perspective on the history of the program; however, GAO did assess MEA against its January 2016 APB, as shown in the figures above.

Non-Intrusive Inspection (NII) Systems Program
Customs and Border Protection (CBP)

The NII Systems Program supports CBP's interdiction of weapons of mass destruction, contraband such as narcotics, and illegal aliens being smuggled across U.S. borders, while facilitating the flow of legitimate commerce. CBP officers in the field utilize large- and small-scale NII systems at air, sea, and land ports of entry, as well as border checkpoints and international mail facilities. Large-scale NII systems use directed beams of X-rays or gamma rays that allow officers to examine the entire contents of containers and vehicles without breaching them. Small-scale NII systems are used to perform inspections of passenger baggage and cargo, view the inside of fuel tanks and small compartments, and identify false walls in containers. Small-scale NII systems include X-ray systems and fiber optic scopes, among other devices. GAO previously reported on CBP's NII Systems Program in March 2016 (GAO-16-338SP).

Staff needed: 53.4 full time equivalents (FTE)

In January 2016, the NII Systems Program reduced the number of its key performance parameters (KPP) from 24 to 18. According to officials, the program continues to meet all 18 KPPs, including one requiring CBP to examine 100 percent of cargo identified for inspection. CBP previously reported challenges obtaining examination data for this KPP in the land environment because of data accessibility and compatibility issues. However, in August 2016, CBP officials said they worked with stakeholders to develop a standard methodology to report examination data. That said, the Department of Homeland Security's (DHS) Director, Office of Test and Evaluation did not independently validate CBP's assertion that it has met this KPP.

CBP has been deploying NII systems since the 1980s, but DHS leadership did not approve the NII Systems Program's Acquisition Program Baseline until January 2016. Later that month, DHS leadership granted the program Acquisition Decision Event (ADE) 3 approval, and simultaneously required that CBP update NII's life-cycle cost estimate (LCCE) and identify a final year for the program. CBP officials subsequently identified fiscal year 2035 as the program's end date and submitted an updated LCCE in February 2016. However, this LCCE only updated costs estimated through fiscal year 2026—9 years short of the program's final year. Nevertheless, DHS approved the program to transition into sustainment without an understanding of the program's full costs, as required by DHS acquisition policy.

NII systems are commercial-off-the-shelf products, and for this reason, DHS leadership decided that the NII Systems Program does not need a Test and Evaluation Master Plan. However, the program regularly tests NII systems and plans to conduct operational assessments through full operational capability (FOC) in fiscal year 2024. In August 2016, CBP officials said that they assessed the performance capabilities of deployed units earlier in the year. Among other things, CBP compared two fielded NII systems to determine their operational effectiveness in detecting contraband in both empty and loaded containers.
The two systems were found to be equally effective at detecting contraband in empty containers, but one was generally determined to be a better option for loaded containers.

As a part of the program's ADE 3 approval, DHS leadership also required CBP to reassess future program requirements. In response, CBP developed a Capabilities Analysis Study Plan in March 2016 outlining the methodology for an 8-month analysis that will assess current capability gaps to ascertain future program requirements. CBP plans to complete the analysis by December 2016.

The NII Systems Program has also conducted testing for future capabilities. In 2015, the program assessed whether NII and radiation detection technology could be combined to examine rail cargo, and whether cameras are capable of detecting new welding—indicating the possible presence of contraband—in moving trains. In August 2016, CBP officials told GAO that preliminary assessments of these tests were positive and the results will be further evaluated on fielded systems to validate the return on investment. This will better inform future acquisitions or system upgrades where practical. For example, CBP is conducting operational testing on one of its rail systems with the combined radiation detection technology. If successful, future rail systems will incorporate this upgrade.

From January 2016 to January 2017, the program's acquisition cost estimate decreased from $1.9 billion to $1.7 billion, and the LCCE decreased from $4.5 billion to $4.2 billion. NII's estimates previously increased when CBP extended the program's lifespan from fiscal year 2022 to fiscal year 2026; increased the total procurement quantity for large- and small-scale systems from 9,427 to 11,448; and increased the number of planned replacement systems by more than 2,000 units. CBP officials reported that the updated LCCE is lower because of reduced NII system costs in newly awarded and anticipated contracts, reduced maintenance costs resulting from fixed-price maintenance contracts, and the replacement of some NII systems that were costly to maintain. However, as noted above, the updated LCCE does not account for operations and maintenance or replacement costs through the program's end date of fiscal year 2035. CBP officials said they plan to update the program's LCCE in 2017.

As of August 2016, the NII Systems Program continued to face a staffing gap of approximately 44 percent. The largest shortfalls were in the program management and life cycle logistics disciplines. According to CBP officials, the current staffing gap has reached a critical point because of the risk of acquisition and deployment delays. Officials said that the program is utilizing contractor support, but this approach comes at a higher cost than filling the vacancies with government employees. Program officials explained that CBP has not hired additional staff because of an ongoing realignment of CBP's organizational structure, and CBP is placing a higher priority on hiring officers, such as Border Patrol agents, versus program staff.

From fiscal years 2017 to 2021, the NII Systems Program's yearly cost estimates appear to exceed the program's funding plan by $253 million. However, the yearly cost estimates over this 5-year period also include $138 million for operating and maintaining radiation detection equipment acquired by the Domestic Nuclear Detection Office.
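Netting out those costs is straightforward arithmetic; treating the remainder as the gap attributable to NII systems alone is our illustration, not a figure CBP or GAO reported:

```python
# Rough illustration of the netting described above (FY2017-2021 totals).
# The $253M and $138M figures come from the assessment text; treating the
# difference as the NII-only gap is an illustrative assumption.
total_gap = 253   # $M: yearly cost estimates minus funding plan, summed over 5 years
dndo_om = 138     # $M: O&M for radiation detection equipment acquired by DNDO
nii_only_gap = total_gap - dndo_om
print(f"Gap excluding DNDO-related O&M: ${nii_only_gap} million")  # $115 million
```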
According to CBP officials, the program has instituted positive cost controls, including a service life extension for some NII systems, to address affordability challenges, but funding shortfalls continue to be the program's greatest risk. Officials stated that the program is sustainable with these cost controls, but most of the NII systems will reach the end of their expected service lives within the next 5 years. Without funding for replacement of these critical systems, CBP officials said they will not have the capability to scan cargo and will have to inspect cargo manually. The program may also experience further slips in reaching FOC. As we found in March 2016, funding shortfalls previously caused the program's FOC to slip 5 years—from fiscal year 2019 to fiscal year 2024.

Program Office Comments

CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Tactical Communications (TACCOM)
Customs and Border Protection (CBP)

The TACCOM program is intended to upgrade land mobile radio (LMR) infrastructure and equipment. It is replacing obsolete radio systems with modern digital systems in 20 different service areas, linking 19 of these service areas to one another through a nationwide network, and building new communications towers to expand coverage in 5 of the 20 service areas. The program is delivering LMR capability to approximately 95,000 users at CBP and other federal agencies. GAO previously reported on the TACCOM program in March 2016 and March 2015 (GAO-16-338SP, GAO-15-201).

Staff needed: 50.6 full time equivalents (FTE)

In July 2016, CBP officials told GAO that the TACCOM program continued to meet its two key performance parameters, which measure coverage area and the percentage of time the systems are available. In May 2014, the Department of Homeland Security's (DHS) Director, Office of Test and Evaluation (DOT&E) determined that the TACCOM program's systems met their performance requirements. Going forward, the TACCOM program plans to conduct annual assessments in select locations to monitor systems performance.

The TACCOM program was initially intended to upgrade LMR infrastructure and equipment in 20 different service areas, replacing obsolete radio systems with modern digital systems. The program was also intended to build new communications towers in all 20 of those service areas to expand LMR coverage. However, CBP subsequently decided to reduce the number of service areas where it would build new communications towers from 20 to 5 due to funding constraints. In the 15 remaining service areas, the program will still replace obsolete analog radio equipment with modern digital systems, but it will not expand coverage. The funding needed for tower construction in one service area was adequate to replace the radio systems in the 15 remaining service areas.

According to program officials, the program must reach an agreement with Department of Justice management in order to initiate the upgrades to the San Diego system. If DHS does not reach an agreement with the Department of Justice on the ownership and maintenance of the San Diego system, the program expects that the funding gap will increase.

In addition to upgrading LMR capabilities within the 20 service areas, the TACCOM program is also responsible for connecting 19 service areas to one another. CBP plans to do so by replacing the circuitry that connects these service areas to an existing nationwide network. CBP officials said this effort constitutes the majority of the program's remaining work, which they anticipate will be completed in December 2017.
The TACCOM program conducts operational assessments annually in select locations where upgrades were recently completed to determine whether the system is operating as intended. From March to June 2016, the program's operational test agent (OTA) conducted an operational assessment in the Houlton and Miami sectors and concluded the TACCOM systems were operationally effective and operationally suitable. However, the OTA noted some limitations, including interoperability with external users and collecting performance data for management review. The OTA recommended the program office conduct periodic user reviews by sector to identify and resolve coverage shortfalls and establish a system to collect and report on TACCOM performance monthly, among other things. In August 2016, program officials told GAO they monitor performance of TACCOM systems regularly and report outages to CBP's Chief Information Officer daily. According to officials, the program will also conduct another operational test after it has connected the 19 service areas to one another. Program officials said the risk associated with this effort is low, but they do not expect to determine whether the capability meets mission needs until June 2017.

CBP conducted operational testing in the Rio Grande Valley in December 2013 after the program had replaced obsolete radio systems with modern digital systems and built new communications towers. DOT&E concluded that the new TACCOM systems were operationally effective, and that the systems will likely prove suitable over time.

In August 2016, program officials said they had hired two business managers and were actively working to fill the program's remaining staffing gap.

From January 2016 to January 2017, the program's cost estimates remained unchanged. However, TACCOM's cost estimates in its January 2016 APB reflected changes from the program's previous internal estimates. The acquisition cost estimate decreased from $467 million to $397 million, but the life-cycle cost estimate increased from $959 million to approximately $1.1 billion when the program added government personnel costs.

The program is projected to have a funding shortfall of over $100 million from fiscal years 2017 through 2021. In August 2016, program officials explained that they have taken steps to mitigate the anticipated funding gap by cutting TACCOM's $4 million real properties budget in half; reducing manpower support contracts, travel, and gas; and performing minimum maintenance, among other things. However, they also explained that the anticipated funding shortfall may have a substantial long-term impact on operations and maintenance.

Program Office Comments

The deployed system is consistently exceeding the objective value for its operational availability key performance parameter. The program implements a formal process to review and update life-cycle cost estimates annually. Program officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

TECS Modernization
Customs and Border Protection (CBP)

TECS (not an acronym) is a law-enforcement information technology system that helps CBP officials determine the admissibility of persons wanting to enter the United States at border crossings and ports of entry as well as pre-screening sites located abroad. The legacy TECS system has been in place since the 1980s and is obsolete, expensive to maintain, and unable to support CBP's evolving mission needs.
In 2008, the Department of Homeland Security (DHS) initiated efforts to modernize TECS to provide its users with enhanced capabilities for accessing and managing data. Immigration and Customs Enforcement (ICE) is executing a separate TECS Modernization program, which GAO is also assessing in this report. GAO previously reported on CBP's TECS Modernization program in March 2016 (GAO-16-338SP).

Staff needed: 38.35 full time equivalents (FTE)

In August 2016, CBP officials told GAO the program had met its remaining key performance parameter (KPP), which establishes how quickly the system can create a new, searchable record. CBP officials previously told GAO in August 2015 that the program had met its other five KPPs, but DHS's Director, Office of Test and Evaluation (DOT&E) has not validated this assertion. According to officials, the program will demonstrate its KPPs through a series of four operational test events scheduled between September 2016 and January 2017. After the final event, DOT&E is to assess the test results to validate the program's performance.

To modernize TECS, CBP is replacing its legacy, mainframe-based platform with a combination of hardware, custom-developed and commercial software, and a web-based portal that will allow TECS to deliver capabilities to users within CBP and other DHS agencies. The TECS Modernization program consists of five projects, and officials stated CBP initially used an incremental acquisition approach for four of these projects. However, CBP is now using an agile software development methodology for all five of the projects. Under the agile software development methodology, programs deliver software in short, small increments—rather than long, sequential phases—which allows programs to measure interim progress.

In April 2016, the program updated its life-cycle cost estimate, which remained largely unchanged since November 2010. According to CBP officials, the schedule delays have had little to no effect on the program's cost estimate or end users because the legacy TECS system remains active.

According to program officials, the program leverages existing CBP contracts to support TECS Modernization efforts. In June 2008, CBP awarded a contract to Bart & Associates, Inc. to develop software and provide operations and maintenance support. CBP exercised options on this contract from 2009 to 2012. However, the program experienced delays during this period. Officials told GAO that, in 2013, CBP awarded a new development and support contract to Northrop Grumman. That February, Bart & Associates, Inc. and two other firms submitted bid protests to GAO. CBP took corrective action, and 20 months later awarded another contract to Northrop Grumman in September 2014. Bart & Associates, Inc. submitted a second protest, which GAO denied. In January 2015, Northrop Grumman resumed work under the awarded contract that is being used to support TECS Modernization application development activities.

In May 2016, DOT&E approved a fourth version of CBP TECS Modernization's Test and Evaluation Master Plan that provided additional information on how cybersecurity threats would be addressed during operational testing starting in fall 2016.
According to officials, CBP conducted three operational test events in September and November 2016—one event each at a land border crossing, a seaport, and an airport—prior to conducting a fourth operational test event in January 2017 that will verify final integration of the system's hardware and software at both the primary and secondary data centers. CBP officials anticipate receiving preliminary results as testing is conducted, but said a test report encompassing all four events will be submitted to DHS's DOT&E for assessment after the final test is complete. They explained that the January 2017 operational test event is the program's biggest challenge because it will test integration of the TECS Modernization's hardware and software with DHS's network.

In August 2016, CBP officials stated that staffing shortfalls related to the previous bid protests have been resolved. Officials do not plan to fill the remaining two full time equivalents because they are for requirements analysts, which the program no longer needs this late in the acquisition life cycle.

Program Office Comments
CBP officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Logistics Supply Chain Management System (LSCMS)
Federal Emergency Management Agency (FEMA)
LSCMS is a computer-based tracking system that FEMA officials use to track shipments during disaster-response efforts. It is largely based on commercial-off-the-shelf software. FEMA initially deployed LSCMS in 2005 and initiated efforts to enhance the system in 2009. According to FEMA officials, LSCMS can identify when a shipment leaves a warehouse and the location of a shipment after it reaches a FEMA staging area near a disaster location. However, LSCMS cannot track partner organizations' shipments—such as those by state and local governments—en route to a FEMA staging area, and it lacks automated interfaces with its partners' information systems. GAO previously reported on LSCMS in March 2016 (GAO-16-338SP).

Staffing gap: 3 full time equivalents (FTE)

FEMA plans to conduct additional operational testing on the system by March 2018, after the program completes anticipated upgrades, including the capability to interface automatically with its partners' information systems. According to FEMA officials, LSCMS previously demonstrated it could meet all seven of its key performance parameters (KPP) through either operational or developmental testing. However, the Department of Homeland Security's (DHS) Director, Office of Test and Evaluation (DOT&E) noted that the testing was not adequate and recommended FEMA retest LSCMS. FEMA subsequently met two of its KPPs during a performance test for a single software release.

In March 2016, DHS leadership authorized LSCMS to resume all development and acquisition efforts after a nearly 2-year program pause. FEMA deployed the enhanced LSCMS in 2013 without approval from the DHS Under Secretary for Management (USM) or key documentation, such as a department-approved Acquisition Program Baseline (APB) or a DOT&E letter of assessment, as required by DHS's acquisition policy. In April 2014, based on the preliminary results of a DHS Office of Inspector General (OIG) report, the Acting USM directed FEMA not to initiate the development of any new LSCMS capabilities until further notice.
The DHS OIG noted that neither DHS nor FEMA leadership ensured the program office identified all mission needs before selecting a solution, and the Acting USM instructed FEMA to conduct an analysis of alternatives to address LSCMS's remaining capability gaps. In June 2015, a contractor completed the analysis of alternatives and recommended that FEMA pursue the current version of LSCMS plus additional capabilities that would improve coordination with partner organizations. On the basis of this assessment, in August 2015, FEMA officials stated they were planning to pursue an upgrade known as Electronic Data Interchange (EDI), which would provide LSCMS with the ability to automatically interface with its partners' information systems. In December 2015, DHS's USM approved the program's APB, which established cost, schedule, and performance parameters for LSCMS's new capabilities. In March 2016, DHS's USM directed the program to select a new operational test agent (OTA) and develop a Test and Evaluation Master Plan (TEMP) to address issues identified through past operational testing.

Previously, FEMA deployed the enhanced LSCMS in January 2013 before operationally testing the system. When the operational test was conducted, DHS's DOT&E determined that the test was inadequate. The OTA at the time—the Department of Defense's Joint Interoperability Test Command—conducted the operational testing throughout calendar year 2013, leveraging performance data from the field, including data collected during FEMA's responses to real-world disasters. The OTA's conclusions were generally positive, but DOT&E determined in June 2014 that these conclusions were not supported by the test results, in part because the test's sample size was not adequate. DOT&E recommended that the program conduct additional operational testing.

In June 2016, DOT&E approved FEMA's selection of a new OTA for LSCMS. In November 2016, DOT&E approved the program's TEMP, which defines a new overall testing approach for evaluating unresolved issues from previous testing along with LSCMS's new capabilities. The new TEMP also includes plans for cybersecurity testing. FEMA plans to complete additional operational testing by March 2018, once the security upgrades and the addition of the EDI capability are complete.

In September 2016, FEMA officials told GAO that the LSCMS program had 22 of the 25 full time equivalents (FTE) it needed and was working to recruit additional staff. This represents a significant improvement from fiscal year 2014, when GAO found that the program had only 7 of the 22.5 FTEs it needed (GAO-15-171SP). Officials previously attributed the program's governance and testing challenges in part to staffing shortages. In September 2016, officials stated that the addition of new staff has helped the program to update acquisition documents and conduct business analyses that may help identify future cost savings.

FEMA officials told GAO in September 2016 that they anticipate meeting, or potentially coming in under, the program's APB cost threshold of $814 million, based on an April 2016 update to the program's approved life-cycle cost estimate. This estimate represents a nearly $500 million increase from the program's initial 2009 estimate, which was never approved by DHS. FEMA officials previously stated that the 2009 life-cycle cost estimate did not account for costs beyond fiscal year 2017, among other things.
From fiscal year 2017 through fiscal year 2021, LSCMS’s yearly cost estimates exceed the program’s funding plan by almost $29 million. However, the program’s updated life-cycle cost estimate includes approximately $35 million in costs for some services, such as personnel, that are funded by organizations external to LSCMS. When excluding the externally funded costs, the program is affordable during this 5-year period. Program Office Comments FEMA officials reviewed a draft of this assessment and provided no comments. Immigration and Customs Enforcement (ICE) ICE is responsible for investigating and enforcing border control, customs, and immigration laws. The legacy TECS (not an acronym) system has supported ICE’s mission since the 1980s, providing case management, intelligence reporting, and information sharing capabilities. However, the legacy system is obsolete, expensive to maintain, and unable to support ICE’s growing mission needs. In 2009, ICE began efforts to modernize aging TECS functionality and provide end users with additional functionality required for mission execution. The Department of Homeland Security’s (DHS) Customs and Border Protection is executing a separate TECS Modernization program, which GAO has also assessed in this report. GAO previously reported on ICE’s TECS Modernization program in March 2016 (GAO-16- 338SP). equivalents (FTE) The modernized ICE TECS system demonstrated two of its three key performance parameters (KPP) during operational testing conducted from August to October 2016. However, DHS’s Director, Office of Test and Evaluation (DOT&E) has not yet validated these results. The third KPP, related to concurrent users, was not tested and ICE officials said it will be difficult for the program to meet this KPP because the requirements are not realistic. The current KPP threshold assumes 6,000 officers will use the system simultaneously. In August 2016, officials said data showed there are between 500 and 600 concurrent users. ICE officials said they are working with end users to revise the KPP threshold prior to full operational capability (FOC). ICE initiated efforts to modernize the TECS system with a custom-developed solution in 2011. By June 2013, ICE officials determined that the existing TECS Modernization approach was unfeasible and subsequently restructured the program. The program now leverages commercial-off-the-shelf products, and is no longer pursing a custom-developed solution. According to the program manager, the program is acquiring capabilities through four concurrent “work streams,” each of which delivers discrete portions of the system’s total planned functionality. Different contractors are responsible for different work streams, and the program office is managing their efforts and integrating their software. Program officials said that this approach is intended to improve management visibility into each of the contractor’s efforts. However, officials added that integrating the program across the four work streams has presented challenges and that ICE has utilized multiple techniques to address these challenges including co-locating all work stream teams, conducting daily coordination meetings, and establishing a cross-program body of ICE and DHS technical experts to address integration issues. data from the legacy system but deferred final evaluation of the modernized system’s operational suitability, operational effectiveness, and KPPs until further testing could be conducted in a production environment. 
The program’s operational test agent conducted initial operational test and evaluation from August 2016 to October 2016. ICE initially planned to start this testing in May 2016, but it slipped once IOC was delayed. The operational test agent determined the modernized ICE TECS system was operationally effective and operationally suitable with limitations, and recommended the program conduct additional tests related to cybersecurity prior to FOC, among other things. The final operational test agent report was released in December 2016, and DOT&E plans to complete an assessment of the results by the end of February 2017. ICE officials told GAO they plan to revise the program’s Test and Evaluation Master Plan once FOC functionality is finalized and conduct follow-on operational test and evaluation prior to achieving FOC in September 2017. According to officials, final testing will include threat-based cybersecurity testing. Prior to IOC, program officials stated the program conducted a “soft launch” of the case management capabilities at the New York field office, which allowed users to update their credentials, conduct test searches, and insert test records into the modernized TECS system. Program officials stated the exercise helped users get comfortable with the new TECS system and allowed the program to initiate the transfer of user provisions from the legacy system to the modernized system. In August 2016, program officials told GAO that use of the modernized TECS system since IOC has been consistent across all field offices and they have received positive feedback from ICE field agents that the system is meeting their day-to-day needs. Program officials stated that ICE established a 24/7 Command Center for the first 4 weeks following IOC implementation to address end user problems and concerns. These officials added that they continue to track help desk tickets on a weekly basis and plan to release monthly updates to address identified issues. In achieving IOC, ICE has overcome past technical difficulties and schedule delays. In June 2014, DHS’s Under Secretary for Management re-baselined the program to reflect ICE’s new acquisition approach. The program’s IOC date slipped from December 2013 to March 2016, but the FOC date moved forward, from December 2017 to September 2017. In August 2016, ICE officials told GAO that the program remains on track to achieve its revised FOC date. However, at that time, the program had not yet identified what FOC would entail and officials stated that they were working with end users to determine final FOC functionality. ICE officials subsequently said they completed FOC planning activities in October 2016, including confirming FOC functionality such as enhanced system search capabilities. From January 2016 to January 2017, the program’s acquisition cost estimate increased by $4 million. ICE officials attributed this increase to including actuals for a data center contract that was awarded in 2016. However, the program’s cost estimates previously decreased significantly when the program revised its acquisition approach. In fiscal years 2017 and 2018, the program is projected to face a $5 million funding gap. However, ICE officials anticipate utilizing a multi-year appropriation to cover the projected gap. Program Office Comments ICE officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. 
Continuous Diagnostics & Mitigation (CDM)
National Protection and Programs Directorate (NPPD)
The CDM program is intended to strengthen the cybersecurity of the federal government's computer networks by providing sensors and dashboards to more than 65 participating civilian departments and agencies. The sensors continually monitor agency networks for vulnerabilities rooted in both hardware and software, and automatically notify agency personnel through dashboards when vulnerabilities are detected. CDM is also delivering a government-wide dashboard to the Department of Homeland Security (DHS), which will extract data from the agency-level dashboards and enhance situational awareness across the federal government. In June 2016, DHS leadership directed the program to re-baseline for the third time to address implementation challenges and to account for additional capabilities. GAO previously reported on the CDM program in March 2016 (GAO-16-338SP).

Staffing gap: 20 FTEs
Actual staff: 31 FTEs

CDM currently has 12 key performance parameters (KPP), which it has not yet demonstrated. However, in August 2016, NPPD officials told GAO that they were revising the program's operational requirements document as a part of the program's re-baseline effort. Officials said DHS leadership directed the program to consolidate its existing 12 KPPs, but the program may add KPPs to account for additional capabilities.

CDM plans to provide sensors and tools to the departments and agencies in four phases. Phase 1 sensors will report vulnerabilities in hardware and software; phase 2 tools will report on user access controls; phase 3 tools will report on department and agency efforts to prevent attacks and limit the impact of ongoing attacks; and phase 4 tools will focus on encryption and other data masking techniques to protect data on the network. Phase 4 was added at the request of the Office of Management and Budget (OMB) in December 2015 to address vulnerabilities on government networks that threats may seek to exploit.

The General Services Administration (GSA) is administering CDM's contracts using blanket purchase agreements (BPA) established under vendors' GSA Federal Supply Schedule contracts. Through these BPAs, the program issues task orders to acquire commercial-off-the-shelf software, hardware, and services. In June 2016, GSA awarded the final phase 1 task order—to deliver sensors for 44 agencies—as well as the first of two task orders for phase 2 tools—to provide tools for managing user network privileges at 69 agencies. GSA awarded the second phase 2 task order in November 2016, which will provide tools for verifying user network credentials at 26 agencies. GSA previously awarded five task orders to deliver phase 1 sensors to 25 agencies and a separate task order for the agency-level and government-wide dashboards.

DHS leadership approved three versions of CDM's Acquisition Program Baseline (APB) between 2013 and 2015. In each new version, the program's cost estimates and schedule changed. The program's third APB, which DHS leadership approved in August 2015, reflected schedule slips that officials largely attributed to contracting delays. The program's acquisition cost estimate increased to $2.7 billion for several reasons, including increased staff levels and costs for sensor replacement.
In contrast, the program’s LCCE decreased to $2.7 billion when NPPD determined it did not need to support all of the sensors CDM offers at all agencies and DHS leadership determined CDM would only fund the operation and maintenance of CDM sensors, tools, and dashboards for the first 2 years of deployment, rather than over their entire life cycles. CDM is only authorized to conduct testing on DHS networks, which means the other departments and agencies are responsible for testing the CDM sensors and dashboards on their own networks. In August and October 2016, the contractor providing phase 1 sensors for the DHS network conducted initial testing to demonstrate their functional requirements. CDM’s test team found that 65 percent of the requirements were not demonstrated or not tested during these events. The program plans to work with the contractor to identify and address reasons why the requirements were not met or tested. In August 2016, NPPD officials said they had observed operational testing conducted at three agencies and plan to revise CDM’s Test and Evaluation Master Plan as a part of the programs’ re-baseline effort. In June 2016, DHS leadership directed CDM to re-baseline for the third time to account for the addition of phase 4 and to address challenges encountered during phase 1. Specifically, contractors found large gaps for 12 of the agencies receiving phase 1 sensors in the actual number of network-connected devices needing coverage from what was originally reported. The gaps in coverage ranged from 19 percent to 384 percent. NPPD officials attributed the gaps to different interpretations by some agencies of what devices should have been counted, as well as a time lag between when the agencies reported their coverage needs and when GSA awarded the task orders. In August 2016, program officials said that DHS leadership instructed CDM to self-fund the increased cost caused by the gaps, which NPPD estimated to be at least $66 million to support all agencies except the U.S. Postal Service (USPS). USPS had the largest identified coverage gap, which NPPD estimated would cost an additional $93 million to cover. According to program officials, USPS will fund the cost of covering its own phase 1 sensors, but NPPD will provide two subject matter experts to support USPS’s efforts. In December 2016, NPPD officials told GAO the program’s authorized staffing levels had increased from 30 to 51 full time positions, but that CDM continued to face significant staffing shortages and needed a program manager. Officials said the staffing gap of 20 full time positions—meaning actual personnel rather than equivalents—forces the program to divert individuals from their normal responsibilities to critical areas, such as project management. NPPD is actively working to fill CDM’s vacancies, but officials said they struggle to hire new staff due to lengthy security clearance processes. As of January 2017, NPPD had not yet completed the CDM re-baseline effort, which officials said will include revisions to the program’s Acquisition Program Baseline (APB) and life-cycle cost estimate (LCCE). NPPD officials anticipate the program’s cost estimates will increase and acknowledged that the phase 1 gaps will likely delay the program’s ability to execute subsequent phases. To cover the phase 1 gaps, NPPD officials said they deferred $30 million of phase 2 funding by limiting the number of agencies covered by phase 2 tools and used $36 million originally planned for phase 3. 
In fiscal year 2017, OMB plans to allocate an additional $172 million to DHS to accelerate deployment of CDM phase 3 capabilities and to support creation of phase 4. Despite the challenges encountered with phase 1, CDM achieved initial operational capability by its revised deadline of December 2016 after the program delivered sensors and dashboards to five agencies.

Program Office Comments
The program continues to re-baseline and is targeting April 2017 for completion. CDM continues to manage its budget to ensure program costs match available funding. CDM is leveraging the collective buying power of federal agencies and strategic sourcing to achieve over $344 million in government cost savings on CDM products—a 61 percent savings compared to GSA's Schedule 70. As of December 2016, CDM has deployed dashboards to nine agencies and is planning to deploy the government-wide dashboard in June 2017. CDM has received many accolades from agencies and federal leaders. NPPD officials also provided technical comments on this assessment, which GAO incorporated as appropriate.

Homeland Advanced Recognition Technology (HART)
National Protection and Programs Directorate (NPPD)
HART is intended to replace and modernize the Department of Homeland Security's (DHS) legacy biometric identification information system, known as the Automated Biometric Identification System (IDENT). Since 1994, IDENT has enhanced national security and facilitated legitimate travel, trade, and immigration by receiving, maintaining, and sharing information on foreign nationals with DHS border management organizations, other federal agencies, law enforcement, and foreign partners. However, IDENT is at risk of failure because it cannot keep pace with a growing number of daily system transactions. In 2011, DHS initiated efforts to replace IDENT with HART in order to provide users with enhanced capabilities for accessing and managing biometric identification data.

Staff needed: 168 full time equivalents (FTE)

HART is still in a relatively early acquisition stage, and the program has not yet demonstrated whether it can meet its eight key performance parameters (KPP). The program plans to demonstrate its KPPs as capabilities are developed. The first two KPPs establish requirements for system availability and a fingerprint biometric identification service. The next set of four KPPs establishes requirements for multimodal biometric verification services and interoperability with a Department of Justice system. The program's remaining two KPPs establish requirements for web portal response time and reporting capabilities.

Acquisition Strategy
HART plans to develop and deploy capabilities through four increments: increments 1 and 2 are intended to replace and enhance existing IDENT system functionality, and increments 3 and 4 are intended to provide additional capabilities. Specifically, increment 1 will provide the core infrastructure, including system hardware and basic functionality to operate HART. Increment 2 will provide enhanced biometric capabilities, such as facial and iris identification, and the full test environment for measuring system performance. Increment 3 will introduce a web portal to improve system accessibility and provide a holistic, person-centric view of biometric identification data. Increment 4 will provide additional tools for improved data analysis and reporting capabilities.
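The four increments described above can be restated compactly as data. This sketch is only an illustrative summary of the assessment's text, not an artifact of the HART program:

    # Illustrative summary of HART's planned increments, restated from the text.
    HART_INCREMENTS = {
        1: "Core infrastructure: system hardware and basic functionality",
        2: "Enhanced biometrics (facial, iris) and the full test environment",
        3: "Web portal for accessibility and a person-centric data view",
        4: "Additional tools for improved data analysis and reporting",
    }

    for number, capability in sorted(HART_INCREMENTS.items()):
        print(f"Increment {number}: {capability}")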
DHS’s Director, Office of Test and Evaluation approved the HART program’s Test and Evaluation Master Plan in September 2016, after the program incorporated feedback from DHS’s Science and Technology Directorate (S&T) and HART’s operational test agent, the Department of Defense’s Joint Interoperability Test Command. For example, the program revised its developmental test and evaluation strategy, added risk assessment levels for planned tests, and aligned cybersecurity objectives with requirements. HART plans to conduct operational testing for increment 1 in June 2018 prior to achieving IOC. According to NPPD officials, the program is focused on awarding an initial contract for the development and delivery of increments 1 and 2, and plans to pursue separate contracts for the development and delivery of increments 3 and 4. Additionally, S&T’s Office of Systems Engineering completed a technical assessment on HART in February 2016, and concluded that the program had a moderate overall level of technical risk. In October 2016, DHS’s USM directed HART to work with S&T to monitor the risks identified in the technical assessment, and directed S&T to conduct further analysis following the program’s initial contract award for increments 1 and 2. In April 2016, DHS’s Under Secretary for Management (USM) approved HART’s Acquisition Program Baseline (APB)—which established the program’s cost, schedule, and performance parameters—and authorized the program to initiate development efforts for increments 1 and 2 in October 2016. HART plans to achieve initial operational capability (IOC) with the deployment of increment 1 in December 2018, at which point program officials anticipate beginning to transition users from the legacy IDENT system to HART. HART plans to achieve full operational capability with the deployment of increment 4 by September 2021. NPPD reported the program had a staffing gap of 12 full time equivalents, but in September 2016, program officials did not attribute any negative affects to workforce shortages. Program officials said that they plan to hire additional contractors to support the new systems integrator, and will transition existing staff to support HART efforts as the legacy IDENT system is decommissioned. The program is also undergoing efforts to determine future staffing needs. Program officials said they proactively engaged the Office of Personnel Management to conduct a workforce analysis. Additionally, DHS directed the program to conduct a staffing analysis with assistance from the department’s Chief Technology Officer to determine any gaps, particularly in the cyber security field. The results of this analysis are required to be completed by March 2017. NPPD officials told GAO that the program’s schedule and cost estimates may change once they award the contract for increments 1 and 2 and receive the contractor’s proposed technical solution. The program has experienced delays in awarding the contract. In September 2016, NPPD officials told GAO that the program received and incorporated industry feedback into the request for proposal (RFP) in July 2016. In October 2016, NPPD officials told GAO that the program was resolving a potential issue with the final RFP and had released a second draft RFP in order to maintain communication with industry. Program officials anticipate releasing the final RFP in January 2017. 
Subsequently, they plan to update HART's schedule and cost estimates once the contract for increments 1 and 2 is awarded, because the contractor's proposed solution will assist officials in determining how much of the legacy IDENT system can be reused for HART, a factor that may affect the program's cost estimate.

DHS proposed moving the IDENT and HART programs from NPPD to Customs and Border Protection in its fiscal year 2017 budget submission. In September 2016, program officials told GAO that the transition had not yet been approved and that HART would remain with NPPD through at least the end of fiscal year 2017.

From fiscal year 2017 through fiscal year 2021, HART is projected to face a $406 million funding gap. In April 2016, NPPD identified that DHS plans to program an additional $335 million to the program over this 5-year period. In September 2016, program officials stated that they have taken steps to mitigate remaining shortfalls. For example, the program extended the planned schedule for technical refreshes from 5 years to 7 years, carried over $39 million into fiscal year 2016, and identified approximately $27.3 million of no-year funding in fiscal year 2016 that could be used to cover the anticipated funding gap.

Program Office Comments
NPPD officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

National Cybersecurity Protection System (NCPS)
National Protection and Programs Directorate (NPPD)
NCPS is intended to defend the federal civilian government's information technology infrastructure from cyber threats. The Department of Homeland Security (DHS) established the program to acquire hardware, software, and services, and NCPS delivers capabilities through a series of interdependent upgrades designated "blocks." Blocks 1.0, 2.0, and 2.1 are fully deployed and collectively provide intrusion-detection and analytic capabilities across government agencies. NCPS is currently deploying EINSTEIN 3 Accelerated (EA), previously designated block 3.0, which is intended to provide an intrusion-prevention capability. Going forward, NCPS plans to deliver block 2.2 to improve information sharing across agencies. GAO previously reported on the NCPS program in March 2016 and January 2016 (GAO-16-338SP, GAO-16-294).

Staff needed: 176 full time equivalents (FTE)

The program conducted EA testing in June 2016 and plans to initiate block 2.2 testing in March 2017. The program originally planned to use government technology to deliver block 3.0 intrusion-prevention capabilities, but in May 2012, it significantly changed its acquisition strategy, decided to work directly with commercial internet service providers (ISP), and designated the revised effort EA. Intrusion-prevention capabilities are now primarily provided through sole source contracts with the nation's largest ISPs to maximize coverage. However, in May 2015, NCPS reconsidered how to provide EA's coverage, and the program developed a plan that instead allowed it to expand that coverage. Program officials said they awarded a contract to provide basic intrusion-prevention services at a greater number of federal agencies and enable the program to have the capacity to cover all federal email and internet traffic.
However, officials noted that providing intrusion-prevention services has some challenges, such as protecting classified information used to identify threats on unclassified networks and rolling out these services across the federal government.

The EA strategy change had a significant effect on the program's schedule. Among other things, the program delayed an acquisition decision to operate deployed capabilities until July 2015, when DHS leadership reviewed the results of EA testing, and some EA milestones now extend until December 2017. In June 2015, DHS's Director, Office of Test and Evaluation (DOT&E) evaluated the results of an EA operational assessment (OA). Program officials anticipate receiving final OA results at the end of January 2017 and have begun planning for initial operational test and evaluation, which is planned for late fiscal year 2017.

In December 2015, Congress required federal government agencies and departments to adopt intrusion-prevention services, such as NCPS's EA, which officials reported was available at approximately 93 percent of federal agencies and departments. Program officials cited legal and network challenges as barriers to integration because they must negotiate and customize EA services.

According to program officials, NCPS plans to conduct an OA on block 2.2 capabilities in March 2017, after it completes adoption of the Homeland Security Information Network (HSIN). The results of this OA will inform the program's block 2.2 acquisition decision event (ADE) 2C, scheduled for December 2017.

Program Office Comments
Since the last assessment, the NCPS program office has made progress toward achieving program objectives. Departments and agencies have continued to onboard EA service. NCPS continues to work with agencies to provide all available EINSTEIN protections. Also in 2016, the NCPS program office developed and implemented a plan to leverage an existing DHS investment to meet a portion of the NCPS information sharing requirements (block 2.2), resulting in a cost savings for the program. Program officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Next Generation Networks Priority Services (NGN-PS)
National Protection and Programs Directorate (NPPD)
NGN-PS is intended to address an emerging capability gap in the government's emergency telecommunications service, which prioritizes select officials' phone calls when networks are overwhelmed. NPPD executes the NGN-PS acquisition program through commercial telecommunications service providers, which address the government's requirements as they modernize their own networks. NPPD plans to execute NGN-PS in phases—voice, video, and data—and is currently focused on the voice phase. Once NGN-PS capabilities become operational, NPPD's Priority Telecommunications Services (PTS) program assumes responsibility for sustaining them. The cost and funding figures in this assessment account for both NGN-PS and PTS, in accordance with Department of Homeland Security (DHS) guidance. GAO reported on the NGN-PS acquisition program in March 2016 (GAO-16-338SP).

Actual staff: 12 full time equivalents (FTE)

In August 2016, NPPD officials told GAO that NGN-PS continued to meet all six of its key performance parameters (KPP) for the voice phase, but DHS's Director, Office of Test and Evaluation (DOT&E) has not yet validated the program's performance. NPPD officials noted that each emergency is unique and that performance can be affected by damage to telecommunications infrastructure. NPPD officials also stated that they are in the process of developing additional KPPs for the video and data capabilities of NGN-PS.
NGN-PS was established in response to an Executive Order requiring the federal government to have the ability to communicate at all times and during all circumstances to ensure national security and manage emergencies. The NGN-PS program works with telecommunication service providers as they enhance their carrier networks so they can provide select government officials survivable telecommunications capability nationwide through the PTS program. The NGN-PS voice phase is divided into three increments: increment 1 includes paying service providers to ensure their major core networks can continue to prioritize government phone calls as needed; increment 2 delivers wireless capabilities; and increment 3 is intended to address landline capabilities. NGN-PS has initiated the first two increments and awarded three base contracts in 2014, each of which includes 9 option years. In August 2016, NPPD officials said they had begun planning for the third increment.

In 2015, the full operational capability date for increment 1 slipped from June 2017 to March 2019, which NPPD officials attributed to funding shortfalls. In August 2016, NPPD officials said they do not anticipate further schedule slips for planned increment 1 and 2 activities. The program plans to use surplus funding expected in fiscal years 2019 through 2021 to implement new services, such as landline capabilities.

In July 2016, the White House issued a Presidential Policy Directive that superseded previous directives requiring continuous communication services for select government officials. NPPD officials said the new directive validates the program's requirements and that they do not expect it to affect the program's costs or schedule. NPPD officials noted that they plan to update the Acquisition Program Baseline once the impact of the new directive is understood, but could not provide a timeframe for when this will be complete.

The NGN-PS data and video capabilities were initially planned as separate phases, but in August 2016, NPPD officials told GAO that they plan to acquire them together. NPPD officials explained that it now makes more sense to consolidate the data and video capabilities as a result of technological advancements achieved since the program's acquisition plan was developed in 2013. The data and video phase is in the early planning stages, and NPPD officials said they plan to work with stakeholders to refine requirements based on the July 2016 directive.

DHS's DOT&E approved a revised Test and Evaluation Master Plan for the NGN-PS program in June 2016, which clarified the program's existing testing approach. Specifically, NGN-PS capabilities are evaluated through developmental testing, government acceptance testing, and operational assessments. The service providers play a central role in NGN-PS test activities because they conduct the developmental testing and operational assessments on their own networks. NPPD officials review the service providers' test plans, oversee tests to verify testing procedures are followed, and approve test results to determine when testing is complete. The government's operational test agent (OTA)—the Department of Defense's Joint Interoperability Test Command—does not plan to conduct a stand-alone operational test event for NGN-PS. Instead, the OTA leverages the service providers' test and actual operational data to assess program performance. NPPD officials told GAO that NGN-PS has performed well when its capabilities have been tested and deployed.
NPPD officials also said that they continuously review actual NGN-PS performance and that all service providers undergo annual network service verification testing under the PTS program.

In January 2016, NPPD reported that NGN-PS's required staffing level decreased by approximately 5 full time equivalents and that the program no longer had a staffing gap. In August 2016, NPPD officials said that these numbers only account for funded positions and that NGN-PS also relies on about 20 contracted staff to execute day-to-day activities. NPPD officials also stated that NGN-PS leverages support from PTS program staff, as needed.

From January 2016 to January 2017, the NGN-PS program's department-approved cost and schedule goals remained unchanged. However, NPPD officials stated that they are in the process of revising the program's life-cycle cost estimate (LCCE) to clarify NGN-PS costs, because past estimates had double counted some operations and sustainment costs that are funded by PTS. From September 2010 to September 2014, NGN-PS's LCCE increased to $1.1 billion when the program accounted for the voice phase's second increment. In August 2015, DHS's Chief Financial Officer approved a revised cost estimate that increased the LCCE to $1.2 billion. NPPD officials attributed the increase to the inclusion of sustainment costs for the PTS program, as requested by DHS headquarters. In August 2016, NPPD officials told GAO they plan to specifically identify operations and sustainment costs attributable only to NGN-PS acquisition efforts in the updated LCCE. In addition, program officials said the LCCE will account for changes related to the new Presidential Policy Directive.

Program Office Comments
The NGN-PS LCCE update is a refined analysis of development service acquisition costs that will include separate projections for the annual impact of validated NGN-PS capabilities that are transferred to the PTS operations program. The LCCE update will more accurately represent NGN-PS technical support for authorized users to have seamless priority services for critical communications during crises while commercial service providers evolve their infrastructure—while meeting or exceeding performance metrics and remaining under budget. NPPD officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

National Bio and Agro-Defense Facility (NBAF)
Science and Technology Directorate (S&T)
The NBAF program is constructing a state-of-the-art laboratory in Manhattan, Kansas, to enable the United States to conduct comprehensive research, develop vaccines, and provide enhanced diagnostic capabilities to protect against foreign animal, emerging, and zoonotic diseases that threaten the nation's food supply, agricultural economy, and public health. The facility will provide 574,000 square feet of laboratory space to support the research missions of the Department of Homeland Security (DHS) and the Department of Agriculture (USDA). NBAF is intended to replace and expand upon the capabilities provided at an existing facility, the Plum Island Animal Disease Center, which is nearing the end of its useful life. GAO previously reported on NBAF in March 2016 (GAO-16-338SP).

Staff needed: 14.3 full time equivalents (FTE)

The NBAF program must commission several laboratory spaces that meet different biosafety standards in order to meet its sole key performance parameter (KPP).
Program officials reported that NBAF will not be able to demonstrate that it has met its KPP until the facility is fully constructed and commissioned in May 2021.

S&T leadership, the NBAF program office, a facility design team, and a construction manager are coordinating throughout all phases of the NBAF program in an effort to ensure the facility will be constructed as designed and within estimated cost parameters. According to program officials, they selected a construction manager early in the design process in order to increase coordination between the design and construction phases of the program, and to help reconcile cost and schedule as the program progressed.

In July 2014, DHS's Acting Under Secretary for Management (USM) approved the NBAF Acquisition Program Baseline (APB), which established the program's cost, schedule, and performance parameters. According to NBAF officials, the program remains on track to meet these parameters. The program awarded the contract for construction of the main laboratory facility in May 2015 and is scheduled to commission NBAF in May 2021. NBAF is scheduled to become fully operational in December 2022, after it receives the certifications needed to operate as a biocontainment facility.

In May 2013, DHS's Director, Office of Test and Evaluation determined he was not responsible for overseeing NBAF because it was a facility as opposed to a system. According to program officials, the NBAF program has implemented a commissioning process for the facility to determine whether it can meet its KPP and other requirements once construction is complete. Program officials stated that a third-party commissioning agent has been retained as a subcontractor to the prime construction management contractor, and a commissioning plan has been in place since 2012. The commissioning agent will monitor and test the facility's equipment and building systems while construction is ongoing to ensure they are properly installed and functioning according to appropriate biosafety specifications. The commissioning agent will report its findings directly to program officials and coordinate with other entities involved in the commissioning process, including the NBAF program office, the construction management contractor, and end users, among others. Full commissioning of the facility is scheduled for May 2021, 6 months after the planned completion of construction.

However, NBAF previously experienced significant cost growth and schedule slips. Between August 2009, when the Acting Under Secretary for Science and Technology approved the initial version of NBAF's APB, and July 2014, when the Acting USM approved the current version of NBAF's APB, the program's acquisition cost estimate increased from $725 million to $1.3 billion, and the facility's anticipated commissioning date slipped by almost 6 years. In 2010, DHS and the National Academy of Sciences both recommended the NBAF program take a number of actions to mitigate its operational risks as a biocontainment facility. Subsequently, at the direction of Congress and DHS leadership, the program office revised NBAF's design in response to these recommendations, which increased costs and caused delays.

S&T reported that the NBAF program office does not have a staffing gap, and program officials told GAO the program had recently completed the hiring of additional staff for the program's construction oversight team.
According to NBAF officials, the program office's staffing requirements will change in the coming years as the NBAF program progresses through construction and moves toward the operational stand-up of the facility. For example, the program office reported it will need to hire an Operations Director, Research and Development Director, Business Manager, and Facility Engineer, among others, by fiscal year 2018 for NBAF operations management.

Program officials reported that funding constraints between 2009 and 2014 exacerbated the cost growth and schedule slips, and it appears the program continues to face a funding gap of more than $38 million from fiscal year 2017 to fiscal year 2021. According to program officials, the anticipated funding gap is driven by the cost of operational stand-up activities for NBAF, which are separate from facility construction. Operational stand-up activities are scheduled to ramp up in fiscal year 2018 and include hiring additional operations management personnel; preparing standard operating procedures; training laboratory support personnel and researchers; and demonstrating proficiency in biocontainment operations, among other things. Program officials told GAO they are working with S&T to mitigate the funding gap, but there is a risk these affordability challenges could cause delays in the operational stand-up of NBAF and, in turn, the transition from the Plum Island Animal Disease Center.

NBAF officials told GAO the program has received full funding for facility construction efforts through federal appropriations and gift funds from the state of Kansas. DHS entered into a cost-sharing agreement with Kansas's state government to reduce the federal government's share of NBAF costs. Kansas's state government has contributed $307 million to NBAF, which amounts to nearly 25 percent of the program's estimated acquisition cost.

Program Office Comments
As noted in the assessment, all out-year funding requests are for operational planning and operationalization activities. Current funding gaps will be eliminated if the program is funded to S&T requested amounts reflected in the next Future Years Homeland Security Program update.

Electronic Baggage Screening Program (EBSP)
Transportation Security Administration (TSA)
TSA established EBSP in response to the terrorist attacks of September 11, 2001. EBSP identifies, tests, procures, deploys, installs, and sustains transportation security equipment across approximately 440 U.S. airports to ensure 100 percent of checked baggage is screened for explosives. The program's key objectives include increasing threat detection capability, improving the efficiency of checked baggage screening, replacing aging equipment, and obtaining new screening technologies. The program awarded contracts for 20 types of baggage screening systems from 2002 to 2015. GAO previously reported on EBSP in March 2016 and December 2015 (GAO-16-338SP, GAO-16-117).

Staff needed: 104 full time equivalents (FTE)

TSA officials stated that EBSP has demonstrated that all deployed systems can meet the minimum threshold for all of the program's key performance parameters, including automated threat detection, throughput, and operational availability (a simplified illustration of the operational availability calculation follows below). TSA officials told GAO that two scanners underwent testing in fiscal year 2016 and that two additional scanners are scheduled to undergo testing in fiscal year 2017. EBSP acquires explosives trace detectors and medium-speed and reduced-size explosives detection systems through various vendors.
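As a simplified, hypothetical illustration of how an operational availability parameter of this kind is commonly computed (the uptime figures and the 0.95 threshold are illustrative assumptions, not EBSP's actual requirement):

    # Hypothetical sketch: operational availability (Ao) is commonly
    # computed as uptime divided by total time (uptime + downtime).
    # The hours and the 0.95 threshold are illustrative, not EBSP values.

    def operational_availability(uptime_hours: float, downtime_hours: float) -> float:
        return uptime_hours / (uptime_hours + downtime_hours)

    # A notional month for one screening system: 712 hours up, 8 hours down.
    ao = operational_availability(712, 8)
    print(f"Ao = {ao:.3f}")                # Ao = 0.989
    print("meets threshold:", ao >= 0.95)  # meets threshold: True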
In 2002 and 2003, TSA deployed baggage screening equipment to all federally regulated airports. Since then, EBSP has worked to deliver new systems with enhanced screening capabilities and, according to program officials, development efforts are primarily focused on software upgrades. As of December 2016, EBSP had deployed 1,880 explosives detection systems and 2,638 explosives trace detectors to screen checked baggage nationwide.

EBSP initially acquired explosives detection systems during specific procurement windows. In 2014, EBSP revised its acquisition strategy to competitively procure systems on an ongoing basis using qualified product lists. TSA officials told GAO this strategy provides the program more flexibility in acquiring scanning devices than its previous approach because vendors are able to submit devices for consideration at any time. Additionally, officials said this approach allows the program to keep better pace with technology advancements. EBSP's initial competitive procurement of explosives detection systems will end in fiscal year 2018, at which point TSA plans to initiate a second competitive procurement.

DHS's Director, Office of Test and Evaluation (DOT&E) has assessed nine of EBSP's systems and determined that six of them are effective and suitable. As for the remaining three, TSA is implementing a third party testing strategy to address system failures during testing. TSA's interim guidance, effective July 2014, states that TSA will not re-admit systems into testing until vendors provide sufficient data from a third party tester showing that the system meets the failed requirements. According to program officials, an explosives detection system was the first to undergo such testing after failing operational testing. After third party testing of this system, DOT&E issued a memorandum stating the system should be considered operationally suitable, and DHS approved full rate production in May 2015.

In December 2015, GAO found that TSA had yet to finalize key aspects of its third party testing strategy and recommended it do so before implementing further third party testing requirements for vendors to enter testing. In November 2016, TSA officials said they now plan to implement the third party testing program by the end of calendar year 2017—a full year later than initially planned. These officials attributed the delay to the need to reprioritize third party testing needs and challenges in coordinating proposed strategy changes, among other things.

DOT&E approved EBSP's Test and Evaluation Master Plan (TEMP) in 2010. TSA officials previously told GAO that they were updating the TEMP to reflect EBSP's acquisition strategy change, but subsequently decided to wait until the start of EBSP's second competitive procurement of explosives detection systems before formally revising the TEMP, based on discussion with DOT&E.

In June 2016, DHS reported that the program needed 20.5 full time equivalents (FTE) and did not have a staffing gap. However, in December 2016, TSA officials told GAO that this reflected only a subset of EBSP staff. These officials explained that EBSP is supported by personnel from five different TSA divisions and had a total staff need of 104 FTEs.

From January 2016 to January 2017, the date the program planned to achieve initial operational capability for systems that detect additional materials and provide enhanced homemade explosives detection capabilities slipped.
TSA officials previously told GAO that they planned to achieve this milestone in September 2016, but according to the program's May 2016 Acquisition Program Baseline (APB), TSA has until September 2018 to achieve this milestone. Previously, EBSP planned to award contracts for these systems in September 2015 and September 2018, respectively.

Program Office Comments
TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Passenger Screening Program (PSP)
Transportation Security Administration (TSA)
The Department of Homeland Security (DHS) established PSP in response to the terrorist attacks of September 11, 2001. PSP identifies, tests, procures, deploys, and sustains transportation security equipment across approximately 440 U.S. airports to help TSA officers identify threats concealed on people and in their carry-on items. The program's key objectives include increasing threat detection capabilities, improving the efficiency of passenger screening, and balancing passenger privacy and security. The program has pursued 11 variants of passenger screening systems since 2002, including 5 that TSA is currently acquiring. GAO previously reported on PSP in March 2016 and December 2015 (GAO-16-338SP, GAO-16-117).

Staffing gap: 15 full time equivalents (FTE)

PSP has faced challenges acquiring and deploying new technologies, including the program's newest technology: the Credential Authentication Technology (CAT). However, TSA officials stated that PSP has demonstrated that all deployed systems can meet their key performance parameters. The program is focused on addressing emerging threats with next generation technologies, as well as ensuring that deployed and new technologies meet cybersecurity requirements.

TSA has acquired and deployed five variants of commercial-off-the-shelf passenger screening systems from multiple contractors. One system—CAT—remains in development. Program acquisition efforts are largely focused on upgrading existing detection technology capabilities. In July 2016, TSA identified an urgent operational need for automated screening lanes to address increasing passenger wait times.

Earlier, funding constraints had significantly decreased PSP's acquisition costs to $3.2 billion and its life-cycle cost estimate to $4.8 billion. However, by January 2016, emerging threats drove TSA to increase capability requirements, which in turn increased PSP's acquisition and life-cycle cost estimates by about $154 million and $264 million, respectively.

The program employs two acquisition strategies to acquire PSP systems. It has designated one the Qualified Product List (QPL) approach and the other the Low Rate Initial Production (LRIP) approach. PSP uses the QPL approach for established and tested technologies, when capability requirements are rigid and contractors' systems are mature. For this approach, any contractors' systems that demonstrate they meet the capability requirements are added to the QPL. TSA has used this approach to acquire the second generation Advanced Technology X-ray (AT-2) systems, Bottled Liquid Scanners, and Explosive Trace Detectors. In May 2016, TSA published its intent to establish a new QPL for the next generation of Explosive Trace Detectors. Alternatively, PSP uses the LRIP approach when capability requirements are flexible and contractors' systems are evolving. With the LRIP approach, PSP uses a series of development contracts to enhance systems' capabilities over time.
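A minimal sketch of the selection logic described above, assuming two boolean inputs; the function and parameter names are illustrative and are not part of TSA's documentation:

    # Illustrative restatement of PSP's two acquisition approaches.
    # QPL is used when requirements are rigid and systems are mature;
    # LRIP is used when requirements are flexible and systems are evolving.
    # Names and inputs are hypothetical.

    def select_acquisition_approach(requirements_rigid: bool,
                                    systems_mature: bool) -> str:
        if requirements_rigid and systems_mature:
            return "QPL"   # any qualifying system is added to the list
        return "LRIP"      # iterative development contracts over time

    print(select_acquisition_approach(True, True))    # QPL  (e.g., AT-2)
    print(select_acquisition_approach(False, False))  # LRIP (e.g., CAT)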
PSP is currently using the LRIP approach to acquire CAT, which TSA will use to verify the authenticity of passenger identification and confirm a passenger's risk status. CAT is intended to help TSA expand risk-based screening. PSP is also using the LRIP strategy to acquire second generation Advanced Imaging Technology (AIT-2).

DHS's Director, Office of Test and Evaluation (DOT&E) approved PSP's Test and Evaluation Master Plan in 2010, and each PSP system has its own approved addendum. DOT&E has assessed seven PSP systems and determined that three are effective and suitable. However, according to TSA officials, many vendors' systems cannot successfully pass initial qualification testing because their technologies are not mature, and some systems do not even get to the point in the testing process where DOT&E would assess them. To address this issue, TSA is implementing a third party testing strategy. In December 2015, GAO found that TSA had yet to finalize key aspects of its third party testing strategy and recommended it do so before implementing further third party testing requirements for vendors. Subsequently, TSA gathered and considered industry feedback on potential third party test strategy changes and identified potential third party test vendors. In November 2016, TSA officials said they now plan to implement the third party testing program by the end of calendar year 2017—a full year later than initially planned. These officials attributed the delay to the need to reprioritize third party testing needs and challenges in coordinating proposed strategy changes, among other things.

DHS reported that PSP faced a staffing gap of 15 full time equivalents (FTE)—a shortfall of nearly 30 percent. According to TSA officials, the current staffing level hinders the program's response to emerging threats. This could affect the program's ability to meet the urgent operational need for automated screening lanes that TSA identified in July 2016. Further, the program projects the need for 38 percent more FTEs over the current approved level, as TSA plans to initiate new checkpoint-related programs in 2018.

The program's fifth Acquisition Program Baseline (APB)—which the DHS Under Secretary for Management (USM) approved in February 2015—reflected schedule slips. The full operational capability (FOC) dates for the AT-2 and AIT-2 both slipped 18 months due to testing issues. The FOC date for CAT also slipped to June 2018—4 years later than initially planned—after operational testing revealed performance issues. In January 2016, the PSP program declared an APB schedule breach of a key CAT milestone—Acquisition Decision Event (ADE) 3, which was scheduled to be complete by June 2016—because of delays in incorporating new cybersecurity requirements before completing operational testing. Program documentation indicates CAT's ADE 3 could be delayed by nearly 2 years, which would directly affect follow-on events, including FOC.

Program Office Comments
TSA officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

Technology Infrastructure Modernization (TIM)
Transportation Security Administration (TSA)
TSA conducts various threat assessment, screening, and credentialing activities for millions of transportation workers and travelers. However, these assessments are hindered by stove-piped systems and duplicative processes, which cannot accommodate growing enrollment demand.
In 2008, TSA initiated the TIM program to address these shortfalls by developing and operating a centralized system to manage credential applications and the review process for three population segments: maritime, surface, and aviation. The program delivered the maritime segment in May 2014, but subsequently struggled to deliver additional capabilities. GAO previously reported on the TIM program in March 2016 (GAO-16-338SP) and has an ongoing review of the program’s current efforts. Staff needed: 40 full time equivalents (FTE) In September 2016, Department of Homeland Security (DHS) leadership approved a fourth key performance parameter (KPP) for the program for enforcing system user access controls. The program previously demonstrated TIM could meet two of its KPPs—vetting response time and operational availability—during initial operational test and evaluation (IOT&E) of the maritime segment, but DHS’s Director, Office of Test and Evaluation (DOT&E) concluded the system was extremely unreliable due to frequent critical failures. DOT&E cannot assess TIM’s other KPP—information reuse—until additional segments are deployed. In April 2016, DHS leadership approved a new technical approach for the TIM program, which TSA developed in collaboration with DHS’s Chief Information Officer (CIO) and subject matter experts. In November 2015, DHS’s Under Secretary for Management (USM) directed the CIO to work with TSA to develop a new approach, after the CIO reported he could not support TSA’s initial strategy for addressing the TIM program’s execution challenges. Under the new approach, TSA plans to replace the TIM system’s existing commercial-off-the-shelf applications with open source applications and move to a new virtual environment. The program also adopted an agile development methodology that relies on small teams to rapidly develop, test, and deploy capabilities using an iterative, rather than a sequential, approach. TSA officials anticipate that the agile approach will allow the program to accelerate development, better respond to customer needs, and achieve cost savings by eliminating expensive proprietary licensing costs, among other things. The TIM program began piloting its agile approach in May 2016 when developing fixes to address issues identified during the maritime segment’s IOT&E. TSA awarded two task orders totaling $17.6 million to the program’s existing contractor in September 2016 for agile design and development services, and plans to competitively award a new contract in 2017. TSA officials expect to have multiple agile development teams in place by early fiscal year 2017. The new approach follows the program’s 6-year schedule slip. The TIM program’s yearly cost estimates from fiscal year 2017 through 2021 exceed its funding plan by almost $122 million. However, the program expects to carry over almost $17 million into fiscal year 2017 and receive nearly $106 million in fees from vetting programs during this 5-year period, which together would nearly offset the projected gap. In September 2016, TSA officials identified several program and technical risks associated with TIM’s new agile approach that could affect the program’s schedule, cost, and performance going forward. These risks include an increase in new requirements or enrollments in TSA Pre-Check, implementation of automated testing into its agile approach, and the availability of knowledgeable contractor development staff. TSA officials are working to mitigate these risks.
In September 2016, TSA officials told GAO they worked with TIM customers to prioritize and address performance issues identified during IOT&E of the maritime segment, which was conducted from May to June 2015. DHS’s DOT&E assessed the program’s IOT&E results in September 2015 and concluded the system was not operationally effective or suitable, and was not cyber-secure. According to TSA officials, the program’s operational test agent completed follow-on operational test and evaluation on the maritime segment in November 2016, but the test results will not be available until March 2017. In October 2016, DHS’s USM removed the TIM program from breach status, which authorized TSA to resume new development after a nearly 22-month program pause. TSA notified DHS’s Acting USM in September 2014 that the TIM program had breached its baseline due to significant cost, schedule, and performance issues, and DHS leadership directed the program to halt new development in January 2015 until TSA identified a strategy for addressing these issues. TSA officials identified several causes for the breach, including technical challenges and insufficient contractor performance. In addition, the TIM program reported that TSA added significant new requirements to TIM after DHS leadership had approved the initial acquisition strategy. In September 2016, DOT&E approved TSA’s proposed test and evaluation strategy for the TIM program’s new approach. However, DOT&E noted that DHS guidance for Test and Evaluation Master Plans (TEMP) did not adequately address programs using agile development. He reported his office was leading an effort to develop such guidance and would work with TSA officials to assist with revising the TIM program’s TEMP by January 2017. In June 2016, DHS reported that the TIM program’s staffing need increased from 24 to 43 full time equivalents (FTE). TSA officials explained that the additional FTEs were technical staff funded by the TIM program, but matrixed from another TSA office. In December 2016, TSA officials said the program had only been authorized for 40 FTEs, 35.2 of which were filled. Program Office Comments TSA continues to implement an agile strategy for the completion of TIM system development. Early agile releases of the TIM system have shown the ability to provide functionality that meets the immediate needs of the mission operators in an accelerated timeframe compared to traditional development approaches. TIM has also partnered with DHS to form an Agile Integrated Product Team. The role of this group is to take best practices of agile development and policy from across DHS and tailor them for use with the TIM program. TSA officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. United States Coast Guard (USCG) Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) systems provide situational awareness, data gathering and processing, and information exchange tools that are installed in a variety of USCG ships and aircraft. According to the current C4ISR program’s baseline, the program encompasses the acquisition of C4ISR systems tailored for the National Security Cutter (NSC), Fast Response Cutter, Offshore Patrol Cutter, HC-130J and HC-144 aircraft, and legacy vessels. However, USCG officials told GAO the program is now primarily working on the C4ISR system on the NSC. GAO previously reported on the USCG’s C4ISR program in March 2016 and June 2014 (GAO-16-338SP, GAO-14-450).
Staffing gap: 5 full time equivalents (FTE) The USCG is no longer planning to operationally test its C4ISR systems against its key performance parameters (KPP). Instead, the C4ISR systems will be tested in conjunction with the USCG’s planes and vessels to save money and avoid duplication. However, the effectiveness and suitability of the C4ISR systems were not specifically evaluated during the HC-144, Fast Response Cutter, and NSC tests. Since this C4ISR system will now only be used on the NSC, testing is focused on this asset. The USCG plans to demonstrate the ability of the C4ISR system to meet the NSC’s KPPs during follow-on operational testing, which is scheduled to be completed in November 2017. The USCG has significantly decreased the C4ISR program’s scope since the Department of Homeland Security’s (DHS) Under Secretary for Management (USM) approved the C4ISR program’s first Acquisition Program Baseline (APB) in February 2011. This APB established the C4ISR program in broad terms, namely that the program would improve the detection and engagement of potential targets in the maritime domain through better coordination and data sharing. However, the initial version of the system relied on contractor-proprietary software, which was in danger of becoming obsolete and too costly to maintain. In November 2013, the USM approved a revised C4ISR APB after lower than expected funding levels caused a schedule breach. The new APB reflected a less comprehensive approach to C4ISR, but established that the C4ISR program would still deliver certain capabilities to specific cutters and aircraft. The program is projected to face a funding gap through fiscal year 2021. However, the gap may not be as great as it appears. In April 2015, GAO found that the DHS funding plan presented to Congress did not identify the operations and maintenance funding the USCG plans to allocate for each of its major acquisition programs—including the C4ISR program—and recommended DHS account for this funding in its future report (GAO-15-171SP). DHS concurred with the recommendation, but has yet to take action. The USCG initially planned to test the C4ISR system against its KPPs separately from its planes and vessels, including the NSC, but officials subsequently decided to test the C4ISR system in conjunction with the planes and vessels to lower costs and avoid duplication. However, the C4ISR system’s KPPs were not specifically evaluated during the NSC’s initial operational test and evaluation in April 2014, in part because the necessary testing activities were not fully integrated into the NSC’s test plan. The USCG also deferred testing of a significant portion of C4ISR functionality on the NSC, including cybersecurity capabilities and real-time tactical communications with the Navy, to later dates. In June 2014, GAO recommended the USCG fully integrate C4ISR assessments into other assets’ test plans or test the C4ISR program independently. The USCG concurred with GAO’s recommendation and stated that it planned to test the C4ISR system’s KPPs during follow-on testing for the NSC. According to USCG officials and the current follow-on testing plan, the USCG will test segment 2 spiral 2 (S2S2), the improved C4ISR system described below, to evaluate the extent to which it meets the NSC’s C4ISR-related KPPs, which the USCG will trace to the C4ISR KPPs. However, the NSC’s KPPs only overlap with one of the C4ISR system’s six KPPs, so this testing will not demonstrate how the C4ISR system performs against five of its KPPs.
The USCG began the NSC’s follow-on operational test and evaluation in fiscal year 2015, but testing is not planned to be complete until the end of calendar year 2017. The S2S2 system is intended to replace the NSC’s initial C4ISR system to address proprietary and obsolescence issues and, according to USCG officials, to provide improved capabilities. In September 2016, USCG officials told GAO that the S2S2 system performed well during qualification testing conducted in August 2015 and that the USCG will install S2S2 on future NSCs. As of January 2017, the USCG had installed S2S2 on three of the five already-delivered NSCs, and officials anticipated retrofitting the remaining two NSCs by the end of calendar year 2017. If the retrofits are completed as planned, the USCG will have transitioned from contractor-proprietary software almost 2 years earlier than the deadline established in the program’s revised APB, but more than 5 years later than initially planned. USCG officials previously attributed delays in completing the transition to funding shortfalls and difficulties scheduling S2S2 installations for when the NSCs are in port. In January 2016, the USCG reported that C4ISR had a staffing gap of 5 full time equivalents, but in September 2016, program officials did not attribute any negative effects to workforce shortages. Program Office Comments The acquisition program’s primary focus is on delivery of the S2S2 baseline for the NSC class. Also, the acquisition program continues to provide acquisition, technical, and cybersecurity support to the Offshore Patrol Cutter, Fast Response Cutter, and other new asset acquisitions to tailor C4ISR systems acquisition strategies and requirements to meet respective platform milestones. The C4ISR acquisition program plans to operationally test S2S2 in the next NSC follow-on operational test and evaluation event. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Fast Response Cutter (FRC) United States Coast Guard (USCG) The USCG uses the FRC to conduct search and rescue, migrant and drug interdiction, and other law enforcement missions. The FRC replaces the USCG’s Island Class patrol boat and carries one cutter boat onboard. It provides greater fuel capacity, improved communications and surveillance interoperability with other Department of Homeland Security (DHS) and Department of Defense assets, and the ability to conduct full operations in moderate sea conditions. The USCG plans to acquire 58 FRCs, and as of October 2016, 20 had been delivered. GAO previously reported on the FRC program in March 2017, March 2016, and June 2014 (GAO-17-218, GAO-16-338SP, GAO-14-450). Staffing gap: 3 full time equivalents (FTE) According to USCG officials, the FRC demonstrated all six of its key performance parameters (KPP) during follow-on operational test and evaluation (FOT&E) in July 2016. As of January 2017, DHS’s Director, Office of Test and Evaluation (DOT&E) was in the process of validating the FOT&E results and planned to issue its assessment of the FRC’s performance in February 2017. The FRC completed initial operational test and evaluation (IOT&E) in fiscal year 2013 and partially met one of its six KPPs. In September 2008, USCG officials awarded Bollinger Shipyards Lockport a contract for 1 FRC with options to build up to 33 more. GAO subsequently received a bid protest, which it denied in January 2009, upholding the USCG’s contract award.
In May 2014, the USCG established that it would procure only 32 of the 58 FRCs through this contract. The USCG subsequently purchased the technical specifications and licenses from Bollinger that are necessary to build the FRC and used this information to conduct a full and open competition for the remaining 26 vessels. The USCG has designated this effort as phase 2 of the program. In May 2016, the USCG awarded the phase 2 contract, which officials stated has a potential value of $1.42 billion, to Bollinger Shipyards Lockport. According to USCG officials, the phase 2 design will be similar to the phase 1 cutters, with minimal changes to non-critical systems and updates to address obsolescence issues. The phase 2 contract is the same contract type as the phase 1 contract—fixed price with economic price adjustment—and includes the same warranty. The USCG anticipates delivery of the first phase 2 cutter in spring 2019. The FRC’s July 2016 FOT&E focused on resolving issues found during prior testing. The USCG’s operational test agent (OTA) from the U.S. Navy conducted IOT&E on the FRC in fiscal year 2013 and assessed three of the program’s six KPPs. At that time, the FRC only partially met one of the KPPs tested. IOT&E also revealed several major deficiencies, the most significant of which involved the FRC’s cutter boat, which exhibited problems operating in moderate sea conditions, and the FRC’s main diesel engines, which had multiple equipment failures during testing. Based on these results, independent testers concluded the FRC was operationally effective, but not operationally suitable. USCG officials told GAO they have improved the FRC’s performance since IOT&E. For example, they replaced and successfully tested the FRC’s cutter boat, worked with the engine manufacturer to determine the root cause of equipment failures, and have begun retrofitting the engines. However, as recently as May 2016, three diesel engines were replaced during production on two FRCs, indicating that the problems with the diesel engines are ongoing. The USCG completed FOT&E in July 2016, and the OTA found that several deficiencies from IOT&E had been corrected. For example, the OTA closed a severe deficiency related to the engines based on modifications to the FRC’s main diesel engines, along with observing that the cutter achieved an operational availability of 99 percent during FOT&E. Six major deficiencies from IOT&E remain unresolved, and the OTA identified four new major deficiencies during FOT&E. Ultimately, the OTA declared the FRC operationally effective and suitable. As of January 2017, DOT&E was in the process of assessing the FOT&E results to independently validate the program’s performance. In January 2016, the USCG reported that the FRC program had a staffing gap of 3 full time equivalents. In August 2016, program officials told GAO they had addressed the FRC’s staffing gap and did not have any staffing vacancies. The program continues to experience numerous problems with the FRC’s main diesel engines. Twenty engines have been replaced under the program’s warranty, which according to officials has allowed the USCG to avoid $51.8 million in potential costs. USCG officials said the program is also conducting a 15-week dry-dock period for the first 13 cutters to correct warranty items, which is also being covered by the warranty. This effort began in January 2016 and is expected to continue through November 2019.
Program Office Comments The FRC program is fully funded, executable, and on track for full operational capability by March 2027, within baseline. FRCs provided over 26,000 operational hours in support of the USCG’s Western Hemisphere strategy in the last 12 months, during which over 6,300 undocumented migrants were rescued from unseaworthy vessels and the trafficking of 19,000 kilograms of illegal narcotics was disrupted. The program office looks forward to receiving DOT&E’s independent validation of the program’s performance. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. H-65 Conversion/Sustainment Projects (H-65) United States Coast Guard (USCG) The H-65 aircraft is a short-range helicopter that the USCG uses in search and rescue, ports and waterways security, ice-breaking, marine safety and environmental protection, and defense readiness operations. The H-65 acquisition program increased the USCG’s fleet size from 95 to 102 helicopters and added armament capabilities, upgraded navigation systems, and replaced all of the helicopters’ engines. The program is focused on the final phase of upgrades to the radar sensor system, the automatic flight control system (AFCS), and avionics. The upgrades provide greater reliability and maneuverability, as well as improved interoperability between the H-65 and other government assets. GAO previously reported on the H-65 program in March 2016 (GAO-16-338SP). Staff needed: 24.5 full time equivalents (FTE) According to USCG officials, the program has met 16 of its 18 key performance parameters (KPP), but has not yet demonstrated its 2 avionics KPPs. The USCG plans to demonstrate these KPPs through developmental testing and an operational assessment prior to installing the avionics upgrade across the fleet, but the assessment has been delayed. USCG officials stated that during actual operations, the aircraft have not consistently met 3 of the 16 previously demonstrated KPPs, which are related to operational availability. Program officials previously attributed these shortfalls to difficulties maintaining aging equipment, among other things, which the avionics upgrades should address. The USCG Aviation Logistics Center (ALC) is responsible for procuring and integrating all the systems needed to upgrade the H-65 aircraft. USCG leadership assigned the ALC this responsibility because it was already responsible for overhauling the H-65 aircraft every 4 years as part of normal maintenance. The ALC has completed upgrades to the engines, armament, and navigation systems on all flyable H-65 aircraft. The ALC is in the process of testing the systems for the H-65 aircraft’s avionics and AFCS upgrades. In April 2015, GAO found that the Department of Homeland Security (DHS) funding plan presented to Congress did not identify the operations and maintenance funding the USCG plans to allocate for each of its major acquisition programs—including the H-65—and recommended DHS account for this funding in its future report (GAO-15-171SP). DHS concurred with the recommendation, but has yet to take action. In June 2015, DHS’s Under Secretary for Management (USM) authorized the USCG to award contracts for long-lead production materials for the avionics and AFCS upgrades. Officials estimated these materials will cost $20 million. In September 2016, USCG officials told GAO they had awarded all but 2 of about 40 of these contracts. According to officials, ordering long-lead material was necessary to ensure that the ALC has all the required parts to begin installing the upgrades during normal aircraft maintenance once the program receives approval for initial production.
According to USCG officials, the program has completed several years of developmental testing on the avionics and AFCS upgrades. In 2015, the program revised its Test and Evaluation Master Plan (TEMP) at the request of the USM to ensure the USCG has sufficient data to support approval for the initial production of these upgrades. Specifically, the program added an operational assessment conducted by the U.S. Navy to collect more data about the upgrades prior to the production decision. DHS’s Director, Office of Test and Evaluation approved the TEMP in February 2016, but recommended the program make further updates to reflect anticipated test objective changes prior to program-wide initial operational test and evaluation (IOT&E). IOT&E is intended to test all the H-65 upgrades installed throughout the life of the program to support approval for full-rate production. Officials told GAO they would update the TEMP by August 2018, prior to when IOT&E was scheduled to begin in fiscal year 2019. However, these activities will likely be rescheduled because of the program’s delays. The USCG experienced an over 12-month delay in developing a portion of the avionics and AFCS upgrades, which resulted in the H-65 program declaring a schedule breach in November 2016. USCG officials told GAO in September 2016 that several milestones for the avionics and AFCS upgrades had been delayed. Specifically, the production readiness review, completion of developmental testing, and operational assessment—all of which were planned for summer 2016—had been pushed into 2017. Program officials primarily attributed these delays to an underestimation of the technical effort necessary to meet requirements. As these activities support approval for the avionics and AFCS initial production, this decision was also delayed from the USCG’s target date of December 2016. USCG officials anticipated receiving approval for initial production by the program’s revised Acquisition Program Baseline (APB) threshold date of March 2017, but notified DHS leadership in November 2016 that they would not meet this date. According to USCG officials, they now plan to receive approval for initial production by September 2018—nearly 5 years later than the initial APB date of December 2013. The USCG plans to update the H-65’s APB by May 2017 to account for these delays, which will also reflect schedule changes for subsequent milestones including IOT&E, the full-rate production decision, and full operational capability. In January 2016, the USCG reported the program had a staffing gap of 4 full time equivalents. In September 2016, USCG officials told GAO the program had closed this gap and was sufficiently staffed. USCG officials also stated that they have been able to address long-standing ALC contracting personnel shortages by shifting some contracting duties from the ALC to the USCG contracting office. As of October 2016, USCG officials reported that two aircraft had been lost during operational missions. USCG officials told GAO they are updating the program’s life-cycle cost estimate (LCCE) as a result, and the LCCE will likely decrease because the USCG no longer needs to fund operations and maintenance costs for these aircraft. However, if the USCG chooses to replace the aircraft, officials said there would be no adverse effect on the program’s schedule or acquisition costs because all of the materials for the upgrades were previously purchased.
The USCG anticipates that the program’s schedule delays will result in minor cost increases because of extended labor contracts and inflation, but that these costs will remain within the program’s currently approved cost thresholds. The program’s LCCE previously increased by approximately $6 billion from 2011 to 2014 due to the USCG’s decision to extend the aircraft’s operational life by 9 years, from 2030 to 2039. Program Office Comments USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Long Range Surveillance Aircraft (HC-130H/J) United States Coast Guard (USCG) The USCG uses HC-130H and HC-130J aircraft to conduct search and rescue missions, transport cargo and personnel, support law enforcement, and execute other operations. In 2009, the Department of Homeland Security’s (DHS) Under Secretary for Management (USM) approved an Acquisition Program Baseline (APB) for the HC-130H upgrade program, and a separate APB for the acquisition of the more modern and capable HC-130J aircraft. In 2012, the USM approved a third APB that combined and re-baselined the two programs. In October 2014, USCG officials told GAO they no longer planned to upgrade any additional HC-130H aircraft, and that they were pursuing an all-HC-130J fleet, in response to the addition of C-27J aircraft into the USCG’s fleet of Medium Range Surveillance Aircraft. GAO reported on the USCG’s HC-130H/J program in March 2016 and March 2015 (GAO-16-338SP, GAO-15-325). Staffing gap: 3 full time equivalents (FTE) The HC-130J will not be able to meet two of its seven key performance parameters (KPP) until the USCG installs a new mission system processor on the aircraft, an effort that is underway. These two KPPs are related to the detection of targets and the aircraft’s ability to communicate with other assets. USCG officials said they installed a prototype of the new HC-130J mission system processor in June 2016 and began developmental testing. The USCG plans to conduct further testing on the HC-130J’s mission system processor in fiscal year 2017. USCG officials previously told GAO that the HC-130H aircraft met all six of its KPPs based on operational performance during USCG missions. The USCG plans to acquire 22 HC-130J aircraft, which will eventually replace the existing HC-130H aircraft. After deciding to pursue an all-HC-130J fleet in October 2014, the USCG began to decrease the number of HC-130H aircraft in its fleet. As of January 2017, the USCG had transferred or was in the process of transferring 9 of its 23 existing HC-130H aircraft to other organizations. For example, the USCG is transferring 7 of these aircraft to the U.S. Forest Service. USCG officials told GAO that the USCG will continue to operate 14 of its HC-130H aircraft until the end of their service lives or until they can be replaced with new HC-130J aircraft. Officials anticipate retiring all HC-130H aircraft by fiscal year 2022. As of January 2017, USCG officials said they had received 10 HC-130J aircraft and awarded contracts for 3 more. The USCG must continue to receive HC-130J aircraft at its planned rate to meet the full operational capability date of March 2027. If the remaining aircraft are not delivered at this rate, the program’s schedule could slip. USCG officials stated the delivery rate is dependent on the amount of funding the program receives. It appears that the program is facing a potential $2.2 billion funding gap from fiscal year 2017 through fiscal year 2021.
However, the gap may not be this large, because the USCG has historically received HC-130Js without including them in its budget requests. Additionally, in April 2015, GAO found that the DHS funding plan presented to Congress did not identify the operations and maintenance funding the USCG plans to allocate for each of its major acquisition programs—including the Long Range Surveillance Aircraft program—and recommended DHS account for this funding in its future report (GAO-15-171SP). DHS concurred with the recommendation, but has yet to take action. The USCG is also replacing the mission system processor on all of its fixed-wing aircraft—including the HC-130J—with a system used by the U.S. Navy and DHS’s Customs and Border Protection. The new mission system processor is intended to enhance operator interface and sensor management, as well as replace obsolete equipment. Pending test results, the USCG plans to install the new mission system processor on the 13 HC-130J aircraft it plans to receive by the end of fiscal year 2020. In September 2015, the USCG awarded a contract that will cover retrofitting efforts for 7 of these aircraft for a total of $17.2 million. In October 2016, USCG officials told GAO the program had begun updating its life-cycle cost estimate to support a revised APB that accounts for the cancellation of HC-130H upgrades, the transition to an all-HC-130J fleet, and replacement of the HC-130J’s mission system processor. However, officials said they would not update the APB until the USCG completed its multi-phased analysis of mission needs. Consistent with congressional direction, the USCG conducted this analysis, which addressed its flight-hour goals and its mix of fixed-wing assets—capabilities the USCG is delivering through both the Long Range Surveillance Aircraft program and the Medium Range Surveillance Aircraft program, which GAO also assesses in this report. The USCG submitted the results of this analysis to Congress in November 2016, which confirmed the total quantity of 22 HC-130J aircraft the USCG plans to acquire and an annual flight-hour goal of 800 hours per aircraft. According to program officials, the USCG installed the HC-130J mission system processor prototype and began developmental testing in June 2016. Once developmental testing is complete, USCG officials said they plan to demonstrate the HC-130J’s mission system functionality against its requirements through performance testing conducted by the U.S. Navy in fiscal year 2017. USCG officials noted that this testing will be conducted in various operational environments. However, formal operational testing will not be conducted, which increases the risk that the new mission system processor will not perform as intended or be reliable once fielded. The USCG has not conducted operational testing on either aircraft. In 2009, DHS’s Director, Office of Test and Evaluation (DOT&E) and the USCG determined that the HC-130J airframe did not need operational testing because the U.S. Air Force conducted operational testing on the base C-130J airframe in 2005. Additionally, DOT&E approved a Test and Evaluation Master Plan for the HC-130H upgrades in 2010, but the USCG did not implement the plan because it canceled the upgrades. Despite reporting a staffing gap of 3 full time equivalents, program officials did not attribute any negative effects to workforce shortages.
Program Office Comments USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Medium Range Surveillance Aircraft (HC-144A & C-27J) United States Coast Guard (USCG) In October 2014, Department of Homeland Security (DHS) leadership directed the USCG to restructure its HC-144A acquisition program to accommodate 14 C-27J aircraft from the U.S. Air Force, and designated this combined acquisition the Medium Range Surveillance (MRS) Aircraft program. All 32 aircraft—14 C-27J aircraft plus 18 previously purchased HC-144A aircraft—are twin-engine propeller-driven platforms that the USCG plans to use to conduct all types of Coast Guard missions, including search and rescue and disaster response. In August 2016, DHS leadership approved MRS’s Acquisition Program Baseline (APB), which established the program’s cost, schedule, and performance parameters. GAO previously reported on the MRS program in March 2016 and the C-27J aircraft in March 2015 (GAO-16-338SP, GAO-15-325). Staffing gap: 6 full time equivalents (FTE) The seven HC-144A key performance parameters (KPP) apply to the C-27J aircraft. However, neither aircraft will be able to meet two KPPs until the USCG installs a new mission system processor, an effort that is underway, according to officials. These two KPPs are related to the detection of targets and the aircraft’s ability to communicate with other assets. The HC-144A previously fully met three of its seven KPPs during testing conducted in July 2012. The C-27J aircraft will undergo testing once the USCG installs an entire mission system, consisting of the processor and sensor package, on the aircraft. However, the USCG has deferred its detection KPP due to technology limitations. The USCG initially planned to procure a total of 36 HC-144A aircraft, but reduced that number to the 18 it had already procured after Congress directed the U.S. Air Force to transfer 14 C-27J aircraft to the USCG in fiscal year 2014. As of October 2016, the USCG had accepted 9 C-27J aircraft. The USCG is also replacing the mission system processor on all of its fixed-wing aircraft—including both the HC-144A and C-27J—with a system used by the U.S. Navy and DHS’s Customs and Border Protection. In August 2016, USCG officials told GAO they expected to complete installation of the mission system processor prototype on the HC-144A by December 2016, and plan to outfit all 18 HC-144A aircraft by 2021. These officials said it will take longer to complete installation of this system on the C-27J because the aircraft first needs a sensor package—primarily a radar and electro-optical camera—to meet its requirements. In July 2012, U.S. Navy officials responsible for testing the HC-144A aircraft reported that it was operationally effective and suitable, but fully met only three of its seven KPPs. Program officials previously stated that they are addressing the KPP deficiencies by changing operational tactics until the USCG installs a new mission system processor and other items. USCG officials plan to test the upgraded aircraft through performance testing conducted by the U.S. Navy in fiscal year 2017. USCG officials noted that this testing will be conducted in various operational environments. However, formal operational testing will not be conducted, which may increase the risk that the new mission system processor will not perform as intended or be reliable once fielded.
In October 2014, DHS leadership directed the USCG to test the C-27J mission system in an operational setting. In July 2016, DHS’s Director, Office of Test and Evaluation approved the program’s Test and Evaluation Master Plan for the C-27J, which shows operational testing beginning in April 2021. However, it is unclear when the C-27J will be able to meet its detection KPP because the technology required does not yet exist for this aircraft. In April 2016, the USCG received approval to defer these capabilities until the technology becomes commercially available. Incorporating the C-27J into the USCG’s fleet revised the MRS program’s full operational capability date to March 2025. However, this reflects a 6-month acceleration from the USCG’s revised APB date for the HC-144A. In 2012, the HC-144A’s full operational capability date slipped from September 2020 to September 2025 when the USCG reduced the number of aircraft purchased per year in response to funding constraints. The USCG initially estimated that it may cost $600 million to convert the C-27J aircraft to meet USCG mission needs, but according to the MRS APB, it may cost $1 billion, bringing the program’s total acquisition cost to $2.5 billion. These costs include purchasing a sensor package, redesigning the aircraft and installing the package, and customizing and testing the new mission system processor. The MRS program’s life-cycle cost estimate (LCCE) exceeds $15 billion, but this is an almost $13.6 billion decrease compared to the USCG’s revised estimates for an all-HC-144A fleet. From 2009 to 2012, the HC-144A LCCE increased from $12.3 billion to $28.7 billion when the USCG accounted for 5 years of additional costs, among other things. The MRS program’s LCCE decreased because of the reduced number of aircraft acquired, a reduction in planned flight hours, and the 15-year shorter service life of the C-27J compared to the HC-144A. Nevertheless, the USCG will ultimately procure fewer aircraft than initially planned at a higher cost. The USCG still faces challenges in transitioning the C-27J into the USCG fleet. In March 2015, GAO found that the successful and cost-effective fielding of the C-27J aircraft is contingent on the USCG’s ability to address three risk areas: (1) purchasing spare parts, (2) accessing technical data, and (3) understanding the condition of the aircraft. According to USCG officials, purchasing spare parts remains the greatest risk. However, in September 2016, the USCG awarded an $11 million contract for spare parts. In December 2016, USCG officials also said they had not yet received access to the aircraft’s technical data to start the redesign effort. In January 2016, the USCG reported that the program’s staffing need increased from 15 full time equivalents to 81, much of which was needed to establish a C-27J asset program office at the USCG’s Aviation Logistics Center. The MRS program is projected to face a $1.3 billion funding gap from fiscal year 2017 through fiscal year 2021. However, the funding gap may not be this large. In April 2015, GAO found that the DHS funding plan presented to Congress did not identify the operations and maintenance funding the USCG plans to allocate for each of its major acquisition programs—including the MRS program—and recommended DHS account for this funding in its future report (GAO-15-171SP). DHS concurred with the recommendation, but has yet to take action. 
Program Office Comments USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. National Security Cutter (NSC) United States Coast Guard (USCG) The USCG uses the NSC to conduct search and rescue, migrant and drug interdiction, environmental protection, and other missions. The NSC replaces the USCG’s High Endurance Cutters and is intended to provide improved capabilities over this legacy asset. The NSC carries helicopters and cutter boats, provides an extended on-scene presence at forward deployed locations, and operates worldwide. As of January 2017, the USCG had received six of eight originally planned NSCs, and two were under construction. The Consolidated Appropriations Act, 2016, stated that not less than $640 million shall be immediately available and allotted to contract for the production of a ninth NSC. Each NSC is designed to have a 30-year service life. GAO previously reported on the NSC in March 2017, March 2016, and January 2016 (GAO-17-218, GAO-16-338SP, GAO-16-148). Staffing gap: 9 full time equivalents (FTE) The USCG has been operating the NSC since 2010, but it has not yet demonstrated that the NSC can fully meet 7 of its 19 key performance parameters (KPP). The NSC’s unmet KPPs include those related to unmanned aircraft, cutter-boat deployment, and interoperability requirements. The USCG plans to demonstrate all unmet KPPs during follow-on operational test and evaluation (FOT&E) in fiscal years 2017 and 2018. The USCG awarded a contract to produce the first three NSCs to Integrated Coast Guard Systems—a joint venture between Northrop Grumman and Lockheed Martin—as part of the now-defunct acquisition effort designated Deepwater. In 2006, the USCG revised its Deepwater acquisition strategy, citing cost increases, and took over the role of lead systems integrator, acknowledging that it had relied too heavily on contractors. In 2010, the USCG awarded the production contract for the fourth NSC to Northrop Grumman. In 2011, Northrop Grumman spun off its shipbuilding sector as an independent company named Huntington Ingalls Industries (HII). HII delivered the fourth, fifth, and sixth NSCs, and is producing the seventh and eighth NSCs. In December 2016, the USCG awarded HII a contract to produce the ninth NSC, using the funding made available and allotted by Congress for this purpose in December 2015. The ninth NSC will be built to the same configuration as the eighth NSC. In June 2016, the Department of Homeland Security’s (DHS) Director, Office of Test and Evaluation (DOT&E) approved the NSC program’s revised Test and Evaluation Master Plan (TEMP) in preparation for FOT&E. According to USCG officials, FOT&E will focus on testing all unmet KPPs and resolving deficiencies found during prior testing. The NSC completed its initial operational testing in 2014, and DOT&E subsequently found the NSC operationally effective and suitable. However, the NSC did not fully demonstrate 7 of its 19 KPPs during this testing, including those related to unmanned aircraft and cutter-boat deployment in rough seas. USCG officials indicated that challenges remain in determining a path forward to resolve these KPPs because the USCG and its operational test agent within the U.S. Navy have different interpretations of the cutter boat requirements. In January 2016, GAO recommended the NSC program office clarify the KPPs for the cutter boats, with which the USCG concurred. As of January 2017, the USCG was working on a resolution.
According to USCG officials, the NSC program is on track to meet its revised schedule and cost goals for the first eight NSCs. From 2008 to 2014, the program’s full operational capability (FOC) date slipped 4 years. USCG officials attributed this schedule slip to, among other things, funding shortfalls. Additionally, the program’s acquisition cost estimate increased nearly $1 billion due to lingering effects of Hurricane Katrina, which in 2005 struck the region where the NSCs are built. However, the program’s life-cycle cost estimate (LCCE) decreased by $2.4 billion, which USCG officials attributed to increasingly accurate cost estimates for personnel, materials, and maintenance. As of August 2016, the USCG was developing the test scenarios that it will use to conduct FOT&E in fiscal years 2017 and 2018. Officials stated that, in January 2017, the NSC would become the first USCG asset to undergo cybersecurity testing. The USCG expected to complete installation of an unmanned aircraft on the third NSC in December 2016, but it remains unclear when the USCG will demonstrate the unmanned aircraft KPP. In January 2016, GAO also recommended DHS specify when the USCG must complete the NSC’s FOT&E and any further actions the NSC program should take following FOT&E. The USCG concurred, and in April 2016, DHS issued a memorandum outlining requirements for the program’s FOT&E, including that it be completed by March 2019. This memorandum also directed the USCG to complete a study no later than December 2017 to determine the root cause of the NSC’s propulsion system issues, such as high engine temperatures, cracked cylinder heads, and overheating generator bearings, that are affecting missions—issues GAO also reported on in January 2016. The program’s costs include several design changes the USCG has had to implement to address known equipment issues aboard the NSC fleet. As of September 2016, 12 equipment systems required design changes costing over $1 million each, for an estimated total cost of $260 million. The estimated costs associated with these changes—such as structural enhancement work on the first two NSCs and the replacement of the gantry crane, which aids in the deployment of the cutter boats—have increased by roughly $60 million since GAO reported on this issue in January 2016. Program officials attributed the increase to the revised cost of structural enhancements on NSCs 1 and 2 based on actual contract values and the addition of the ninth NSC. USCG officials told GAO they are updating the program’s Acquisition Program Baseline and LCCE to account for the ninth NSC, but these updates are not expected until September 2017. The USCG anticipates delivery of the ninth NSC in September 2020, which coincides with the program’s revised FOC date. It is unclear how the ninth NSC will affect the program’s costs. In August 2016, USCG officials told GAO they have been able to mitigate any effects of the program’s staffing shortfall with existing staff and were in the hiring process for the program’s remaining critical vacancy. Despite receiving funding for the ninth NSC in fiscal year 2016, the program is projected to face a $1.6 billion funding gap from fiscal year 2017 to fiscal year 2021. However, the funding gap may not be as large as it appears.
In April 2015, GAO found that the DHS funding plan presented to Congress did not identify the operations and maintenance funding the USCG plans to allocate for each of its major acquisition programs—including the NSC—and recommended DHS account for this funding in its future report (GAO-15-171SP). DHS concurred with the recommendation, but has yet to take action. Program Office Comments Cost estimates herein are threshold values from the NSC Acquisition Program Baseline and do not reflect current lower estimates based on award amounts for NSCs 7 and 8. The NSC program completed initial operational test and evaluation (IOT&E) in 2014 and continues to work with DHS to complete remaining testing and resolve pending discrepancies. Despite not fully completing all aspects of IOT&E, USCG operations, led by NSCs, seized more cocaine in 2016 than in any year prior—more than 416,600 pounds, worth over $5.6 billion. USCG officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Offshore Patrol Cutter (OPC) United States Coast Guard (USCG) The USCG plans to use the OPC to conduct patrols for homeland security, law enforcement, and search-and-rescue operations. It will be designed for long-distance transit, extended on-scene presence, and operations with deployable aircraft and small boats. The OPC is intended to replace the USCG’s aging Medium Endurance Cutters and to bridge the operational capabilities provided by the USCG’s Fast Response Cutters and National Security Cutters. The USCG plans to procure 25 OPCs, and it expects to receive the first OPC in 2021. GAO previously reported on the OPC program in March 2016 and June 2014 (GAO-16-338SP, GAO-14-450). Staffing gap: 7 full time equivalents (FTE) Department of Homeland Security (DHS) leadership has approved six key performance parameters (KPP) for the OPC, establishing goals for the ship’s operating range and duration, crew size, interoperability and maneuverability, and ability to support operations in moderate to rough seas. The first OPC has not yet been constructed, so the USCG has not yet demonstrated whether it can meet these KPPs. The USCG plans to use engineering reviews, and developmental and operational tests throughout the acquisition, to measure the OPC’s performance. The USCG used a two-phased down-select strategy to select a contractor to deliver the OPC. For phase 1, the USCG conducted a full and open competition to select three contractors to perform preliminary and contract design work, and subsequently, in February 2014, the USCG awarded fixed-price contracts to Eastern Shipbuilding, Bollinger Shipyard, and Bath Iron Works. For phase 2, the USCG selected one of the three phase 1 contractors to develop a detailed design of the OPC and construct no more than the first 11 ships. In September 2016, the USCG awarded Eastern Shipbuilding the phase 2 contract, worth approximately $110 million for the detailed design, with separate options for each ship. The options for ships 10 and 11 were unpriced and included in the solicitation as an incentive to convert the contract type from fixed price incentive to firm fixed price. These options will be included in a re-pricing proposal submitted by the contractor for ships 6-9 after delivery of the first ship. According to USCG officials, the USCG will decide whether to exercise the option for ships 10 and 11 based on the contractor’s re-pricing proposal for ships 6-9.
The USCG plans to re-compete the contract for the remaining 14 or 16 ships. USCG officials told GAO they are using a warranty similar to that for the Fast Response Cutter (FRC). In March 2016, GAO found that the FRC’s warranty improved cost and quality by requiring the shipbuilder to pay to repair defects. The OPC’s phase 2 contract includes a 2-year warranty for the lead ship and a 1-year warranty for all other ships that includes provisions that govern defects. GAO previously found that the OPC’s existing cost estimate raised questions about the program’s affordability. For example, in September 2012, GAO found that the requirements and missions for the National Security Cutter (NSC) and OPC programs have similarities, but the estimated acquisition unit cost for the OPC was less than half the actual acquisition unit cost for the NSC. At that time, USCG officials recognized that the cost estimate for the OPC was still uncertain since the cutter had yet to be designed. USCG officials also noted that any delays, design issues, or contract oversight problems—all of which were experienced during the procurement of the NSC—could increase the eventual cost of the OPC. In 2012, DHS’s Chief Financial Officer also raised concerns that the OPC’s costs could grow as other shipbuilding programs’ costs have grown in the past, and could ultimately affect the affordability of other USCG acquisition programs. In June 2014, GAO reported that the OPC will absorb about two-thirds of the USCG’s acquisition funding from 2018 to 2032, and recommended that the USCG develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current service level, along with trade-offs if the funding needed to execute the plan is not consistent with annual budgets. The USCG concurred with this recommendation but did not identify an estimated date for completing the plan. In September 2016, USCG officials told GAO that significant investments in the NSC and FRC will be phased out by fiscal year 2021 to support the affordability of the OPC as it ramps up production. DHS’s Director, Office of Test and Evaluation approved the OPC Test and Evaluation Master Plan (TEMP) in October 2011, which the USCG updated to reflect schedule changes resulting from the bid protest. In March 2016, the USCG issued a memo further refining the program’s test schedule and detailing plans for cybersecurity testing, among other things. The USCG plans to conduct developmental testing from fiscal years 2017 to 2022 before conducting initial operational test and evaluation (IOT&E) on the first OPC in fiscal year 2023. In January 2016, the USCG reported that the program office increased its required staffing level from 20 to 29 full time equivalents (FTE), but still had a staffing gap of 7 FTEs. In August 2016, program officials told GAO that the program had closed its staffing gap to 3 FTEs. The remaining critical vacancies are for additional USCG personnel who will oversee construction and manage contract execution at Eastern Shipbuilding’s shipyard once phase 2 activities ramp up. The OPC’s acquisition and life-cycle cost estimates have not changed since 2012. However, the acquisition cost estimate had previously increased—GAO found in June 2014 that this estimate had increased by $4 billion from 2007 to 2012. USCG officials said the increase was largely due to invalid assumptions in the earlier cost estimate, along with schedule delays and inflation.
The program is currently projected to have a nearly $1.2 billion funding gap from fiscal years 2017 to 2021. However, it is unclear whether this assessment of the gap is accurate because the USCG has not updated the OPC’s cost estimate to reflect the schedule delays experienced after the 2012 cost estimate was approved. In addition, USCG officials said that $231 million of the OPC’s costs over this 5-year period are funded by sources outside the program. DHS leadership directed the USCG to update the OPC’s life-cycle cost estimate by March 2017, following award of the phase 2 contract. Program Office Comments USCG officials provided technical comments on a draft of this assessment, which GAO incorporated as appropriate. Transformation United States Citizenship and Immigration Services (USCIS) USCIS spans more than 200 offices worldwide and processes tens of thousands of immigration and citizenship applications each day. The Transformation program was established in 2006 to transition USCIS from a fragmented, paper-based filing environment to a consolidated, paperless environment. However, it struggled to deliver capability for several years, and in 2013, the Department of Homeland Security (DHS) Under Secretary for Management (USM) authorized USCIS to revise its acquisition strategy. According to USCIS, the program is now pursuing a simpler solution based on a new system architecture. However, USCIS cannot use any of the architecture delivered under the old strategy, despite having invested more than $475 million in its development. GAO previously reported on the Transformation program in March and July 2016 (GAO-16-338SP, GAO-16-467). Staffing gap: 17 full time equivalents (FTE); actual staff: 115.34 FTEs In April 2015, DHS leadership approved a revised set of 8 key performance parameters (KPP) after the program struggled to meet its requirements. USCIS will not be able to fully demonstrate these KPPs until it achieves full operational capability (FOC). In the interim, the program has conducted operational assessments of some deployed functionality. In November 2015, DHS’s Director, Office of Test and Evaluation (DOT&E) concluded that the system met 6 of the 7 tested KPPs during an assessment of the product line automating permanent resident card replacement applications. USCIS completed another assessment in March 2016 but, as of January 2017, DOT&E had not assessed these results. In 2008, DHS awarded IBM a task order to deliver the original solution through five software releases. The first release was launched in May 2012, approximately 5 months behind schedule. DHS attributed this delay to its decision to give a single contractor too much responsibility, weak contractor performance, pursuing an unnecessarily complex system, and adopting a development methodology that did not allow DHS to see problems early in the process. To address the delay, the Office of Management and Budget, DHS, and USCIS determined the program should implement a new acquisition strategy, which allowed for an agile software development methodology and increased competition for development work. Under an agile software development methodology, end users, subject matter experts, and testers collaborate with developers, increasing visibility into interim progress. By September 2014, USCIS had awarded four agile development contracts, which expired in September 2016. USCIS officials told GAO they awarded bridge contracts while the development contracts are re-competed. In April 2015, the Acting Deputy USM formally approved a program re-baseline.
Currently, the program plans to deliver capability through 14 releases that correspond to new product lines. Each product line contributes to processing one of four lines of business: Citizenship, Immigrant, Non-immigrant, and Humanitarian. The program’s yearly cost estimates appear to match its funding plan from fiscal years 2017 through 2021, but it is actually projected to have a sizable surplus. USCIS uses revenue from premium processing fees to fund the Transformation program. USCIS expected to carry over $468 million in premium processing revenue into fiscal year 2017, and expects it will still have $327 million in unobligated funds at the end of fiscal year 2021. In March 2016, the program completed its third operational assessment since adopting its new system architecture. The assessment evaluated a software release deployed in 2015 that was intended to help USCIS customers submit immigrant visa payments. In May 2016, the program’s operational test agent (OTA)—a private industry firm—determined that the product line had an overall low risk and should continue to be developed and deployed in accordance with program plans. However, the operational assessment only tested a minor subset of the system’s FOC capability. As of January 2017, DHS’s DOT&E had not independently validated these results. The OTA subsequently conducted a fourth operational assessment intended to inform DHS leadership’s acceptance of the Citizenship line of business. However, according to program officials, the OTA extended the observation period for this assessment after the program breached the Citizenship line of business completion deadline. These officials said the assessment will be completed in 2017, and DOT&E plans to assess the results prior to DHS’s acceptance of the Citizenship line of business. Going forward, the program plans to conduct similar operational assessments several more times through March 2019, when the program plans to achieve FOC. USCIS completed data migration from the old system architecture in March 2016, but subsequently encountered challenges processing all applications as new product lines were transitioned to the new system architecture. In August 2016, the program reverted to the legacy system for processing one of the Citizenship forms. As a result of the switchover and other technical issues with the case management system, the program did not complete deployment of all the product lines associated with the Citizenship line of business by its September 2016 deadline, resulting in a schedule breach. In January 2016, USCIS reported that the program added approximately 30 full time equivalents (FTE), but still had a staffing gap of 17 FTEs. In August 2016, program officials said they had filled some vacant positions, including a division chief, but had several new vacancies for support staff and one project lead. However, program officials did not attribute any negative effects to staffing shortfalls. In November 2016, USCIS submitted a breach remediation plan to DHS leadership that identified several root causes for the breach. These causes included that the program’s schedule did not allow time to gather user feedback or address complexities discovered during development; new requirements were added; and there was no consistent performance requirement from USCIS leadership on what the program was supposed to accomplish for specific product lines.
In July 2016, GAO found that USCIS was not following its own policies or leading practices when developing software, including ensuring that software meets expectations prior to deployment and that development outcomes are defined. GAO made 12 recommendations to improve Transformation program management. USCIS planned to re-baseline the program to account for the schedule delay and subsequently proposed organizational changes. In December 2016, DHS leadership directed USCIS to stop planning and development for new product lines, update its breach remediation plan and acquisition documentation, and brief leadership on the program's revised approach by February 2017.

Program Office Comments

Since its introduction in March 2015, the enhanced system architecture has taken in over 2.7 million cases, and USCIS has introduced four forms. USCIS continues to modernize its processes based on internal user feedback and input. USCIS is reassessing the program goals and schedule and will re-baseline the program in fiscal year 2017. USCIS officials also provided technical comments on a draft of this assessment, which GAO incorporated as appropriate.

The objectives of this audit were to provide congressional committees with insight into the Department of Homeland Security's (DHS) major acquisition programs. We assessed the extent to which (1) DHS's major acquisition programs are on track to meet their schedule and cost goals, (2) major acquisition programs are making progress in meeting key performance parameters (KPP), and (3) DHS has taken actions to strengthen implementation of its acquisition policy and to improve major acquisition program outcomes. To answer these questions, we reviewed 26 of DHS's 71 major acquisition programs, including 24 that we reviewed in 2016. We reviewed all 16 of DHS's Level 1 acquisition programs—those with life-cycle cost estimates (LCCE) of $1 billion or more—that had at least one project, increment, or segment in the Obtain phase—the stage in the acquisition life cycle when programs develop, test, and evaluate systems—at the initiation of our audit. Additionally, to provide insight into some of the factors that can lead to poor acquisition outcomes, we reviewed 10 other major acquisition programs—including 5 Level 1 programs beyond the Obtain phase and 5 Level 2 programs that have LCCEs between $300 million and $1 billion—that we or DHS leadership had identified as at risk of not meeting their cost estimates, schedules, or capability requirements. We have reported on many of these programs in our past work. As part of our scoping effort, we met with representatives from DHS's Office of Program Accountability and Risk Management (PARM), DHS's main body for acquisition oversight, to determine which programs (if any) were facing difficulties in meeting their cost estimates, schedules, or capability requirements. The 26 selected programs were sponsored by eight different components, and they are identified in table 7, along with our rationale for selecting them. To determine the extent to which DHS's major acquisition programs are on track to meet their schedule and cost goals, we collected key acquisition documentation for each of the 26 programs, including all Acquisition Program Baselines (APB) approved at the department level since DHS's current acquisition policy went into effect in November 2008.
DHS policy establishes that all major acquisition programs should have a department-approved APB, which establishes a program's critical cost, schedule, and performance parameters, before they initiate efforts to obtain new capabilities. All 26 programs had one or more department-approved APBs since November 2008. We used these APBs to establish the initial and current cost and schedule goals for the 26 programs. We then developed a data collection instrument to help validate the information from the APBs. Specifically, for each program, we pre-populated a data collection instrument to the extent possible with the schedule and cost information we had collected from the APBs and our 2016 assessment (if applicable) to identify cost growth and schedule slips, if any, since the program's initial baseline was approved. We shared our data collection instruments with officials from the program offices to confirm or correct our initial analysis and to collect additional information to enhance the timeliness and comprehensiveness of our data sets. Additionally, in June 2016, we collected program schedule and cost data from DHS's Investment Evaluation, Submission, and Tracking (INVEST) system, which is the department's system for information on its major acquisition programs. We compared the information obtained through the program offices' data collection instrument responses and the INVEST system to our 2016 assessment (if applicable) or the programs' most recent department-approved APB to identify schedule and cost changes, if any, since January 2016—the data cut-off date of our 2016 assessment. We then met with program officials to identify causes and effects associated with any identified schedule slips and cost growth. Subsequently, we drafted preliminary assessments for each of the 26 programs, shared them with program and component officials, and gave these officials an opportunity to submit comments to help us correct any inaccuracies, which we accounted for as appropriate (such as when new information was available). We also met with senior acquisition oversight officials to share observations about trends and issues across the portfolio. Through this process, we determined that our data elements were sufficiently reliable for the purpose of this engagement. In addition, we compared the cost data we collected for each of the 26 programs to DHS's funding plans to identify any projected funding gaps—a challenge that increases the likelihood that acquisition programs will not meet their schedule or cost goals. Specifically, we compared current yearly cost estimates from department-approved LCCEs, INVEST, or program office updates to the funding plan presented in the Future Years Homeland Security Program (FYHSP) report to Congress for fiscal years 2017-2021, which presents 5-year funding plans for each of DHS's major acquisition programs, to assess the extent to which a program was projected to have a funding gap from fiscal year 2017 through fiscal year 2021. These calculations also accounted for any fiscal year 2016 carryover funds but did not include other funds that programs brought into fiscal year 2016 from sources such as re-programming, fees, and other reimbursable expenses. This analysis was consistent with the methodology we used in our 2016 annual assessment, which allowed us to make comparisons to our March 2016 findings. We shared our analysis with officials from the program offices and components to confirm or correct our calculations.
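To make the comparison concrete, here is a minimal sketch of the gap calculation described above, assuming hypothetical dollar figures throughout; it mirrors the stated method of summing yearly differences between cost estimates and the FYHSP funding plan and crediting fiscal year 2016 carryover.

```python
# Hypothetical sketch of the funding-gap comparison described above:
# yearly LCCE cost estimates vs. the FYHSP funding plan for FY2017-2021,
# with FY2016 carryover funds credited against any shortfall.

cost_estimate = {2017: 120, 2018: 130, 2019: 140, 2020: 150, 2021: 160}  # $M, hypothetical
funding_plan = {2017: 100, 2018: 110, 2019: 120, 2020: 130, 2021: 140}   # $M, hypothetical
fy2016_carryover = 25  # $M, hypothetical

gap = sum(cost_estimate[y] - funding_plan[y] for y in cost_estimate) - fy2016_carryover
if gap > 0:
    print(f"Projected FY2017-2021 funding gap: ${gap}M")
else:
    print(f"No projected funding gap (surplus of ${-gap}M)")
```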
We subsequently identified actions DHS had taken or planned to take to address projected program funding gaps by reviewing key documentation, such as certification of acquisition funding memorandums for programs that had completed an Acquisition Decision Event (ADE) in 2016 and DHS's resource allocation policies and processes. We also met with program officials to identify causes and effects associated with any projected funding gaps, and we interviewed senior financial officials from DHS headquarters to discuss actions they had taken to implement our prior recommendations on addressing program affordability issues. To determine the extent to which DHS's major acquisition programs are making progress in meeting their KPPs, we reviewed DHS's acquisition policy and guidance, as well as key acquisition documentation for all 26 programs, including APBs and operational requirements documents approved at the department level since DHS's current acquisition policy went into effect in November 2008. An operational requirements document provides a number of performance parameters, including the KPPs, which must be met by a program to close an existing capability gap and provide a useful capability to the operator. We used these documents to establish the KPPs for the 26 programs. We included these KPPs in our pre-populated data collection instrument, along with the status of each program's KPPs collected through our 2016 assessment (if applicable), to identify changes, if any, in the programs' KPPs over time. We shared our data collection instruments with officials from the program offices to confirm or correct our initial analysis and to collect additional information to enhance the timeliness and comprehensiveness of our data sets. We also collected test reports and any letters of assessment from DHS's Director, Office of Test and Evaluation (DOT&E), which assess system performance during operational testing. Operational testing is intended to identify whether a system can meet its KPPs and to provide an evaluation of the operational effectiveness and suitability of a system in an operationally realistic environment. For the purposes of our review, we defined operational testing as initial or follow-on operational test and evaluation events, operational assessments, and limited user tests. We used the programs' APBs, data collection instruments, and other documents to identify whether the programs had deployed new capabilities to operators. We then reviewed the programs' test reports and DOT&E letters of assessment to determine which KPPs were tested and whether the system met all of the KPPs tested. We relied on information provided by the program offices, such as the data collection instrument responses, in instances where programs did not have test reports and DOT&E letters of assessment, or where these documents did not explicitly assess programs' KPPs. We considered a program's KPP met if it achieved, at a minimum, the threshold value outlined in the program's APB or operational requirements document. We assessed DHS's acquisition policy, guidance, and practices against GAO's acquisition best practices for managing acquisition programs. We also met with officials from the program offices to identify reasons why KPPs had not yet been demonstrated, and we interviewed senior officials from DHS headquarters about the department's performance breach policy and requirements definition processes.
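The KPP scoring rule just described (a KPP counts as met only if the demonstrated value reaches at least the documented threshold) can be expressed in a small sketch. All parameter names and values below are hypothetical, as is the handling of "lower is better" parameters, which the report does not address.

```python
# Minimal sketch of the KPP-met rule described above. KPP names,
# thresholds, and demonstrated values are hypothetical; the
# "lower_is_better" handling is an added assumption for illustration.

kpps = [
    {"name": "detection rate (%)", "threshold": 90.0, "demonstrated": 93.5},
    {"name": "availability (%)", "threshold": 95.0, "demonstrated": 91.2},
    {"name": "response time (s)", "threshold": 30.0, "demonstrated": 24.0,
     "lower_is_better": True},
]

for kpp in kpps:
    if kpp.get("lower_is_better"):
        met = kpp["demonstrated"] <= kpp["threshold"]
    else:
        met = kpp["demonstrated"] >= kpp["threshold"]
    print(f'{kpp["name"]}: {"met" if met else "not met"}')
```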
To determine the extent to which DHS has taken actions to improve major acquisition program outcomes and to strengthen implementation of its acquisition policy, we reviewed DHS's acquisition policy and guidance, including current and prior versions of Acquisition Management Directive 102-01 and Instruction 102-01-001; acquisition decision memorandums issued in calendar year 2016; and key acquisition documentation for major acquisition programs, such as APBs, LCCEs, and operational requirements documents, as well as breach notifications and remediation plans. We used the acquisition policy and guidance to identify changes made by DHS in 2016, such as establishing new oversight initiatives or revising existing policies. We then used the acquisition decision memorandums and program documentation to assess DHS's implementation of its acquisition policy in 2016. Specifically, for programs that received DHS approval for an ADE in 2016, we compared the acquisition documentation approved by DHS leadership for that event to the documentation requirements in DHS's acquisition policy. In addition, we reviewed program breach notifications, breach remediation plans, and acquisition decision memorandums for each of the programs that reported a breach in calendar year 2016 against DHS's acquisition policy. We assessed DHS's acquisition management policies, guidance, and practices against the Standards for Internal Control in the Federal Government. Lastly, we interviewed acquisition management officials from DHS headquarters to obtain their perspectives on how new and ongoing acquisition management initiatives are intended to improve program outcomes, as well as key management decisions. We conducted this performance audit from May 2016 through April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact listed above, Richard A. Cederholm (Assistant Director), Katherine Trimble (Assistant Director), Aryn Ehlow (Analyst-in-Charge), Peter Anderson, Mathew Bader, Steven Bagley, Jason Berman, Carissa Bryant, Andrew Burton, Erin Butkowski, Lisa Canini, Jenny Chow, Adam Couvillion, John Crawford, Lorraine Ettaro, Laurier R. Fish, Laura Gibbons, Betsy Gregory-Hosler, Yvette Gutierrez, Leigh Ann Haydon, Kirsten Leikem, Sarah Martin, John Mickey, Erin O'Brien, Alexis Olson, Katherine Pfeiffer, John Rastler, Sylvia Schatz, Jillian Schofield, Charlie Shivers III, Roxanna Sun, Lindsay Taylor, and Hai Tran made key contributions to this report. Homeland Security Acquisitions: Joint Requirements Council's Initial Approach Is Generally Sound and It Is Developing a Process to Inform Investment Priorities. GAO-17-171. Washington, D.C.: Oct. 24, 2016. Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: Mar. 31, 2016. National Security Cutter: Enhanced Oversight Needed to Ensure Problems Discovered during Testing and Operations Are Addressed. GAO-16-148. Washington, D.C.: Jan. 12, 2016. TSA Acquisitions: Further Actions Needed to Improve Efficiency of Screening Technology Test and Evaluation. GAO-16-117. Washington, D.C.: Dec. 17, 2015.
Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: Apr. 22, 2015. Coast Guard Aircraft: Transfer of Fixed-Wing C-27J Aircraft Is Complex and Further Fleet Purchases Should Coincide with Study Results. GAO-15-325. Washington, D.C.: Mar. 26, 2015. Homeland Security Acquisitions: DHS Should Better Define Oversight Roles and Improve Program Reporting to Congress. GAO-15-292. Washington, D.C.: Mar. 12, 2015. Coast Guard Acquisitions: Better Information on Performance and Funding Needed to Address Shortfalls. GAO-14-450. Washington, D.C.: June 5, 2014. Homeland Security Acquisitions: DHS Could Better Manage Its Portfolio to Address Funding Gaps and Improve Communications with Congress. GAO-14-332. Washington, D.C.: Apr. 17, 2014. Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: Sept. 18, 2012. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: Nov. 18, 2008.
In fiscal year 2016, DHS planned to invest about $7 billion in major acquisitions. DHS's acquisition activities are on GAO's High Risk List, in part due to program management, requirements, and funding issues. The Explanatory Statement accompanying the DHS Appropriations Act, 2015 included a provision for GAO to review DHS's major acquisitions. This report, GAO's third annual review, addresses the extent to which (1) DHS's major acquisition programs are on track to meet schedule and cost goals, (2) these programs are meeting KPPs, and (3) DHS has strengthened implementation of its acquisition policy. GAO assessed DHS's 15 largest acquisition programs that were in the process of obtaining new capabilities as of May 2016, and 11 additional programs that GAO or DHS identified as at risk of poor outcomes. For all 26 programs, GAO reviewed key documentation, assessed performance against baselines established since DHS's 2008 acquisition policy, and met with program officials. GAO also met with DHS acquisition officials and assessed DHS's policies and practices against GAO acquisition best practices and federal internal control standards. For the first time since GAO began its annual assessments of the Department of Homeland Security's (DHS) major acquisitions, all 26 programs that were reviewed had a department-approved baseline. During 2016, over half of the programs reviewed (17 of the 26) were on track to meet their initial or revised schedule and cost goals. However, 7 of these 17 programs only recently established baselines, 6 of which operated for several years and deployed capabilities without approved baselines. The remaining 9 programs experienced schedule slips, including 4 that also experienced cost growth. The table shows the schedule and cost changes across all 26 programs reviewed; much of the change was driven by a few programs. As of January 2017, 14 of the 26 programs deployed capabilities before meeting all key performance parameters (KPP)—the most important requirements that a system must meet. As a result, DHS may be deploying much-needed capabilities—such as border surveillance equipment and Coast Guard cutters—that do not work as intended. Programs did not meet KPPs for a variety of reasons: for example, KPPs were not yet ready to be tested, systems failed to meet KPPs during testing, or KPPs were poorly defined. Contrary to acquisition best practices, DHS policy requires programs to establish schedule, cost, and performance baselines prior to gaining full knowledge about the program's technical requirements. As a result, DHS programs do not match their needs with available resources before starting product development, which increases programs' risk for cost growth, schedule slips, and inconsistent performance. In 2016, DHS strengthened implementation of its acquisition policy by, for example, focusing on program staffing needs, requiring programs to obtain department approval for key acquisition documents, and revising the process for when programs breach their cost goals, schedules, or KPPs. However, DHS could better document leadership's acquisition decisions to improve insight into cases that diverge from policy. For example, DHS approved six programs to proceed through the acquisition life cycle even though required documentation was not comprehensive or had not been approved, as required by DHS's policy.
Senior DHS officials told GAO these decisions were also based on discussions held at the programs' formal acquisition reviews, but these considerations were not documented. Federal internal control standards require clear documentation of significant events. DHS leadership's decisions may be reasonable, but unless these decisions are documented, insight for internal and external stakeholders is limited. Furthermore, no programs reported a performance breach, even though some programs had not met KPPs. DHS's policy is not clear on how to determine whether a performance breach has occurred. As a result, DHS lacks insight into potential causes of performance issues that may contribute to poor outcomes. DHS should ensure that programs define technical requirements before setting baselines; document rationale for key acquisition decisions; and clarify when not meeting KPPs constitutes a breach. DHS concurred with GAO's recommendations.
This section discusses the offices within FDA and other federal agencies that help oversee imported food products. It also discusses FDA's approach for overseeing the safety of the imported food products for which it is responsible, including milk, seafood, fruits, and vegetables. FDA's responsibilities for overseeing the safety of imported food products are divided among its product centers and program offices. For example, the Center for Food Safety and Applied Nutrition (CFSAN) is responsible for regulating food and cosmetics products. FDA's Office of Regulatory Affairs (ORA) is the lead office for ensuring the safety of imported food through various field activities, such as inspections of firms, review of imported products, examination of products, and sample collection and analysis. ORA works closely with the centers and with the Office of International Programs (OIP), which coordinates some of the agency's international activities. FDA has staff in China, Chile, Costa Rica, India, Mexico, and the European Union. These foreign offices help ORA obtain foreign scientific and regulatory information, conduct investigations and facility inspections in foreign countries, and facilitate collaboration with foreign regulators in areas of common interest. Primary responsibility for imported food safety is divided between FDA and FSIS, but other federal agencies play a role. For example, among other things, CBP supports FDA's enforcement of food safety regulations at the border. CBP's automated systems process all imported shipments, including food. In addition, NMFS provides fee-for-service inspection services, upon request, to the seafood industry—including domestic and foreign processors, distributors, and other firms—for example, to certify that these seafood firms comply with FDA's Hazard Analysis and Critical Control Point regulations and other federal food safety standards. Some retailers request this certification as a condition for purchasing seafood products. FDA oversees the safety of imported food by promoting corporate responsibility, examining food before it enters U.S. commerce to help ensure that unsafe food does not enter the country, and responding when unsafe food does enter the country. In 2011, FSMA gave FDA new requirements and authorities intended to hold imported foods to the same standards as domestic foods and to prevent unsafe food from entering the country. These included a requirement to develop a comprehensive plan to expand the food safety capacity of foreign governments; a requirement to increase inspections of foreign food facilities and the authority to enter into cooperative arrangements with foreign governments to facilitate the inspection of foreign food facilities; the authority, under certain circumstances, to require as a condition of admission that imported food be accompanied by certifications or assurances from accredited third-party certification bodies, agencies, or representatives of foreign governments where the food originated; and the authority to require importers to verify that their foreign suppliers meet applicable U.S. food safety standards. Owing in part to the volume of imported food, FDA cannot physically examine every shipment; the agency examines about 1 percent of entry lines annually. FDA electronically screens all imported food shipments to determine which imports to physically examine at the border and which imports to allow into U.S. commerce.
The electronic screening process consists of two phases: (1) Prior Notice screening, which is intended to protect against potential terrorist acts and other public health emergencies, and (2) admissibility screening, which is intended to ensure that the food is admissible under the Food, Drug, and Cosmetic Act (FDCA). Imported food products are generally considered admissible if they are in compliance with applicable FDCA regulations that ensure food is not adulterated, misbranded, manufactured or packed under insanitary conditions, or restricted in sale in the country in which it was produced or from which it was exported, among other things. The first phase, Prior Notice screening, requires that an importer, broker, or other entity submit notification to FDA of food being imported or offered for import into the United States before that food arrives at the port of entry. Prior Notice information includes data such as the names and addresses of the manufacturer or grower, importer, owner, and ultimate consignee (the "deliver to" location); the FDA product code for the food item; and the country of production. Information that FDA uses for Prior Notice review can either be submitted electronically through CBP's system, which passes the information to FDA, or submitted electronically directly to FDA. FDA targets, screens, and reviews the information to ensure that it meets the Prior Notice requirements and to determine whether the food potentially poses a terrorism threat or other significant health risk. If such risks are identified, FDA is to work with CBP to examine the shipment upon arrival at the port. If adequate Prior Notice information is not provided, the food is subject to refusal of admission and may not be delivered to the importer, owner, or consignee. If FDA subsequently verifies that adequate Prior Notice information is provided and the food does not appear to pose a terrorism threat, the shipment is allowed to proceed to FDA's admissibility screening. During the second phase, admissibility screening, FDA electronically screens entry lines using PREDICT to determine their level of risk. "Risk"—reflected by the PREDICT risk score—includes factors such as the inherent health risk of the product, compliance risk associated with firms, facility inspection results, and broker history, among others; the resulting score is then compared with those of all other entry lines within a specified commodity over the past 30 days. If PREDICT determines that an entry line poses a low risk, PREDICT recommends the entry be allowed to enter into U.S. commerce, and another FDA system, called the Operational and Administrative System for Import Support (OASIS), issues a system-generated "May Proceed" message, allowing the product into U.S. commerce without further review. If PREDICT determines that the entry line poses a high risk, an entry reviewer decides whether to examine the entry line for admissibility. The entry reviewer determines if the entry line is under import alert. If the entry line is not under import alert and no further examination is needed, the entry reviewer allows the entry line into U.S. commerce by issuing a manual "May Proceed" message through OASIS. The entry reviewer may determine that a field or laboratory examination is needed for an entry line that is not under an import alert. In this case, an FDA investigator examines the entry either at the port of entry or at another location, such as the importer's or consignee's warehouse or a cold storage facility.
The investigator may examine a product's label to determine whether it meets labeling requirements. Additionally, the investigator may examine the shipment for rodent or insect activity or inadequate storage while in transit, among other things. Figure 2 shows FDA investigators examining imported food. If the product does not appear to be violative after the examination, the owner or consignee receives a notice stating that the line is released, and the product is allowed into U.S. commerce. If the product appears to be violative, the investigator may decide to take samples from the product for a laboratory examination to test for rodent or insect activity and other such elements that could confirm that the entry is violative. If the samples indicate that the food is not violative, the owner or consignee receives a notice stating that the line is released, and the product is allowed into U.S. commerce. If the samples are violative, the owner or consignee receives a notice stating that the line is being detained and is subject to refusal. Once the owner or consignee receives a notice stating that the line is being detained and is subject to refusal, the owner or consignee may request that FDA immediately refuse the product, and the product must then either be exported or destroyed. If the owner or consignee does not request refusal, then the owner or consignee decides whether to submit testimony or request to "recondition" the product—for example, relabeling the product or converting the product into a type of product not regulated by FDA. If the owner or consignee submits testimony regarding the admissibility of the food or FDA approves a request to recondition the product, then an FDA hearing determines whether the product should be released. If FDA determines that the owner or consignee has provided sufficient information to overcome the appearance of a violation, the owner or consignee receives a notice stating that the product is released. If FDA determines that the owner or consignee's actions failed to bring the product into compliance, the food must be exported or destroyed. If the product is subject to an import alert, FDA determines whether the product should be detained without physical examination. If FDA determines that the product should not be detained, the owner or consignee receives a notice stating that the product is released. If FDA determines that the product should be detained, FDA issues a notice stating that the line is being detained and is subject to refusal, and the owner or consignee decides whether to export or destroy the product. Figure 3 illustrates the key elements of FDA's screening process. From 2012 to 2014, about 0.1 percent of all food entry lines were refused and exported or destroyed each year. During this period, the countries with the highest number of refused entry lines were India, Mexico, and China, and the most commonly refused items included rice, herbals and botanicals (not teas), tuna, shrimp and aquaculture-harvested seafood products, and vitamins and minerals. When unsafe food enters the country, FDA may respond by issuing advisories about the affected food. Depending on the circumstances, FDA may also use other enforcement tools, such as seizure, injunction, and administrative detention. FDA can also seek voluntary recalls of unsafe food. If a responsible party does not voluntarily recall the food, FDA may issue a mandatory recall order, provided that certain criteria are met.
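The branch structure of the admissibility flow described above, and depicted in figure 3, can be condensed into a short sketch. This is a simplification for illustration only: the import alert, reconditioning, and hearing steps are collapsed, and all function and parameter names are hypothetical.

```python
# Condensed, hypothetical sketch of the admissibility branch logic
# described above. Real PREDICT/OASIS processing is far richer; this
# mirrors only the narrative's main branches.

def screen_entry_line(low_risk, under_import_alert, exam_needed, violative):
    """Return a simplified disposition for one entry line."""
    if low_risk:
        return "May Proceed (system-generated via OASIS)"
    if under_import_alert:
        return "Detained without physical examination; subject to refusal"
    if not exam_needed:
        return "May Proceed (manual, issued through OASIS)"
    if violative:
        return "Detained; subject to refusal (export, destroy, or recondition)"
    return "Released after field/laboratory examination"

print(screen_entry_line(low_risk=False, under_import_alert=False,
                        exam_needed=True, violative=False))
# -> Released after field/laboratory examination
```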
In early 2010, FDA began an effort to assess foreign food safety systems to determine whether certain other countries have a regulatory system—including food safety statutes, regulations, and an implementation strategy—that is comparable to the U.S. food safety system. Under this effort, FDA in 2010 developed a systems recognition tool to assess the overall food safety systems of foreign countries for foods under FDA's jurisdiction. FDA completed its on-site review for a systems recognition pilot with New Zealand food safety authorities in late 2010 and established a Systems Recognition Arrangement in 2012. In May 2016, FDA reported that it had established a similar arrangement with Canada. PREDICT uses a variety of data and FDA-created rules—conditional statements that tell PREDICT how to react when encountering particular information—to analyze these data and to identify high-risk imported food shipments. These data include information from FDA sources; non-FDA domestic sources, such as other federal and state agencies; and foreign sources, such as foreign governments. Some of the domestic and foreign sources are open sources—that is, publicly available. FDA does not currently have a documented process for obtaining open source data. PREDICT analyzes all of the data by applying rules that contribute to risk-based scores. PREDICT then recommends potential FDA actions, such as holding an entry line for examination or allowing it to proceed into commerce. PREDICT uses a variety of data to estimate the risk of imported food. These data come from internal FDA sources, other domestic sources, and foreign sources. Figure 4 provides an overview of the types of data sources PREDICT uses. Many of the data PREDICT uses come from sources within FDA. These include electronic sources, such as databases, and human sources, such as FDA officials. Electronic sources. Most of the electronic FDA sources that PREDICT uses are databases that contain historical data about products, firms, and other elements of imported shipments, such as the geographic location of facilities. PREDICT uses numerous kinds of historical data stored in various FDA databases, such as data about (1) the previous field examination and facility inspection history of foreign firms, (2) the track record of importers and brokers, and (3) the types of food products historically imported from other countries. One of the other databases that PREDICT uses contains official FDA safety violation data, such as FDA import alerts and import bulletins. Another database that PREDICT uses contains registration data for facilities that manufacture, process, or pack a specific type of food product, acidified and low-acid canned foods, which are specifically tracked because growth of the botulinum bacterium in canned food may cause a deadly form of food poisoning. Human sources. The human FDA sources that PREDICT draws upon include FDA officials who compile and report data based on their knowledge of products, firms, and other elements of imported shipments, such as country of origin. For example, certain PREDICT rules use FDA's knowledge and experience to determine the inherent risk of certain food products, and certain other PREDICT rules draw from data reported by FDA entry reviewers about the past record of products or firms. According to FDA officials, PREDICT also uses data from sources outside of FDA, including domestic sources, such as databases from states and other federal agencies. For example, PREDICT uses data from CBP.
Specifically, CBP collects entry data from importers for products being imported into the United States and electronically transfers the data to FDA. PREDICT also uses data from other federal agencies, such as NMFS. NMFS maintains a list of approved seafood establishments, based on its fee-for-service inspections, on its website and may provide this list electronically to FDA upon request. However, FDA requests information from NMFS only on an occasional, as-needed basis. For example, FDA requested information from NMFS in 2010 during the Gulf of Mexico oil spill. More recently, FDA requested NMFS audit reports and facility approvals for certain overseas locations. FDA officials told us that PREDICT also uses data from foreign sources, such as foreign governments. Specifically, FDA's domestic and overseas offices obtain information from foreign regulatory counterparts on an occasional, as-needed basis about food safety in their respective countries and provide summary information to ORA that may be used in PREDICT. FDA officials told us that the agency may receive nonpublic information from foreign governments only in cases where formal arrangements called Confidentiality Commitments are in place. Of the more than 230 entities from which the United States imports food, FDA has 39 food-related Cooperative Arrangements with 24 foreign governments—including 1 arrangement with the European Union, which has 28 member countries. FDA officials told us that they are in the process of developing additional cooperative arrangements. One FDA office involved in obtaining information from foreign regulatory counterparts is OIP, which has offices in the United States and abroad that help obtain foreign scientific and regulatory information by coordinating with foreign regulators or accessing foreign media sources. One country covered by OIP's Office of Regional and Country Affairs is Japan. In March 2011, after a 9.0 magnitude earthquake and subsequent tsunami in Japan, the Fukushima Daiichi nuclear power plant suffered extensive damage. The Office of Regional and Country Affairs coordinated with the Japanese government to identify Japanese exports that have a high risk of contamination. As a result, FDA issued Import Alert 99-33: "Detention Without Physical Examination of Products from Japan Due to Radionuclide Contamination." This alert informs FDA field personnel that they may detain without physical examination certain food shipments from Japan. FDA officials told us that years after the disaster, the agency continues to obtain information from the Japanese government, and the import alert is still in place and was last revised in April 2016. As of March 10, 2014, FDA had tested 1,345 import and domestic samples specifically to monitor for Fukushima contamination. Of the 1,345 samples, 2 were found to contain detectable levels of a contaminant, but the levels posed no public health concern. Both the non-FDA domestic sources and the foreign sources may include open sources—that is, sources that are publicly available, such as newspapers and Internet blogs. Data from open sources may provide information about events—such as food recalls and natural disasters—that could affect the risk of imported foods. Although ORA collects some open source data on its own, it also relies on other FDA offices and federal agencies that gather open source data for their own purposes and that may communicate to ORA information that seems relevant to the import screening mission.
For example, OIP’s Europe office monitors news related to FDA-regulated products in Europe. If OIP obtains recall information that it deems significant to FDA-regulated commodities, it communicates the information to ORA and other relevant FDA offices. ORA may also obtain open source data from other federal agencies, such as from the agencies represented at CBP’s Commercial Targeting and Analysis Center, which targets and screens commercial shipments that may pose a threat to the health and safety of U.S. consumers. Open source data have proven useful in the past. For example, in August 2015, an explosion occurred at a warehouse in the port of Tianjin, China, involving hazardous chemicals, including cyanide, posing a potential threat to public health. In the aftermath, OIP, through its China Office, collected information about the explosion from local media sources. As a result, FDA increased surveillance of FDA-regulated imports from manufacturing, processing, and/or packing facilities in the Tianjin area. In this particular instance, FDA also notified importers of this increased level of surveillance. Prior to 2015, FDA had a formal contract dedicated to the collection of open source data for PREDICT. FDA’s 2013 PREDICT operating manual, PREDICT Guide: Rules and Scoring, documented how the agency was to obtain open source data, the type of data to be provided by the contracted company—such as data about natural disasters—and how PREDICT used the data. According to the manual, the PREDICT Open- Source Intelligence Team used open source data to locate news stories pertaining to food safety around the world, looking specifically for geophysical events (e.g., tsunamis, typhoons, and earthquakes) or ecological events (e.g., algal blooms, overuse of antibiotics, and contamination from various spills that could affect FDA-regulated commodities). Analysts then researched each event or action to understand the content and context of the issue, develop an analytic conclusion, and recommend a risk score and expiration date. FDA terminated that contract because the agency determined that it was not cost-effective, given the availability of information from other public sources, such as the Internet and other offices within the agency. The 2015 version of FDA’s PREDICT operating manual does not document the process for identifying the type of open source data to collect, obtaining such data, and determining how PREDICT is to use the data. Instead, FDA relies on ORA officials to informally communicate and obtain information from officials in other FDA offices and from other federal agencies on an ad hoc basis. FDA told us that because the agency did not renew the contract, the 2015 version omitted references to obtaining and using open source information. ORA officials told us they now use a variety of informal and situation-dependent methods to obtain open source intelligence, but these methods are not formally documented. We have previously found that by using informal coordination mechanisms, agencies may rely on relationships with individual officials to ensure effective collaboration and that these informal relationships could end once personnel move to their next assignments. Under federal standards for internal control, agencies are to clearly document internal control, and the documentation is to appear in management directives, administrative policies, or operating manuals. 
Without a documented process, FDA does not have reasonable assurance that it is consistently identifying the type of open source data to collect, obtaining such data, and determining how PREDICT is to use the data in a regular and systematic manner. A senior ORA official in its Division of Import Operations told us that formalizing and documenting the process by which FDA receives information from other entities would improve the current process for using open source information. FDA creates rules, conditional statements that tell PREDICT how to react when encountering particular situations. For example, if a certain PREDICT rule encounters the term “tamarind” in the description of a product, it should recommend detaining the item without physical examination. This rule was created as the result of an incident in which filth was found in tamarind products. PREDICT’s application of rules generates a cumulative risk score for each entry line; the cumulative risk score includes factors such as the product’s inherent risk and the manufacturing firm’s facility inspection history. The risk score can indicate a negative risk value (signifying low risk) or a positive risk value (signifying higher risk). The individual risk values assigned to the various elements of the entry line help produce the cumulative risk score for the entire entry line and possibly also one or more recommended actions. The application of a rule may prompt a flag (for example, an import alert flag), which may provide information—such as that a manufacturer’s product was recently examined or that the product was refused admission by one or more foreign countries—and recommended an action. For example, it may point out that some of the entry line data are missing or invalid—based on a comparison with other FDA databases—and recommend performing a manual review of databases. After PREDICT has applied the rules to the entry data, entry reviewers analyze the results. The first thing an entry reviewer sees is the main entry review screen, which lists all the current entry lines to be reviewed and their PREDICT risk scores and flags. The entry reviewer can click on a PREDICT risk score to see a table, referred to as the PREDICT Mashup. The PREDICT Mashup displays the individual risk values that contributed to the overall PREDICT risk score for that entry line. For example, the PREDICT Mashup may show that PREDICT assigned the product’s manufacturer a risk-based score of 10 for its facility inspection history, which would signify a high risk. The same Mashup may show that PREDICT assigned the product’s country of origin a risk-based score of 3 for its history, which would signify a lower risk. The entry reviewer can also “roll over,” or move the cursor over, flags to obtain more information. For example, an entry reviewer may roll over a “recent targeted activity” flag—which indicates that this product from this manufacturer was recently examined—to obtain information (sometimes through another database) about when the product was examined and the results of the examination. The entry reviewer uses all of this information to decide how to proceed. For example, the entry reviewer may decide to allow the entry line to proceed into commerce or to hold it for examination or laboratory analysis. Any action taken by the entry reviewer, including the results of any examination, will become part of the history of the facility, which will then contribute to future PREDICT scores. 
FSMA, which was enacted in 2011, contains provisions that will provide PREDICT with additional data to analyze (i.e., from additional sources) when estimating the risk of imported food. FDA officials told us that even with the changes brought about by FSMA, PREDICT will continue to be used to target imports for examination but will now draw data from even more sources. FDA officials identified five FSMA provisions and authorities as likely to generate the most data for use in PREDICT. Of these five, the first three are as follows: Foreign Supplier Verification Program (FSVP): The FSVP rule requires importers to verify that their foreign suppliers use processes and procedures that provide the same level of public health protection as the hazard analysis and risk-based preventive controls and other applicable requirements of FDCA. For example, importers must develop, maintain, and follow an FSVP that provides adequate assurances that the foreign supplier is producing food that is, among other things, not adulterated (e.g., that it does not contain Salmonella) and is not misbranded with respect to labeling for the presence of major food allergens. Verification activities could include monitoring records, annual on-site inspections or review of such inspections performed by a qualified auditor, and testing and sampling of shipments. The date by which importers must comply with the FSVP regulations depends on a number of factors. Inspection of foreign food facilities: FSMA includes a provision that authorizes FDA to direct resources according to the known safety risks of facilities, especially those that present a high risk. The law directs FDA to inspect at least 600 foreign facilities within 1 year of enactment of FSMA and, in each of the 5 years following that period, to inspect at least twice the number it inspected during the previous year. Laboratory accreditation: FSMA also includes a provision that requires FDA to establish a program for the testing of food by accredited laboratories and to recognize accreditation bodies for accrediting the laboratories, including independent private laboratories. The provision requires owners or consignees to have food products tested by an accredited laboratory in certain circumstances, including when a food product is under an import alert that requires successful consecutive tests. The results of these tests may be submitted to FDA electronically and used to determine compliance and admissibility of the food product. FDA has not yet issued a proposed rule for establishing the program, but an agency official stated that the agency is working on a rule now. The other two FSMA provisions likely to generate data for use in PREDICT are related to third-party certification, for which FDA issued the final rule in November 2015. FSMA directs the establishment of a system for the recognition of accreditation bodies to accredit third-party certification bodies (also known as auditors) to certify that a foreign facility, or any product produced by the facility, meets the applicable FDA food safety requirements. If an accredited third-party certification body or its audit agent discovers a condition that could cause or contribute to a serious risk to public health, the certification body must immediately notify FDA. 
The following two FSMA provisions rely on third-party certifications and will likely generate data for use in PREDICT: Voluntary Qualified Importer Program (VQIP): FSMA requires FDA to establish a program that offers expedited review and entry to certain participating importers that import food from foreign facilities certified by accredited third-party certification bodies. FDA published draft guidance for VQIP in the Federal Register in June 2015. According to an FDA Fact Sheet on VQIP, the agency expects to begin receiving applications for the program in January 2018. Import certifications: Under FSMA, FDA has the authority, under certain circumstances, to require certifications or other assurances from agencies, or representatives of foreign governments where the food originated, or an accredited third-party certification body. FDA can determine that an article of food requires certification based on, among other factors, the known safety risks associated with the food; the country, territory, or region of origin of the food; or evidence that the food safety systems for that product are inadequate. FDA published the final rule on accreditation of third-party certification bodies in the November 27, 2015, Federal Register. Because some of these FSMA-related programs are still new and not yet fully implemented, the details of how certain data generated by these programs will be integrated into PREDICT are not finalized, but according to FDA officials, PREDICT will use data produced by these programs and authorities in a variety of ways. Implementation of FSVP, for example, will identify importers that are not in compliance, and PREDICT would use that information in assigning risk scores to products from those importers. The food products from suppliers and importers that have a history of noncompliance with food safety regulations will receive higher PREDICT risk scores than those that are consistently in compliance. In addition, if the results from an inspection of a foreign food facility show that the facility is noncompliant, that information would be provided to PREDICT, and PREDICT would use that information in assigning risk scores to imports from that facility. Figure 5 shows how the implementation of FSMA will add to the data sources already used by PREDICT. FDA has assessed the effectiveness of PREDICT by ongoing monitoring of key program data and by conducting an evaluation of the tool in 2013, and it has implemented many, but not all, of the 2013 evaluation’s recommendations to improve PREDICT. Data maintained by agency officials show that PREDICT is working properly to focus reviewers’ attention on food items that have a high probability of being violative. Moreover, when FDA officials conducted an evaluation of the system in 2013, results showed that PREDICT was working well to provide staff with the data they need to make informed entry review decisions. However, the agency has not yet implemented all of the 24 recommendations resulting from that evaluation. FDA officials told us that they conduct monitoring of PREDICT on an ongoing basis and that the data they collect as part of this monitoring show that PREDICT is working as intended—that is, PREDICT is focusing entry reviewers’ attention on items determined to be of higher risk. Data provided by FDA from fiscal years 2012 to 2014 confirmed that in general, PREDICT is fulfilling this role. 
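The kind of tabulation behind this monitoring, reported in tables 1 through 3, can be illustrated with a minimal sketch: group entry lines into 10-point PREDICT score bins and compute, for each bin, the share of lines examined and the share of examined lines found violative. The sample records below are hypothetical.

```python
# Hypothetical sketch of the score-bin tabulation behind tables 1-3.
# Records are invented: (predict_score, examined, violative).

from collections import defaultdict

entry_lines = [
    (95, True, True), (93, True, False), (72, True, False),
    (45, False, False), (88, False, False), (97, True, True),
]

bins = defaultdict(lambda: {"n": 0, "examined": 0, "violative": 0})
for score, examined, violative in entry_lines:
    low = (score - 1) // 10 * 10 + 1  # e.g., scores 91-100 map to bin 91
    bins[low]["n"] += 1
    bins[low]["examined"] += examined
    bins[low]["violative"] += examined and violative

for low in sorted(bins):
    b = bins[low]
    exam_rate = 100 * b["examined"] / b["n"]
    viol_rate = 100 * b["violative"] / max(b["examined"], 1)
    print(f"scores {low}-{low + 9}: examined {exam_rate:.0f}%, "
          f"violative {viol_rate:.0f}% of examined")
```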
Our analysis of these data showed that in general, the higher the PREDICT risk score, the more often entry lines were examined and the more often they were found violative. As tables 1, 2, and 3 show, for fiscal years 2012 through 2014, entry lines that received higher PREDICT scores generally were more often selected for a field examination, for a label examination, or for sampling, and entry lines with higher PREDICT scores were more often found violative. In all 3 years we reviewed, items with the highest PREDICT scores—those from 91 to 100—were selected for examination more often than items with lower PREDICT scores. For example, in fiscal year 2012, FDA examined or sampled more than 39,000 entry lines with a PREDICT score from 91 to 100, or about 6.5 percent of all entry lines in this range, nearly double the percentage of entry lines examined in the next-highest 10-point risk score range and at least 6 times the percentage for entry lines with a score of 50 or less. The tables also show that in each of the 3 years we reviewed, the higher the PREDICT score, the higher the percentage of entry lines that were found to be violative. For example, in fiscal year 2014, FDA selected for examination about 45,000 of the nearly 1.3 million entry lines with PREDICT scores of 91 to 100. Of these, nearly 12 percent, or 5,363 line items, were found to be violative, almost double the violation rate found in the next-highest range of PREDICT scores. In May 2013, FDA completed an internal evaluation of PREDICT's effectiveness that examined five processes: work planning, examinations and sampling, entry review, rules management, and communication. To conduct this evaluation, FDA officials gathered information through a variety of methods, including analysis of entry line data and the entry review process, interviews with stakeholders from the various FDA centers, and surveys of PREDICT users in the field. Overall, the evaluation showed that PREDICT is helping to expedite the release of low-risk items and to identify violative imports. As a result of the internal evaluation, FDA developed a total of 24 recommendations: 23 for improving the five processes noted above and another recommendation concerning the development of performance metrics. The categories of recommendations and the number of recommendations in each category are as follows: work planning (4), examination and sampling (4), entry review (6), rules management (5), communication (4), and performance metrics (1). FDA prioritized each of its 24 recommendations according to the feasibility of near-term implementation and the impact on operations or public health, among other things. A recommendation was determined to have high feasibility if resources were available to implement it and the implementation plan was clear. A recommendation was determined to have high impact if, for example, many stakeholders cited the problem that the recommendation was intended to solve. As figure 6 shows, of the 24 recommendations, 6 were determined to be highly feasible, and 16 were determined to be of high impact or medium/high impact; 2 recommendations were determined to be both highly feasible and of high impact.
Examples of FDA’s recommendations and how they were prioritized include analyze and set specific “May Proceed” thresholds by commodity (designated as both highly feasible and of high impact), develop a system to test PREDICT rules (designated as highly feasible and of medium impact), and identify sources of delays associated with the sampling process (designated as of medium feasibility and medium impact). FDA has evaluated the 24 recommendations and has implemented many, but not all, of them. According to agency officials, the agency has implemented 15, partially implemented another 6, and not implemented 3 (see fig. 7). Of the 15 recommendations that were implemented, 13 were designated either as highly feasible or of high or medium/high impact. Both of the recommendations determined to be both highly feasible and of high impact were implemented, including the recommendation to analyze and set specific “May Proceed” thresholds by commodity. Of the 6 that were partially implemented—which includes the recommendation to develop a system to test PREDICT rules—5 were designated as either highly feasible or of high or medium/high impact. Finally, of the 3 that were not implemented—which includes the recommendation to identify sources of delays associated with the sampling process—1 was of low feasibility and high impact, 1 was of medium feasibility and high impact, and 1 was of medium feasibility and medium impact. Federal standards for internal control state that agencies are to ensure that the findings of audits and other reviews are promptly resolved. To that end, agencies are to complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. According to FDA officials, the reason for partially implementing or not implementing recommendations was often a lack of resources. For example, FDA officials said that a lack of resources was why they only partially implemented the recommendation to develop a system to test PREDICT rules and did not implement the recommendation to identify sources of delays associated with the sampling process. Agency officials agree that implementing the remaining recommendations would improve the effectiveness of PREDICT, but the agency has not yet established time frames for completing the implementation of the remaining recommendations. Establishing a practical timeline for implementing the remaining recommendations as resources become available would provide FDA with reasonable assurance that the improvements are made and that PREDICT remains an effective tool for screening imports. The volume of FDA-regulated imported food continues to grow, as does the need to ensure that such imports are safe. FDA physically examines about 1 percent of food shipment entry lines annually. The agency developed PREDICT to help target food shipments deemed higher risk and subject them to additional scrutiny. FDA’s assessment of PREDICT shows that the tool is generally working to focus FDA’s resources on the examination of food items determined to be of highest risk and expediting the release of lower-risk food items. PREDICT analyzes imported food entry lines using data from various sources, including FDA sources, other domestic sources, and foreign sources. Some of the domestic and foreign sources are open sources, and FDA uses such sources to obtain information about imported foods on an ad hoc basis. 
However, FDA does not have a documented process for identifying the type of open source data to collect, obtaining such data, and determining how PREDICT is to use the data. Without such a documented process, FDA does not have reasonable assurance that it will consistently obtain open source data for PREDICT in a regular and systematic manner. In addition, FDA’s May 2013 evaluation of PREDICT identified 24 recommendations to improve the tool, and FDA has fully implemented 15 of these recommendations. FDA officials explained that resource constraints have often limited their ability to fully implement the remaining recommendations. Agency officials agree that implementing these recommendations would improve the effectiveness of PREDICT, but the agency has not yet established time frames for doing so. Establishing a practical timeline for implementing the remaining recommendations would help FDA ensure that the improvements are made and that PREDICT remains an effective tool for helping to prevent high-risk foods from entering the United States. To further enhance FDA’s PREDICT tool and its ability to ensure the safety of imported food, we recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to take the following two actions: (1) document the process for identifying the type of open source data to collect, obtaining such data, and determining how PREDICT is to use the data; and (2) establish a timeline for implementing, as resources become available, the remaining recommendations from FDA’s 2013 evaluation of PREDICT. We provided the departments of Commerce, Health and Human Services, and Homeland Security a draft of this report for their review and comment. In an e-mail, the Department of Commerce stated that it had no comments. The Department of Health and Human Services generally concurred with the recommendations and provided written comments on the draft, which are summarized below and presented in their entirety in appendix II of this report. The Department of Homeland Security provided technical comments, which we incorporated as appropriate. In its written comments, the Department of Health and Human Services concurred with our first recommendation and concurred in part with the second. For our first recommendation, HHS stated that ORA plans to work with appropriate units across the agency to develop and document a formal process to identify the type of information to collect, how to obtain the information, and how PREDICT may use it. For our second recommendation, HHS stated that it has recently fully implemented one of the internal recommendations from the 2013 internal evaluation that was previously designated as “partially implemented.” However, because HHS did not provide evidence of fully implementing the recommendation, we were unable to verify such implementation and made no change to the report. The department also stated that it is supportive of the remaining recommendations from its 2013 internal evaluation but noted that establishing an implementation timeline is based on consideration of the feasibility and impact relative to available FDA resources and other FDA priorities, such as FSMA. The department indicated that the goals of the unimplemented recommendations may ultimately be achieved via these other FDA initiatives.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Health and Human Services, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. You asked us to examine how the Food and Drug Administration (FDA) is using Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT) to protect the safety of the U.S. food supply. Our objectives in this report were to examine (1) the data used by PREDICT and how PREDICT analyzes these data to identify high-risk food shipments for examination; (2) how implementation of the FDA Food Safety Modernization Act (FSMA) will affect PREDICT; and (3) the extent to which FDA has assessed the effectiveness of PREDICT and used the results of those assessments to improve the tool. To determine the data used by PREDICT and how PREDICT analyzes these data to identify high-risk food shipments for examination, we reviewed documents that described the data used by PREDICT and interviewed FDA officials. We assessed the data sources, data quality controls, and the methodology used by PREDICT and determined that they were sufficiently reliable to support accurate descriptions of the system inputs, processes, and resulting risk scores. We also assessed the data used by PREDICT to ensure that they were sufficiently reliable to describe factors considered by PREDICT and system rules used to generate risk scores. We interviewed FDA officials in the Office of International Programs to understand how FDA coordinates with foreign governments to obtain data for PREDICT. We also reviewed how the agency identifies, obtains, and uses open source information and compared this process to criteria for documentation found in the federal standards for internal control. To determine the extent to which the implementation of FSMA will affect PREDICT, we reviewed FSMA, FDA’s rules implementing FSMA, and FDA planning documents and interviewed FDA officials. To determine the extent to which FDA has assessed the effectiveness of PREDICT and used the results of those assessments to improve the tool, we interviewed FDA officials about ongoing monitoring of PREDICT, reviewed the 2013 evaluation that FDA had conducted on PREDICT, and assessed the extent to which FDA has implemented the recommendations from the evaluation. We then compared FDA’s actions to the criteria found in the federal standards for internal control regarding the findings of audits and other reviews. In addition, we examined data provided by FDA on PREDICT risk scores and violation rates from fiscal years 2012 through 2014, the most recent years for which data were available, and determined that the data were sufficiently reliable for our purposes, as discussed above. To inform all three objectives, we conducted site visits at four FDA and U.S. Customs and Border Protection (CBP) facilities at ports of entry in Baltimore, Maryland; Otay Mesa in San Diego, California; Long Beach, California; and Los Angeles, California.
We selected these sites to include air, ship, and truck ports of entry; varied geographic locations; a variety of food products entering through the ports; and proximity to an FDA office. The information we obtained at these sites is not generalizable to all facilities and ports of entry. In addition, we interviewed officials from CBP and the National Marine Fisheries Service to understand how FDA coordinates with other federal agencies to protect the safety of the U.S. food supply. We also interviewed officials from a number of stakeholder organizations that are affected by PREDICT or that contributed to PREDICT’s development. We conducted this performance audit from January 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Anne K. Johnson (Assistant Director), Kevin Bray, Ellen Fried, Steven Putansu, Gloria Ross, Stuart Ryba, Sara Sullivan, Vasiliki Theodoropoulos, and Karen Villafana made key contributions to this report.
Imported food makes up a substantial and growing portion of the U.S. food supply. FDA is responsible for oversight of more than 80 percent of the U.S. food supply. However, because the volume of imported food is so high, FDA physically examines only about 1 percent of imported food annually. In 2011, FDA implemented PREDICT, a computerized tool intended to improve FDA's targeting of imports for examination by estimating the risk of imported products. GAO was asked to review how FDA is using PREDICT to protect the U.S. food supply. This report examines (1) the data used by PREDICT and how PREDICT analyzes these data to identify high-risk food shipments for examination, (2) how implementation of FSMA will affect PREDICT, and (3) the extent to which FDA has assessed the effectiveness of PREDICT and used the assessments to improve the tool. To address these issues, GAO analyzed FDA documents, interviewed FDA officials, and analyzed data from fiscal years 2012 through 2014. The Food and Drug Administration's (FDA) Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT) tool uses a variety of data and analyzes these data by applying rules—conditional statements that tell PREDICT how to react when encountering particular information—to generate risk scores for imported food. Many of the data used by PREDICT come from internal FDA sources, such as FDA databases. PREDICT also uses data from sources outside of FDA, such as other federal agencies, states, and foreign governments. Some of the data are open source data—information that is publicly available, such as information from newspapers and websites. FDA's Office of Regulatory Affairs (ORA) relies on other FDA offices and federal agencies to provide open source data for PREDICT, but ORA does not have a documented process for identifying, obtaining, and using the data, relying instead on an ad hoc process. Federal standards for internal control call for agencies to document internal controls. Without a documented process for identifying, obtaining, and using open source data, FDA does not have reasonable assurance that ORA will consistently identify, obtain, and use such data for PREDICT. The implementation of the FDA Food Safety Modernization Act (FSMA), enacted in 2011, will provide PREDICT with additional data for estimating the risk of imported food. FDA identified FSMA provisions likely to generate data, including the Foreign Supplier Verification Program (FSVP), which requires importers to verify that their foreign suppliers use processes and procedures that provide the same public health protection as applicable U.S. requirements. Because FSVP and other FSMA-related programs are still new and not yet fully implemented, the details of how PREDICT will use the data have not been worked out. However, according to FDA officials, the data will be useful. For example, FSVP data will identify suppliers that are not in compliance with standards, and PREDICT will use those data in assigning risk scores to imports from those suppliers. FDA has assessed the effectiveness of PREDICT by monitoring key data and by conducting an internal evaluation of the system, and it has implemented many, but not all, of its recommendations from the evaluation. GAO's analysis of FDA data from fiscal years 2012 through 2014 shows that in general, PREDICT is working as intended: imported food with higher risk scores is more likely to be physically examined and to be found in violation of food safety standards or labeling requirements.
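The rules mechanism described above, in which conditional statements tell PREDICT how to react when encountering particular information, can be sketched briefly in code. Every rule, field name, and weight below is hypothetical and invented for illustration; FDA's actual PREDICT rules are not public in this form.

```python
# Each rule pairs a predicate with a score adjustment: a conditional statement
# that fires when an entry line matches, nudging the risk score up or down.
# All rules, field names, and weights here are hypothetical.
RULES = [
    (lambda line: line["supplier_past_violations"] > 0, +25),
    (lambda line: line["product_category"] == "seafood", +10),
    (lambda line: line["foreign_inspection_passed"], -15),
]

def predict_score(line, base=50):
    """Apply every matching rule to a base score and clamp to the 0-100 range."""
    score = base
    for predicate, adjustment in RULES:
        if predicate(line):
            score += adjustment
    return max(0, min(100, score))

# A hypothetical entry line: a supplier with past violations shipping seafood
# and no passing foreign inspection lands in a high-risk score band.
entry_line = {
    "supplier_past_violations": 2,
    "product_category": "seafood",
    "foreign_inspection_passed": False,
}
print(predict_score(entry_line))  # 50 + 25 + 10 = 85
```

Under this kind of design, new data sources (such as the FSVP compliance data noted above) can be folded in simply by adding rules that reference the new fields, without changing the scoring engine itself.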
In May 2013, FDA completed an evaluation of PREDICT that examined five key processes. As a result of the evaluation, FDA developed 24 recommendations for improving PREDICT and prioritized these recommendations based on feasibility and impact. According to FDA, the agency has fully implemented 15, partially implemented 6, and not implemented 3 of the recommendations. FDA officials said that the agency has not fully implemented all recommendations because of a lack of resources. However, federal standards for internal control specify that agencies are to ensure that the findings of reviews are promptly resolved. To that end, agencies are to complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention. Establishing a timeline for implementing the remaining recommendations as resources become available would help ensure that PREDICT remains an effective tool for screening imported food. GAO recommends that FDA take two actions to improve the effectiveness of PREDICT: (1) document the process by which FDA is to identify, obtain, and use open source data; and (2) establish a timeline for implementing the remaining recommendations from FDA's 2013 evaluation of PREDICT. FDA generally agreed with GAO's recommendations.
Under the PPACA MLR requirements, private insurers must report annually to CMS their MLRs by each state and insurance market in which they operate, as well as the dollar amounts for the components that make up their MLRs, such as premiums. Insurers must pay rebates when their MLRs do not meet or exceed the minimum applicable PPACA MLR standards. Agents and brokers sell insurers’ plans to individuals and employers and assist them in various ways. PPACA requires that private insurers offering group or individual health insurance coverage report annually on the various components that are used to calculate their PPACA MLRs, and each of these components must be reported by state and by market. Private insurers first reported this information in 2012, based on their 2011 experience. The markets include large group, small group, and individual. Large and small group employers are defined by the number of their employees. Prior to PPACA, a small employer was defined in federal law as having a maximum of 50 employees. From the time PPACA was passed in 2010 until 2016, PPACA gave states the option of continuing to define a small group employer as having 50 or fewer employees, but starting in 2016, states must define small employers as having from 1 to 100 employees. The individual market includes policies sold by insurers directly to individuals. Insurers report their MLR data to CMS by entering information on a standardized form that includes multiple data fields that make up the MLR and rebate formulas. The MLR data that insurers report to CMS include the following components, as required by law. Medical claims. These include claims paid and incurred for clinical services and supplies provided to enrollees by physicians and other clinical providers. Expenses for quality improvement (QI) activities. These include expenses for activities that are designed to increase the likelihood of desired health outcomes in ways that can be objectively measured. The activities must be primarily designed to (1) improve health outcomes; (2) prevent hospital readmissions; (3) improve patient safety, reduce medical errors, and lower infection and mortality rates; or (4) implement, promote, and increase wellness and health activities. Insurers are also allowed to include expenses for health information technology required to accomplish these activities as well as a percentage of their expenses for converting disease classification codes. Premiums. These include the sum of all funds paid by an enrollee and employer, if applicable, as a condition of receiving coverage from the insurer. These also include any fees or other contributions associated with the health plan. Federal and state taxes and licensing or regulatory fees. These include federal income taxes, assessments, state insurance, premium and other taxes, and regulatory authority licenses and fees. Federal income taxes on investment income and capital gains are excluded. Non-claims costs. These include all other insurer expenses—those beyond medical claims, expenses for QI activities, and federal and state taxes and licensing or regulatory fees.
CMS defines the following categories of non-claims costs: (1) agents’ and brokers’ fees and commissions; (2) cost containment expenses, which reduce the number of health services provided or the costs of such services, but are not related to an activity to improve health care quality; (3) claims adjustment expenses, such as office maintenance and supplies costs, not classified as cost containment expenses; (4) salaries and benefits that insurers pay to their employees who sell their plans; (5) other taxes that may not be excluded from premium revenue; (6) other general and administrative expenses, such as salaries and advertising; and (7) community benefit expenditures, which include expenses for activities such as health educational campaigns that are available broadly to the public. The remaining amount of premiums that an insurer does not spend on the components of medical claims, QI activities, taxes and fees, and non-claims costs will be referred to as the insurer’s “premium surplus” in this report. Premium surplus includes profit and other reserved capital. The PPACA MLR is generally calculated by dividing (a) the sum of an insurer’s medical claims and expenses for QI activities (the formula numerator) by (b) the insurer’s premiums, after excluding from them the amount of the insurer’s federal and state taxes and licensing or regulatory fees (the formula denominator). (See fig. 1.) Some insurers, such as those with a small number of enrollees, are permitted certain adjustments to their MLRs. These adjustments are referred to as credibility adjustments, and they are added to, and thus increase, the insurer’s MLR. Credibility adjustments are provided to address the unreliability associated with calculating an MLR based on a small number of enrollees. Insurers with a small number of enrollees calculated their MLRs for 2012 based on their 2011 and 2012 experience combined. All insurers will calculate their MLRs for their experience in 2013 and subsequent years based on data from a 3-year period. That is, insurers will add their data for the year for which the MLR is being calculated to their MLR data for the 2 prior years. For each insurer, separate MLRs are calculated for each state and market combination in which it does business, and each MLR is used to determine whether an insurer must pay rebates. Insurers must meet a minimum PPACA MLR standard, generally 80 percent for the individual and small group markets and 85 percent for the large group market, with some exceptions. Specifically, the applicable PPACA minimum MLR standard is based on one of the following: 85 percent in the large group market, 80 percent in the small group market, and 80 percent in the individual market; a higher MLR standard if specified by law in the state in which the insurer operates; or a Department of Health and Human Services (HHS)-approved, adjusted MLR standard for a particular state’s individual market. When an insurer’s PPACA MLR is lower than the applicable PPACA MLR standard, the insurer must pay a rebate. The rebate amount is based on the PPACA MLR rebate formula, as shown in figure 2. The difference between the applicable PPACA MLR standard and the insurer’s MLR is calculated, and this difference is multiplied by the insurer’s premiums, after federal and state taxes and licensing or regulatory fees are removed. There are different ways in which rebates can be paid out by insurers that depend, in part, on whether rebates are associated with individual or group market plans.
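For readers who want to see the arithmetic, the MLR formula (fig. 1) and the rebate formula (fig. 2) can be sketched in code. The sketch below is illustrative only: the function names and dollar amounts are invented, credibility adjustments and the 3-year averaging described above are omitted, and the optional other_exclusions parameter reflects one plausible reading, not a confirmed description, of the recalculation discussed later in this report, in which agents’ and brokers’ commissions are deducted from the denominator in the same way as taxes and fees.

```python
def ppaca_mlr(claims, qi_expenses, premiums, taxes_and_fees, other_exclusions=0.0):
    """PPACA MLR per figure 1: (claims + QI expenses) / (premiums - taxes and fees).

    other_exclusions models the hypothetical recalculation in which agents' and
    brokers' commissions are also deducted from the denominator. Credibility
    adjustments and 3-year averaging are omitted for brevity.
    """
    return (claims + qi_expenses) / (premiums - taxes_and_fees - other_exclusions)

def ppaca_rebate(mlr, standard, premiums, taxes_and_fees, other_exclusions=0.0):
    """Rebate per figure 2: (standard - MLR) x net premiums, owed only if MLR < standard."""
    shortfall = standard - mlr
    if shortfall <= 0:
        return 0.0  # the insurer met or exceeded the standard; no rebate is owed
    return shortfall * (premiums - taxes_and_fees - other_exclusions)

# Invented individual-market insurer, subject to the 80 percent standard.
premiums, taxes, claims, qi, commissions = 100.0e6, 5.0e6, 70.0e6, 1.0e6, 4.0e6

mlr = ppaca_mlr(claims, qi, premiums, taxes)        # 71/95, about 74.7 percent
rebate = ppaca_rebate(mlr, 0.80, premiums, taxes)   # about $5.0 million

# Recalculated with commissions also excluded from the denominator: the MLR
# rises (71/91, about 78.0 percent), so the rebate owed shrinks.
mlr_excl = ppaca_mlr(claims, qi, premiums, taxes, commissions)
rebate_excl = ppaca_rebate(mlr_excl, 0.80, premiums, taxes, commissions)
print(f"{mlr:.1%} -> ${rebate / 1e6:.1f}M; {mlr_excl:.1%} -> ${rebate_excl / 1e6:.1f}M")
```

With these invented figures, excluding commissions raises the MLR from about 74.7 percent to about 78.0 percent and shrinks the rebate owed, which is consistent with the direction, though not necessarily the magnitude, of the roughly 75 percent reduction reported later in this report.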
In general, insurers can choose to provide rebates in the form of a lump-sum payment or as a premium credit for the following MLR year. Insurers in the individual market must provide rebates to their enrollees, while insurers in the small and large group markets may meet this obligation by providing rebates to group policyholders, for example, employers. In turn, group policyholders are responsible for allocating the amount of rebate that is proportionate to the total amount of premium paid by enrollees and may retain part of the rebate based on the amount of premium that they contributed. Agents and brokers sell plans for insurers and perform a variety of functions on behalf of individuals and employers. According to the Bureau of Labor Statistics, in 2012 there were approximately 337,000 jobs held by insurance sales agents, which include agents and brokers working independently as well as those who are employed by an insurer. Agents and brokers provide assistance to individuals and employers in choosing and enrolling in plans. For example, they may assess an individual’s insurance needs and describe the characteristics of different plans that best meet those needs. Agents and brokers may also provide assistance after an individual or employee has enrolled in a plan, for example, by helping enrollees communicate with health plans in trying to resolve disputed medical claims or adding a new family member to a current plan. Insurers that use agents and brokers to sell their plans typically pay them based on a percentage of the plan’s premium or as a flat fee, for example, determined by the number of enrollees in the plan. More than three-quarters of insurers met or exceeded the minimum PPACA MLR standards in 2011 and in 2012, the median PPACA MLRs for all insurers were about 88 percent in each year, and there was variation across insurance markets. Insurers’ spending on enrollees’ medical claims and non-claims costs as a percentage of premiums varied across insurance markets in 2011 and 2012. Most insurers met or exceeded the PPACA MLR standards established for the markets and state in which they operated, but group market insurers were more likely to meet or exceed the standards than individual market insurers. In 2011, about 76 percent of insurers met or exceeded the minimum PPACA MLR standards and, in 2012, 79 percent of insurers met or exceeded the standards. As an example of variation across the markets, in 2012 about 86 percent of insurers in the large group market and 81 percent of insurers in the small group market met or exceeded the MLR standards, compared to 70 percent of individual market insurers. The median PPACA MLRs in 2011 and in 2012 among all insurers were about 88 percent, and the median for the large group market was higher than that of the small group and individual markets. For example, in 2012 the median PPACA MLR for insurers in the large group market was about 91 percent compared to 86 percent in the individual market and 85 percent in the small group market. (See table 1.) We observed that insurers’ median PPACA MLRs slightly increased from 2011 to 2012 and the percent of all insurers meeting or exceeding the standards increased by about 3 percentage points. However, these 2 years of data may not reflect future patterns in MLRs. (See app. I for a listing of the PPACA MLRs in each state for 2011 and 2012.) In 2011 and 2012, the percentage of net premiums that insurers spent on their enrollees’ medical claims varied across insurance markets.
Specifically, insurers in the large group market spent a higher percentage of their net premiums on medical claims in both years compared to insurers in the small group and individual markets. For example, in 2012 insurers in the large group market spent about 89 percent of their net premiums on medical claims compared to the 85 percent that individual market insurers spent and the 84 percent that small group market insurers spent. Our analysis of the data showed that for the other components of the MLR formula—non-claims costs, premium surplus, and QI expenses—the variation across markets was more mixed. We found that insurers’ spending on non-claims costs as a percent of their net premiums varied by insurance market, with insurers in the individual and small group markets spending more than insurers in the large group market on non-claims costs in 2011 and in 2012. For example, in 2012 insurers in the individual market spent about 16 percent of their net premiums on non-claims costs compared to the 7 percent that large group insurers spent. With regard to premium surplus, our analysis showed that there was also variation across the insurance markets in 2011 and in 2012, with insurers in the individual market having less premium surplus than insurers in both the small and large group markets. An insurer’s premium surplus includes profit and other reserved capital but does not account for any PPACA MLR rebates the insurer may have to pay to enrollees. Insurers’ spending on QI expenses did not vary across markets, as insurers in each market spent about 1 percent of net premiums on QI expenses in 2011 and in 2012. (See table 2.) While our analysis showed that from 2011 to 2012 there were some shifts in the percentage of spending on these different components among the different markets and among all insurers, these 2 years of data may not reflect future patterns in spending. (See app. II for a listing of insurers’ spending on the MLR components by state for 2011 and 2012.) Three of the six non-claims cost categories generally comprised the largest share of insurers’ spending, and there was some variation across the markets in spending on each of these three categories. The three categories that made up the largest share of insurers’ spending on non-claims costs were agents’ and brokers’ fees and commissions, other general expenses, and other claims adjustment expenses. We found that insurers’ spending on these three categories of non-claims costs varied by market. Insurers in the small group market spent a higher share on agents’ and brokers’ fees and commissions than insurers in the individual and large group markets. For example, in 2012 small group insurers spent about 42 percent of their total non-claims costs on agents’ and brokers’ fees and commissions compared to the 23 percent that large group insurers spent. In comparison, insurers in the individual and large group markets spent a higher share of non-claims costs than insurers in the small group market on other general expenses, such as salaries, rent, and travel; and on other claims adjustment expenses, including office and computer maintenance. (See table 3.) Insurers paid about $1.1 billion in total rebates to enrollees and policyholders who paid premiums in 2011, the first year that insurers were subject to the PPACA MLR requirements, and about $520 million in rebates in 2012.
These amounts would each have decreased by about 75 percent had agents’ and brokers’ fees and commissions been excluded from the MLR and rebate calculations, assuming insurers made no other changes that could affect their MLRs. Of the $1.1 billion in total rebates that insurers paid in 2011, insurers in the large group market paid 37 percent of this total, or $405 million in rebates, the largest amount across the three insurance markets. Of the $520 million in total rebates that insurers paid in 2012, insurers in the small group market paid the largest amount in rebates (about $207 million). (See app. III for a listing of the rebate amounts that insurers paid in each state for 2011 and 2012.) We calculated the average rebate amount by dividing the total amount of rebates insurers paid by the total number of individuals (including dependents) enrolled in their plans. (See table 4.) However, these 2 years of data may not reflect future rebate patterns. Our analysis of the data showed that rebates would have been reduced by about 75 percent if agents’ and brokers’ fees and commissions were excluded from the MLR and rebate calculations. Specifically, we found that the rebates paid by insurers to enrollees and policyholders who paid premiums in 2011 would have fallen from $1.1 billion to about $272 million, and in 2012 would have fallen from $520 million to about $135 million. There was variation in the impact of the recalculated MLRs across the three markets. For example, in both years, the differences between the actual and recalculated rebate amounts were greater, on a percentage basis, in the small group market compared to the individual market. (See table 5.) These rebate calculations are based on the assumption that insurers did not make other changes during this time that would have affected their MLRs. However, if the formula had been different, insurers might have made different business decisions in those years. (See app. IV for a listing of the rebate amounts that insurers would have paid with agents’ and brokers’ commissions and fees excluded from the MLRs in each state for 2011 and 2012.) All eight insurers that we interviewed reported that they increased their premium rates since 2011 due to a variety of factors, and most (five of the eight) reported that the factors were largely unrelated to PPACA MLR requirements. Key factors cited for making premium changes included trends in medical claims, competition with other insurers, the PPACA requirements that insurers offer their plans on a guaranteed-issue basis as well as provide essential health benefits in their plans, and the per-capita fees associated with PPACA’s reinsurance program. For example, one insurer that has increased premiums in the individual market since 2011 stated that its increased premiums were due in part to the increased costs that the insurer told us were associated with providing coverage on a guaranteed-issue basis, as mandated under PPACA. Three of the eight insurers reported that the PPACA MLR requirements were one factor among a variety of factors that influenced their decisions regarding premium rates, and two of these insurers told us that the MLR requirements have generally moderated their premium increases. For example, one insurer that has increased its premium rates in the individual market since 2011 stated that without the PPACA MLR requirements in place the insurer would have likely increased rates further.
Another insurer explained that if it does not meet the PPACA MLR standards in its planning, it will adjust its premium rate increases to avoid the associated expense and administrative work required to issue rebates to enrollees. (We asked insurers about their business practices from 2011, when the PPACA MLR requirements took effect, to the present; this time period is broader than the period for which we analyzed MLR data.) Four of the eight insurers we interviewed stated that they had recently made changes to their payments to agents and brokers, in part to reduce their non-claims costs as well as offer more competitive premium rates for their plans. All eight insurers we interviewed told us that the PPACA MLR requirements have not affected where they do business and have had no effect, or a very limited effect, on their spending on QI activities since 2011. Two of the eight insurers we interviewed stated that they had exited certain insurance markets since the PPACA MLR requirements began, but they did not attribute those decisions to the MLR requirements. For example, one insurer that operated in all 50 states told us that it left the individual market in three states in 2014, in part because of its concerns in each market over maintaining a sufficient network of providers and being able to provide affordable coverage. The insurer further explained that a low number of enrollees in each market contributed to its decisions to no longer operate in the three states. None of the insurers we interviewed reported that the PPACA MLR requirements have generally influenced their spending on QI activities since 2011, and all eight insurers reported other influencing factors, such as customer and employer demand for QI programs, competition among insurers, and the goal of improving enrollees’ health outcomes, which in turn could lower the use and costs of health care services. Five of the eight insurers we interviewed also commented that the 2013 PPACA MLR calculation, which is based on a 3-year period including 2013 and the prior 2 years, will likely reduce some of the effects of year-to-year variability in enrollees’ medical claims on the MLR formula. Variability in medical claims occurs when actual and expected medical claims differ, and one source of such variability is the effect of large claims. One insurer, which paid rebates in 2011, noted that it would not have owed rebates had the MLR formula been based on 3 years of insurers’ experience. In comparison, another insurer told us that it will likely owe rebates for 2013 because the MLR data it reported in 2011 and 2012 will be averaged into 2013. The insurer added that over time, however, it believes the 3-year MLR formula should be beneficial for the insurer by reducing the likelihood of owing rebates. We provided a draft of this product to HHS for comment. HHS responded that it had no general or technical comments. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This appendix presents information on insurers’ median PPACA medical loss ratios by state and insurance market for 2011 (see table 6) and for 2012 (see table 7).
This appendix presents information on insurers’ spending as a percent of their net premiums, by state for all markets combined for 2011 (see table 8) and for 2012 (see table 9). This appendix presents information on the total amount of PPACA medical loss ratio rebates that insurers paid to enrollees and policyholders who paid premiums, by state and insurance market for 2011 (see table 10) and for 2012 (see table 11). This appendix presents information on the amount of rebates that insurers would have paid to enrollees and policyholders who paid premiums with agents’ and brokers’ commissions and fees excluded from the PPACA medical loss ratio and rebate calculations (absent other changes in business practices), compared to the actual amounts that insurers paid, by state for 2011 (see table 12) and for 2012 (see table 13). In addition to the contact named above, Gerardine Brennan, Assistant Director; George Bogart; Romonda McKinney Bumpus; Pamela Dooley; Julianne Flowers; and Laurie Pachter made key contributions to this report.
Private insurers are required to meet minimum PPACA MLR standards—expressed as the percent of premium dollars spent on patient care and related activities—and beginning in 2011 they must pay rebates back to enrollees and policyholders who paid premiums if they do not meet these standards. GAO was asked to review the effects of the PPACA MLR requirements on insurers and enrollees and how rebates would change if agent and broker payments were excluded from the MLR formula. This report examines (1) the extent to which insurers met the PPACA MLR standards, and how much they spent on the MLR components of claims, quality improvement activities, and non-claims costs; (2) the amount of rebates insurers paid and how this amount would have changed with agents' and brokers' commissions and fees excluded from the MLR; and (3) the perspectives of insurers on the effects of the MLR requirements on their business practices. To do this work, GAO analyzed the MLR data that insurers reported to CMS for 2011 and 2012 (the most recent data available) at the national level for each insurance market—large group, small group, and individual. GAO also interviewed eight insurers, selected based on variation in their size, concentration of business in the individual market, geography, whether they paid rebates, and profit status. In 2012 the number of enrollees covered by these insurers ranged from about 70,000 to 7 million. GAO's finding on insurers' perspectives is limited to those insurers interviewed and is not representative of the perspectives across all insurers reporting MLR data. The Patient Protection and Affordable Care Act (PPACA) established federal minimum medical loss ratio (MLR) standards for the percentage of premiums private insurers must spend on their enrollees' medical care claims and activities to improve health care quality, as opposed to what they spend on administrative (“non-claims”) costs. Insurers report to the Centers for Medicare & Medicaid Services (CMS) annually on their PPACA MLRs. More than three-quarters of insurers met or exceeded the standards in 2011 and in 2012, and the median MLRs among all insurers were about 88 percent. Insurers' MLRs and their spending on claims and non-claims costs varied across different insurance markets. Specifically, insurers in the large group market had higher median MLRs and spent a higher share of their premiums on enrollees' claims and less on non-claims costs, compared to insurers in the individual and small group markets. Insurers that did not meet or exceed the PPACA MLR standards in 2011 and 2012 paid rebates in the amounts of $1.1 billion and $520 million (respectively) back to enrollees and policyholders who paid premiums in those years. These amounts would have decreased by about 75 percent had the commissions and fees insurers paid to agents and brokers been excluded from the MLRs. Agents and brokers sell insurance products and provide various services to consumers and groups related to their insurance needs, and the commissions and fees charged for these services are included in the MLRs. Insurers in the large group market paid the highest rebate amount ($405 million) across insurance markets in 2011, and insurers in the small group market paid the highest amount ($207 million) in 2012. Insurers in the individual market were more likely to pay rebates than insurers in the small and large group markets.
GAO found that rebates would have fallen from $1.1 billion to $272 million in 2011 if the commissions and fees insurers paid to agents and brokers had been excluded from the MLRs, and rebates would have similarly fallen from $520 million to $135 million in 2012. GAO's calculations assumed that insurers did not make other changes in their business practices in response to a different method for calculating MLRs. GAO found that most of the eight insurers it interviewed reported that factors other than the PPACA MLR requirements affected their business practices since 2011. All eight insurers reported that they increased their premium rates since 2011 and that they based these decisions on a variety of factors, such as trends in medical care claims, competition with other insurers, and other requirements. Three of the eight insurers stated that the MLR requirements were one among several factors that influenced their decisions about premium rates. Four of the eight insurers stated they had recently made changes to their payments to agents and brokers, and one reported the MLR requirements were a primary driver behind its business decision. All eight insurers GAO interviewed stated that the MLR requirements did not affect their decisions to stop offering health plans in certain markets and have had no effect or a very limited effect on their spending on quality improvement activities. GAO provided a draft of this product to the Department of Health and Human Services (HHS) for comment. HHS responded that it had no general or technical comments.
Land use and institutional controls are usually linked and should be considered together during the investigation phase of cleanup, according to EPA guidance. As a site moves through the early stages of the cleanup process, site managers should develop assumptions about reasonably anticipated future land uses and consider whether institutional controls will be needed to maintain these uses over time. EPA guidance states that, if remediation leaves waste in place that would not permit “unrestricted use” of the site and “unlimited exposure” to residual contamination, use of institutional controls should be considered to ensure protection against unacceptable exposure to the contamination left in place. Even sites that are appropriate for residential use after the cleanup process is complete may require institutional controls if they do not allow for unrestricted use and unlimited exposure. For example, residential properties may be located over a contaminated groundwater plume where the properties are not the source of contamination. In such a situation, well drilling restrictions put in place to limit the use of groundwater may serve as appropriate institutional controls. EPA recognizes four types of institutional controls—governmental controls, proprietary controls, enforcement and permit tools with institutional control components, and informational devices: Governmental controls use the regulatory authority of a government entity to impose restrictions. Generally, EPA must depend on state or local governments to establish these controls. Examples of governmental controls include zoning restrictions, local ordinances, and groundwater use restrictions. Proprietary controls involve legal instruments placed in the chain of title of the site or property, such as easements and covenants. Enforcement and permit tools with institutional control components are issued or negotiated to compel the site owner to limit certain site activities. These controls, which can be enforced by EPA under Superfund and RCRA legislation, include administrative orders and consent decrees. Informational devices warn the public of risks associated with using contaminated property. Examples of informational devices are deed notices, state registries of hazardous waste sites, and health advisories. Approximately 3,800 RCRA facilities have corrective action under way or will require corrective action. EPA refers to these facilities as its “corrective action workload.” Under the Government Performance and Results Act of 1993 (GPRA), which requires agencies to assess progress toward achieving the results expected from their major functions, EPA developed short-term goals for 1,714 of these facilities, referred to as the “GPRA baseline.” According to EPA’s GPRA goals, by 2005, EPA and the states will verify and document that 95 percent of the baseline facilities have “current human exposures under control” and 70 percent have “migration of contaminated groundwater under control.” According to EPA, over the last 10 years, the agency has focused increased attention on understanding and overcoming the complexities and challenges associated with using institutional controls. In recent years, this experience has led EPA to improve its approach to these controls.
For example, the agency has hosted numerous meetings and workshops to identify institutional control issues and develop solutions; developed and administered national training programs for federal, state, tribal, and local agencies; developed a national strategy to help ensure that controls are successfully implemented; and established a national management advisory group to work on high-priority policy issues. Furthermore, in addition to issuing guidance in 2000 on evaluating and selecting institutional controls, the agency is currently developing four additional guidance documents covering specific implementation, monitoring, and enforcement issues. These improvements have been targeted at the full life cycle of institutional controls, from identification, evaluation, and selection to implementation, monitoring, and enforcement. In reviewing selected Superfund and RCRA sites in three different time periods or stages of cleanup, we found an apparent increase in the use of institutional controls over time. Two of the 4 older Superfund sites and 6 of the 8 older RCRA facilities we reviewed where cleanup was completed but residual contamination remained had no institutional controls in place. In contrast, of the 32 Superfund and 4 RCRA sites we reviewed where cleanup was completed during fiscal years 2001 through 2003 but residual contamination remained, 28 and 4, respectively, had one or more institutional controls in place. However, because EPA’s guidance is vague and does not specify in which cases controls are necessary, it is unclear whether any of the sites we reviewed were inconsistent with the agency’s policy. When considering recent remedy decisions in both programs, we found that, of the 112 Superfund and 23 RCRA remedy decision document sets we reviewed that were issued during fiscal years 2001 through 2003, most documents called for some type of institutional control to prevent or limit exposure to residual contamination. Moreover, although EPA guidance directs staff to include four specific factors in documenting the institutional controls to be implemented at a site, the documents we reviewed frequently included no more than two of these factors, and the language was often vague. In reviewing selected Superfund and RCRA sites in three different time periods or stages of cleanup, we found an apparent increase in the use of institutional controls over time. The proportion of Superfund sites with institutional controls in place increased from 10 percent for those deleted during fiscal years 1991 through 1993 to 53 percent for those deleted during fiscal years 2001 through 2003. The proportion of RCRA facilities with institutional controls in place increased from 5 percent for those sites we examined where corrective action was terminated prior to fiscal year 2001 to 13 percent for those sites where corrective action was terminated during fiscal years 2001 through 2003. Moreover, 83 percent of the Superfund and 65 percent of the RCRA remedy decision documents finalized during fiscal years 2001 through 2003 indicated the need for some sort of institutional controls, an increase over the proportion of completed sites with controls. (See tables 1 and 2.) While EPA recognizes that the use of institutional controls is becoming increasingly common, the agency points out that this should not be interpreted to mean that sites are being less thoroughly cleaned up.
The EPA project manager for 1 Superfund site deleted with residual contamination and no institutional controls told us that if the site were being remediated today, EPA might consider institutional controls to restrict groundwater use. In addition, EPA is now considering institutional controls for a site that was cleaned up to a level allowing for unrestricted use and unlimited exposure at the time of remediation. The levels of acceptable lead contamination have decreased since completion of this remedy, so the levels of contamination at the site may now exceed the new standards. Four of the 12 older Superfund and RCRA sites we reviewed where residual contamination remained had institutional controls in place. Waste was left in place after cleanup at 4 of the 20 Superfund sites that were deleted during fiscal years 1991 through 1993; as figure 1 shows, one-half of these sites had institutional controls in place. Similarly, of the 40 RCRA facilities we reviewed where corrective action was terminated before fiscal year 2001, 8 had residual waste after cleanup; institutional controls appeared to be in place at 2 of these facilities (see fig. 2). The most common type of institutional control in place at these older Superfund and RCRA sites was a covenant, in place at three of the sites; there was also a consent order and a conservation easement, as shown in figure 3. A covenant, as used in the institutional controls context, is a promise by a landowner to use or refrain from using the property in a certain manner. A consent order contains elements of both an administrative order (an order issued and enforced by EPA or states directly restricting the use of property) and a consent decree (in this context, a court order that implements the settlement of an enforcement case, which may restrict the use of the land by the settling party, such as prohibiting well drilling). A conservation easement, allowed by statutes adopted by some states, is established to preserve and protect property and natural resources. EPA guidance encourages the use of multiple controls—referred to as “layering”—stating that it is more effective than using only one institutional control. Controls were layered at only 1 of these 4 older sites. In contrast to sites where cleanup was completed in earlier years, 32 of the 36 Superfund and RCRA sites we reviewed where residual contamination remained after cleanup had one or more institutional controls in place. At most of the 53 Superfund sites deleted from the NPL during fiscal years 2001 through 2003, institutional controls were implemented if waste was left in place (see fig. 4). Furthermore, future controls were being considered at 2 of the sites where institutional controls were not originally planned. Of the 31 RCRA facilities we reviewed where corrective action was terminated during fiscal years 2001 through 2003, most corrective actions did not result in waste being left in place and, therefore, the facilities likely did not require institutional controls. As figure 5 shows, only 4 facilities had waste remaining, and all of these had institutional controls in place. The most common types of institutional controls in place at these Superfund and RCRA sites were covenants and consent decrees, followed by deed notices and easements (see fig. 6). Deed notices are informational documents filed in public land records, and these notices alert anyone searching the records to important information about the property.
Easements are property rights conveyed by landowners to other parties, giving them rights with regard to the owner’s land. Of the 28 Superfund sites with institutional controls, 17 included multiple controls, or layering, as encouraged by EPA guidance. One of the 4 RCRA facilities had multiple institutional controls. In total, there were 66 controls in place at the 32 sites, including 19 covenants, 12 consent decrees, and 8 deed notices. For both recently completed and older sites we reviewed, 6 of 36 Superfund sites and 6 of 12 RCRA sites with waste remaining did not have institutional controls in place. EPA site managers told us that the potentially responsible parties or property owners of several sites we reviewed had agreed to file a proprietary or informational control, such as a covenant or deed notice, to limit the use of the contaminated land or water. However, following our request for documents, EPA staff discovered that the controls had not been implemented. EPA is now working to implement institutional controls for some of these sites to ensure the protection of human health and the environment. Finally, at several sites we reviewed where contamination was left in place, the remedy decision documents did not call for institutional controls. Some of these sites were delegated to states for monitoring and possible future action. For example, in one case, groundwater contamination was contained as long as wells at a nearby plant continued to operate—the wells, which pump approximately 10 million gallons a day, provide protection by capturing contaminants from a former landfill on site before they migrate into the off-site groundwater. EPA asked the state to assume responsibility for monitoring the continued operation of the wells and to conduct an examination of groundwater contamination if well operation ceased. Finally, deleting Superfund sites and terminating corrective action at RCRA facilities where waste remains without implementing institutional controls may be contrary to EPA guidance. Guidance issued in 2000 states that an institutional control is generally required if the site cannot accommodate unrestricted use and unlimited exposure. However, the guidance does not specify under what circumstances controls are necessary. Instead, it uses language like “generally required” and “likely appropriate.” Four of the sites deleted during fiscal years 2001 to 2003, after the guidance was issued, had residual contamination but no institutional controls in place. However, because EPA’s guidance is vague and does not specify in which cases controls are necessary, it is unclear whether any of the sites we reviewed were inconsistent with the agency’s policy. EPA’s institutional controls project manager believed that some of these deviations from EPA’s guidance may have occurred because, during the period between the completion of the cleanup and site deletion, site managers may have inadvertently overlooked the need to implement the institutional controls. In reviewing files for 135 Superfund and RCRA remedy decisions that were issued during fiscal years 2001 through 2003, we found that most of the documents we reviewed called for some type of institutional control to prevent or limit exposure to residual contamination. As previously mentioned, we reviewed the principal remedy decision documents issued during this time period; however, other remedy decision documents may also include information about institutional controls.
Of the 112 Superfund remedy decisions, 85 called for institutional controls. In 8 additional cases, remedy decision documents called for institutional controls under certain circumstances but not others. For example, one Superfund remedy decision document outlined the need for institutional controls if excavated contaminated soil were to be disposed of on-site, rather than at another facility. Finally, some of the Superfund documents we examined were interim remedy decision documents; while some of those documents did not call for institutional controls, future documents may include provisions for such controls if waste is left on-site after remedy construction is completed. Of the 23 RCRA remedy decisions issued between fiscal years 2001 and 2003, 15 called for institutional controls. Many remedy decision documents did not identify the specific institutional control mechanism, or type of control, to be used. Of the 93 sets of Superfund remedy decision documents we examined that called for institutional controls under all or certain circumstances, 81 discussed the mechanism to some degree. Almost all of the 15 sets of RCRA remedy decision documents we examined that called for institutional controls discussed the mechanism to a certain extent. However, in both sets of documents, these discussions were often vague, gave a list of options, or discussed mechanisms for one planned control but not another (e.g., a document only specified an institutional control mechanism for restricting the use of groundwater and did not specify a control for contaminated soil). For those documents that discussed specific institutional controls—including those that listed options rather than a selected control or controls—deed notices and groundwater use restrictions, followed by covenants and zoning, were most commonly mentioned, as shown in figure 7 (32 mentions of groundwater use restrictions, 29 of deed notices, 25 of covenants, and 55 of other types of institutional controls). Twelve of the documents were vague in describing a mechanism, and, in 13 cases, the documents did not mention a mechanism at all. Thorough planning is critical to ensuring that institutional controls are implemented, monitored, and enforced properly. EPA guidance specifies that staff should evaluate institutional controls in the same level of detail as other remedy components. Furthermore, it advises staff to make several determinations regarding a number of key factors (see table 3) and to describe them in the remedy decision documents. As EPA’s draft guidance on institutional controls points out, without specific information on the institutional controls—such as their objectives; the mechanisms (or kinds of controls) envisioned; the timing of their implementation and duration; and who will be responsible for implementing, monitoring, and enforcing them—the site manager and site attorney may be unable to interpret the intent of the remedy selection document. For example, managers currently responsible for some sites we reviewed were not involved with the remedial investigation or preparation of the ROD for the sites and, therefore, may not fully understand what types of controls were envisioned when the document was written. In addition, without specific information on the proposed institutional controls for a site, the public may not fully understand the restrictions on site use necessary to prevent exposure to residual contamination. Vague language may also result in creating unintended rights and/or obligations.
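Our document review in effect asked, for each set of remedy decision documents, whether each of the four key factors was addressed. A minimal sketch of that kind of completeness check follows; the data structure, site names, and yes/no values are hypothetical and are not the actual review instrument used for this report.

```python
from dataclasses import dataclass

# The four key factors that EPA guidance directs staff to document (see table 3).
FACTORS = ("objective", "mechanism", "timing_and_duration", "responsible_parties")

@dataclass
class RemedyDecisionReview:
    """One reviewed set of remedy decision documents; all fields are illustrative."""
    site: str
    objective: bool
    mechanism: bool
    timing_and_duration: bool
    responsible_parties: bool

    def discussed(self):
        return [f for f in FACTORS if getattr(self, f)]

    def complete(self):
        return len(self.discussed()) == len(FACTORS)

# Hypothetical results mirroring the pattern described below: objectives are
# usually discussed, while responsible parties rarely are.
reviews = [
    RemedyDecisionReview("Site A", True, True, False, False),
    RemedyDecisionReview("Site B", True, False, False, False),
    RemedyDecisionReview("Site C", True, True, True, True),
]

for factor in FACTORS:
    count = sum(1 for r in reviews if getattr(r, factor))
    print(f"{factor}: discussed in {count} of {len(reviews)} document sets")
print(f"all four factors discussed: {sum(r.complete() for r in reviews)} of {len(reviews)}")
```

Tabulations of this kind underlie the per-factor counts reported in the discussion that follows.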
As shown in figures 8 and 9, the remedy decision documents we examined generally discussed the objective of the institutional controls. Eighty-six of the 93 sets of Superfund documents we reviewed that addressed institutional controls (whether under all or certain conditions), and all of the document sets for the 15 RCRA sites, discussed the objective, at least in general terms. For both programs, however, the level of detail in the discussion of the objective varied greatly. For example, one Superfund ROD called for “the use of institutional controls to help prevent human exposure to any residual contaminants at the site following the completion of remedy construction,” which is a general purpose of institutional controls rather than a specific objective. Other decision documents included more detailed discussions of objectives; for example, one document discusses institutional controls “for future development that would prevent inappropriate disturbance of remediated mine sites and potential remobilization of contaminants” and “to prevent the use of new drinking water wells where contaminated aquifers exist.” Of the 93 sets of Superfund documents and 15 sets of RCRA documents we examined, 81 and 14, respectively, discussed the mechanism to be used, at least generally. However, the specific mechanism for each institutional control was identified in only 35 of the sets of Superfund documents and in 5 of the sets of RCRA documents. Most discussions were vague, gave a list of options, or discussed mechanisms for one planned control but not another. For example, 24 documents mentioned “deed restrictions” without detailing how the deed would be restricted. EPA guidance points out that the term “deed restriction” is not a traditional property law term, but rather a shorthand way of referring to types of institutional controls. Furthermore, it states that site managers should avoid the generality of “deed restriction” and instead be specific about the types of controls under consideration. Other remedy decision documents were incomplete, suggesting mechanisms for one medium, such as soil, but not another, such as groundwater. In 30 of the Superfund cases and 4 of the RCRA cases, the remedy decision documents gave several options for control mechanisms rather than identifying those that were most appropriate. In contrast, some documents do include a detailed discussion of the institutional control mechanism. For example, one document suggested implementing and monitoring deed notices to ensure that land use is consistent with the cleanup levels selected for the site. If the land is used for residential purposes, additional institutional controls, such as a restrictive covenant, may be needed to limit access to soils. Because some institutional controls—such as informational devices—cannot be enforced, or may not transfer if the property is sold, careful consideration of the institutional control mechanism is generally necessary. EPA guidance points out that since parties other than EPA often implement institutional controls, site managers should consider the time required to put a control in place. However, as shown in figures 8 and 9, less than one-third of the Superfund remedy decision documents and only 1 of the RCRA documents we examined specified the timing of institutional control implementation.
Twenty-five Superfund documents and 1 RCRA document specified when the institutional controls should be implemented—for example, “before the RA is final”—although some of the documents were vague or only indicated timing for one out of several controls. Moreover, for 14 of the Superfund sites, the institutional controls referred to in remedy decision documents had already been implemented. Documents for 45 Superfund and 4 RCRA sites specified how long the institutional controls should remain in place—which was, in most cases, until the contamination was no longer present or cleanup levels were achieved. However, some of the documents indicated the duration of only one of several planned controls. Many of the Superfund and RCRA remedy decision documents we examined did not discuss any of the parties responsible for implementing, monitoring, and enforcing institutional controls. To the extent that responsibility was addressed, most of the discussion centered only on the implementing party, rather than on those responsible for monitoring and enforcing institutional controls. Only 11 Superfund and 3 RCRA document sets discussed parties responsible for monitoring institutional controls, and only 13 Superfund and 4 RCRA document sets discussed parties responsible for enforcing institutional controls (see figs. 8 and 9). According to the EPA draft guidance issued in December 2002, early cooperation and coordination among federal, state, and local governments in the selection, implementation, and monitoring of institutional controls is critical to their implementation, long-term reliability, durability, and effectiveness. Where EPA is implementing a remedy, states often play a major role in implementing and enforcing institutional controls. In addition, under the RCRA program, the state typically imposes and oversees the remedial action. Some governmental controls may be established under state jurisdiction. Furthermore, a local government may be the only entity that has the legal authority to implement, monitor, and enforce certain types of institutional controls, such as zoning changes. EPA guidance states that while EPA and the states take the lead on response activities, local governments have an important role to play in the implementation, long-term monitoring, and enforcement of institutional controls. Without the cooperation of these other parties, the successful implementation of institutional controls may not be ensured. In many cases, remedy documents we examined contained no evidence that planning of institutional controls included consideration of all aspects of the four key elements in the remedy selection process. In total, 34 of the 93 sets of Superfund and 5 of the 15 sets of RCRA remedy decision documents discussed all four elements, at least in part. For example, the documents may have discussed the duration of the institutional controls but not when they will be implemented, or the documents may have discussed who will implement only one of the controls required. EPA’s institutional controls project manager stated that discussion in the ROD may be intentionally vague because key decisions on such issues as who may implement the remedy and institutional controls have not yet been made. He also speculated that site managers may not have given adequate consideration to all relevant aspects of institutional controls at the remedy decision stage.
Without careful consideration of all four factors, an institutional control put in place at a site may not provide long-term protection of human health and the environment. Furthermore, EPA’s 2002 draft guidance recommends planning of the full institutional control life cycle early in the remedy stage—including implementation, monitoring, reporting, enforcement, modification, and termination—to ensure the long-term durability, reliability, and effectiveness of institutional controls. The guidance states that critically evaluating and thoroughly planning for the entire life cycle early in the remedy selection process could have eliminated many of the problems identified to date. In addition, according to the EPA guidance, calculating the full life-cycle cost is an essential part of the institutional control planning process. This estimate is important to compare the cost-effectiveness of institutional controls with that of other remedy elements and to ensure that parties responsible for implementing, monitoring, and enforcing institutional controls understand their financial liability for these activities. Relying on institutional controls as a major component of a selected remedy without carefully considering all of the applicable factors—including whether they can be implemented in a reliable and enforceable manner—could jeopardize the effectiveness of the entire site remedy. At the Superfund sites we reviewed, institutional controls often were not implemented before site deletion, as EPA requires. Moreover, efforts to monitor institutional controls after they are implemented may also be insufficient. Finally, EPA may have difficulties ensuring that the terms of certain types of institutional controls in place at some Superfund and RCRA sites can be enforced, and state laws may limit EPA’s ability to implement and enforce needed controls. Institutional controls were often not implemented before site deletion, as required, at the Superfund sites we reviewed. Under EPA guidance, a site may not generally be deleted from the NPL until all appropriate response actions, including institutional controls, have been implemented. Timely implementation of institutional controls is important because, until the controls are in place at a site, there is a greater potential for the public to become exposed to any residual contamination. At 32 of the 53 Superfund sites deleted during fiscal years 2001 through 2003, institutional controls were likely appropriate, according to EPA guidance, because waste remained in place at these sites above levels that allowed for unrestricted use and unlimited exposure. Our discussions with cleanup officials and our review of supporting documentation, however, indicate that all institutional controls were implemented before site deletion at only 24 of these 32 sites. In the case of 4 of the remaining 8 sites, even though EPA site managers believed certain of the institutional controls had been implemented at the site, our subsequent requests for documentation revealed that these controls had not been implemented. At 2 of these sites, there were no institutional controls in place at all. In another 2 cases, institutional controls were implemented, but only after deletion of the site. In 2 other cases, remedy decision documents did not call for institutional controls, but because EPA guidance does not specify in which cases controls are necessary, it is unclear whether the absence of controls at these 2 sites was inconsistent with this guidance.
Furthermore, institutional controls were implemented before site deletion at only 2 of the 4 Superfund sites deleted during fiscal years 1991 through 1993 that had residual contamination above levels that would allow for unrestricted use of the site. The 2 other sites were deleted without institutional controls, even though the site manager for 1 of these sites believed there were institutional controls in place. EPA’s institutional controls project manager believed that sites with residual contamination may have been deleted without institutional controls at least in part because site managers lost track of the need to implement the institutional controls between the time that active remediation of the site ended and the site’s deletion. Implementation of institutional controls at the RCRA facilities we examined generally occurred by the time the corrective action was terminated. RCRA program guidance does not address the timing of implementation of institutional controls relative to termination of corrective actions. Rather, owners and operators of RCRA facilities that treated, stored, or disposed of hazardous waste must submit documentation indicating the location and dimensions of a closed hazardous waste facility before its closure. Facility closure in the RCRA program occurs after all RCRA-related activities at a site, including corrective action, end and after the facility undergoes a closure process. Among the 6 state RCRA corrective action programs we reviewed, state officials for 3 of the programs stated that if institutional controls are required, they must be in place before the RCRA corrective action is terminated. Of the 4 RCRA facilities where corrective action was terminated during fiscal years 2001 through 2003 that likely required institutional controls, only 2 had all controls in place by the time the corrective action was terminated. At 1 of the remaining facilities, the sole institutional control was implemented about 1 year after the corrective action was terminated; at the last facility, at least one of several controls was implemented after the corrective action was terminated. Monitoring of institutional controls at Superfund sites after they have been implemented may be inadequate to ensure their continued protectiveness. At sites where contamination is left in place above levels that allow for unlimited use of the site and unrestricted exposure to site contaminants, CERCLA requires reviews once every 5 years of the continued protectiveness of the remedy, including any institutional controls in place. According to EPA’s guidance, these 5-year reviews usually consist of community involvement and notification, document review, data review and analysis, site inspection, interviews, and a determination of remedy protectiveness. As a part of these reviews, EPA’s guidance calls for a determination of whether institutional controls successfully prevent exposure to site contaminants and a specific check on whether they are still in place. EPA officials acknowledged, however, that reviews that only occur every 5 years may be too infrequent to ensure the continued protectiveness of the institutional controls. At some of the sites we examined, 5-year reviews uncovered institutional control violations that could have been discovered and stopped earlier with more frequent monitoring. For example, an institutional control at 1 Superfund site we examined prohibited any use of groundwater without prior written approval from EPA. 
When EPA conducted its 5-year review in April 2003, agency officials discovered that over 25 million gallons of groundwater from the site had been pumped for use as drinking water during 2002. Moreover, the agency official who conducted the 5-year review did not know how long groundwater had been pumped without EPA’s approval. While many Superfund sites are no longer active, sites that are being reused may be especially vulnerable to activities occurring on-site that may violate an institutional control during the time period between 5-year reviews. At 1 Superfund site we visited, for example, the institutional control for the site requires that digging on the site be monitored to ensure that worker safety precautions are followed. At the time of our site visit, however, active digging was occurring at the site, and the EPA official charged with supervising the site was not aware of it (see fig. 10). The EPA official had not visited the site since the previous 5-year review, which had occurred 4 years earlier. Five-year reviews, even when they do eventually occur, may not ensure that institutional controls are in place. EPA’s guidance on conducting 5-year reviews instructs officials conducting the review to verify that (1) institutional controls are successful in preventing exposure to site contaminants and (2) institutional controls are in place. We interviewed officials at the 32 Superfund sites deleted during fiscal years 2001 through 2003 and the 4 Superfund sites deleted during fiscal years 1991 through 1993 with residual contamination. Most of these officials stated that, during 5-year reviews, they confirmed that the site remedy—including institutional controls—continued to protect the public from exposure to site contaminants. However, while they usually confirmed the protectiveness of the remedy, 8 of these officials did not also verify that site institutional controls were in place. For example, EPA site managers in charge of 3 sites told us they generally did not check whether institutional controls were in place during 5-year reviews. Managers of 4 other sites stated that they generally verified that institutional controls were in place during 5-year reviews; our subsequent requests for documentation, however, revealed that the institutional controls these site managers believed to be in place were never actually implemented. One additional site manager was unsure whether the 5-year review process even included a check on the continued presence of institutional controls. A determination that institutional controls successfully prevent exposure to contaminants at a site is meaningless if the controls that are supposed to be at the site are, in fact, not in place, or their presence is unknown. Unless EPA verifies that institutional controls remain in place during its 5-year reviews, the agency cannot ensure the continued protectiveness of site remedies. Monitoring of Superfund sites by parties other than EPA may occur more often than every 5 years, but this monitoring may not significantly contribute to ensuring the protectiveness of institutional controls at sites. Thirty-two Superfund sites were deleted during fiscal years 2001 through 2003 with contamination left in place. At 26 of these sites, parties responsible for contamination, site owners, or state or local government entities were responsible for conducting some form of site monitoring in addition to the 5-year reviews. In principle, this additional monitoring could help to ensure that site institutional controls remain protective.
Often, however, this monitoring is unrelated to the institutional controls on the site. For example, the additional monitoring activities specifically included a review of the sites’ compliance with institutional controls at fewer than half of these 26 sites; at the other sites, monitoring focused either on analyzing site groundwater or on other activities. Moreover, at none of the 26 sites did monitoring include a specific check on whether site institutional controls were in place, as 5-year reviews do. In fact, at 4 of these sites, monitoring that checked whether institutional controls were in place would have found that controls that had supposedly been implemented were not. In addition, some parties responsible for site monitoring do not meet their monitoring requirements. In 4 cases, site managers indicated that monitoring parties either had not performed the required monitoring or were unable to provide documentation of it. In 1 case, for example, an official in a town with a Superfund site refused to perform monitoring of the site, even though there was significant evidence of trespassing at the site, according to the responsible EPA site manager. In contrast with the Superfund program, the RCRA corrective action program does not include any national requirement to review facilities with residual contamination that have been closed. As a result, EPA has no way of knowing whether institutional controls implemented at such facilities remain in place, or whether they remain protective of human health and the environment. At least some states, however, conduct their own monitoring of closed RCRA corrective action facilities, including determining whether institutional controls remain in place and have not been violated. This practice may reflect the need to track the status of RCRA facilities that have waste in place after the corrective action process is terminated and the facilities are closed. Officials we interviewed in 4 of the 6 states reported some form of postclosure monitoring of RCRA corrective action facilities in their states; an official in 1 additional state stated that her agency is working to implement such monitoring. Two of these states specifically require that facility owners self-certify the continued presence of institutional controls. One state program, for example, requires facility owners to submit a form every 2 years certifying that facility institutional controls are still in place. In addition, this state’s officials conduct inspections of the closed sites every 5 years, during which they verify the self-certifications and ensure that institutional controls remain in place. As of 2001, according to a 50-state survey that an independent research group prepared using funding from EPA, 17 states had established schedules for auditing sites where institutional controls have been implemented, including 7 states that review such sites at least annually. In addition to potentially inadequate monitoring, EPA may have difficulties enforcing the terms of certain institutional controls currently in place, or planned, for some Superfund and RCRA sites. Some institutional controls selected for sites are purely informational and do not limit or restrict use of the property. Informational institutional controls, according to EPA’s guidance, include deed notices, state hazardous waste registries, and advisories to the public.
For example, while a deed notice—which is required by the RCRA corrective action program for certain closed facilities—alerts anyone searching land records to the continuing presence of contamination at the site, such a notice does not provide a legal basis for regulators to prevent a property owner from disturbing or exposing that contamination. Seven of the 32 Superfund sites deleted during fiscal years 2001 through 2003 with waste remaining had some form of informational institutional control in place. Furthermore, EPA recognizes that another mechanism often used at sites to impose institutional controls, a consent decree, is not by itself binding on subsequent property owners or occupants. We found consent decrees in place at 12 of the 32 Superfund sites with residual contamination deleted during fiscal years 2001 through 2003. The use of multiple institutional controls at the same site could alleviate concerns about the use of nonenforceable mechanisms, as long as one of the additional controls is enforceable. In some cases, however, informational, nonenforceable institutional controls were the only controls in place at sites. This was the case at 1 of the Superfund and 2 of the RCRA corrective action sites that we examined that had reached the end of the cleanup process. Moreover, among the sets of remedy decision documents finalized during fiscal years 2001 through 2003 that we examined, 56 of 112 Superfund and 6 of 23 RCRA corrective action sets of documents specified at least one institutional control mechanism; among these, 6 of the Superfund and 3 of the RCRA sets of documents specified only an informational device as the sites’ institutional control. State property laws, which traditionally disfavor restrictions attached to deeds and other land use restraints in order to encourage the free transferability of property, can hinder EPA’s ability to implement and enforce institutional controls. EPA’s guidance warns that state property laws should be researched to ensure that certain types of institutional control mechanisms can be enforced. For example, one state allows use restrictions attached to a deed to be enforced for only 21 years from the recording of the deed. As an EPA official charged with managing a site with such restrictions in this state recognized, the issue of following up on this site after 21 years presents a planning problem for EPA. In several cases, EPA or state officials stated that property owners had to agree before certain proprietary controls, including covenants, could be put in place. Therefore, EPA officials are forced to negotiate aspects of the institutional control with the property owner. This process has the potential to compromise or dilute the enforceability of the proprietary control that is ultimately negotiated. Because RCRA generally does not authorize EPA to acquire any interests in property, many proprietary controls require that third parties such as states be willing to be involved. RCRA officials must thus rely on states, localities, or sometimes even adjacent property owners to hold an easement over a facility property. At least one EPA regional official we interviewed was aware of a state that refuses to serve as a third party in such cases, limiting EPA’s ability to put such institutional controls in place. States have legislative options available to help ensure that institutional controls can be enforced.
Certain states have enacted statutes that provide the state with the legal authority to restrict land use at contaminated properties. Colorado, for example, passed legislation in 2001 that allows the state’s Department of Public Health and Environment to hold and enforce environmental covenants. Colorado’s agreements are binding upon current and future owners of the property, thus allowing the state to enforce these agreements should they be violated. These covenants had been used at 11 state sites, including 1 RCRA corrective action facility, as of August 2004. In addition, several states have adopted statutes providing for conservation easements, which override certain common law barriers to enforcement. A recent effort by the National Conference of Commissioners on Uniform State Laws sought a way to allow states to implement enforceable institutional controls. In 2003, this group finalized a Uniform Environmental Covenants Act that is available for state legislative adoption. According to the group, this legislation provides clear rules for state agencies to create, enforce, and modify a valid real estate document—an environmental covenant—to restrict the use of contaminated real estate. The act creates this new type of institutional control and, according to the group, ensures that it can be enforced. Several states have shown interest in adopting the legislation, according to the chairman of the group that drafted it. Institutional controls help to ensure the protectiveness of remedies at Superfund and RCRA sites where waste remains in place after cleanup. If institutional controls are not properly functioning or cease to apply to the site, the administrative and legal barriers between the residual contamination and potential human exposure to site contaminants disappear. Because of the potential danger of losing these barriers, EPA has recognized the importance of using its 5-year reviews to monitor whether institutional controls are still in place and whether they continue to prevent exposure to residual contamination. Current efforts to monitor institutional controls, however, may not occur with sufficient frequency to identify problems in a timely manner and may not always include checks on controls. Institutional controls are often key components of selected cleanup remedies and, as such, need to be monitored, enforced, and kept in place as long as the danger of exposure to residual contamination remains. Residual contamination can remain at a site long after EPA’s involvement is completed, and an entity other than EPA may assume responsibility for long-term monitoring and enforcement of the controls. However, historically, EPA had no system in place to readily identify which sites had institutional controls in place or whether the controls were being monitored and enforced. To improve its ability to ensure the long-term effectiveness of these controls, EPA has recently begun implementing tracking systems for its Superfund and RCRA corrective action programs. These systems track only minimal information on the institutional controls; as currently configured, they do not include information on long-term monitoring or enforcement of the controls. In addition, initial reports of tracking system data show that there are potential problems in implementing the systems. Regulators must track institutional controls at hazardous waste sites in order to ensure that they remain effective over the long term.
Such controls are often intended to remain in place long after cleanup work has been completed to ensure that a site’s future use is compatible with the level of cleanup at the site and to limit exposure to residual contamination. EPA maintains that an institutional control tracking system should include information about the selection and implementation of the controls as well as their monitoring, reporting, enforcement, modification, and termination. According to EPA, several unique characteristics of institutional controls make tracking them particularly challenging. First, the life-span of institutional controls may begin as early as site discovery and can continue for as long as residual contamination remains above levels that would allow for unrestricted use or unlimited exposure. Therefore, institutional controls may remain necessary at a site indefinitely. Second, the long-term effectiveness of institutional controls depends on diligent monitoring, reporting, and enforcement. Third, institutional controls are often implemented, monitored, and enforced by an entity other than the one responsible for designing, performing, and/or approving the remedy. As a result, an entity other than EPA may be responsible for ensuring that one of the remedy’s critical components—the institutional control—is both effective and reliable in the long term. Historically, EPA has had no way to (1) readily identify which hazardous waste sites relied on institutional controls to protect the public from residual contamination or (2) monitor how the controls were working over the long term. According to EPA’s institutional controls project manager, the need for institutional control tracking systems has been discussed since at least the early 1990s, and environmental groups have long advocated the development of such systems. While several existing EPA information systems track basic information on hazardous waste sites, such as cleanup status and selected remedies, these systems were not designed to capture information on institutional controls at the level of detail necessary to allow for effective tracking and monitoring of the use of these controls. As previously discussed, our analysis of EPA’s use of institutional controls at Superfund and RCRA sites showed that the agency has generally not ensured that institutional controls are adequately implemented, monitored, and enforced. In some cases, for example, we found that controls had not been implemented on a timely basis, and, in at least 4 cases, controls that agency staff thought were in place had never been implemented. An effective institutional control tracking system may alert EPA management to such situations. EPA has recently begun implementing institutional control tracking systems for the Superfund and RCRA corrective action programs. The Institutional Controls Tracking System (ICTS) was designed with the capability to track controls used in a variety of hazardous waste cleanup programs. However, at least initially, ICTS will only include data for Superfund “construction complete” sites. For RCRA corrective action sites, EPA is utilizing its existing RCRA information database to identify sites where institutional controls have been established. In both instances, the EPA tracking systems include only limited, basic information. EPA has not yet decided the extent to which ICTS may be expanded in the future to include more detailed information. 
The RCRA program currently has no plans to track more detailed information regarding institutional controls at its facilities. EPA began developing ICTS in 2001. According to EPA, ICTS is a state-of-the-art tracking system that is Web-based, is scalable, and will serve as the cornerstone for future programmatic and trend evaluations. The system is built around a cross-program, cross-agency, consensus-based institutional control data registry developed by the agency. The ICTS draft project management plan notes that EPA envisioned an integrated tracking system that would be developed collaboratively using a work group approach that relied on existing data sources for its information. The primary sources of the data to be entered in ICTS include RODs and any amendments; explanations of significant differences; notices of intent to delete; and actual institutional control instruments, such as consent decrees, easements, ordinances, and advisories. The objectives of ICTS are to make institutional controls more effective by creating links across all levels of government through a tracking network; improve EPA program management responsibilities; establish relationships with coregulators (other federal agencies, along with state and local regulatory agencies); improve information exchange with individuals interested in the productive use of a site after cleanup; and improve existing processes allowing for notification to excavators of areas that are restricted or need protection prior to digging. EPA designed ICTS to be implemented in three separate phases, or “tiers,” of data collection activities. The initial data gathering effort was focused on collecting Tier 1 data for all sites on the Superfund construction complete list, which includes all deleted sites. Data collected during Tier 1 can be used by EPA management to generate reports with basic status information about institutional controls at sites. Tier 1 data consist of information on whether site decision documents report the presence of residual contamination at the site above a level that prohibits unlimited use and unrestricted exposure, and if present, whether the documents call for controls; the objectives of the institutional control; the specific control instruments, including the administrative or legal mechanism that establishes a specific set of use restrictions; any person and/or organization that may be directly or indirectly involved with institutional controls at the site; and the source of the information that is entered into the data entry form. The initial version of ICTS was designed to provide some baseline information on institutional controls and to serve as a step toward a more comprehensive system. EPA envisions that Tier 2 would (1) identify which institutional controls are in place to prevent use of which media (e.g., soil or groundwater); (2) identify parties responsible for implementing, monitoring, and enforcing the controls; and (3) provide for attaching the latest inspection report. Tier 3 information would include detailed site location information, such as the actual boundaries of the institutional controls. According to the draft ICTS quality assurance project plan, EPA plans to make information from ICTS accessible to EPA and other federal agencies, state and local governments, tribes, and industry groups. Some information about site-specific institutional controls near and within local communities may also be made available to the public via the Internet.
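To make the tiered data design concrete, the sketch below models a Tier 1 entry in Python. It is a minimal illustration of the data elements described above, not EPA’s actual ICTS data registry; all class and field names are invented for this example.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Tier1Record:
        # Hypothetical sketch of a Tier 1 ICTS entry; field names are illustrative only.
        site_id: str                          # Superfund construction complete site identifier
        residual_contamination: bool          # waste above unlimited use/unrestricted exposure levels?
        controls_called_for: Optional[bool]   # do decision documents call for controls (None if no waste remains)?
        objectives: List[str] = field(default_factory=list)           # e.g., "prevent use of new drinking water wells"
        control_instruments: List[str] = field(default_factory=list)  # administrative or legal mechanisms
        involved_parties: List[str] = field(default_factory=list)     # persons or organizations involved with the controls
        information_source: str = ""          # document the entry was derived from, e.g., a ROD

    # Example entry for a hypothetical deleted site:
    example = Tier1Record(
        site_id="SITE-0001",
        residual_contamination=True,
        controls_called_for=True,
        objectives=["prevent use of new drinking water wells"],
        control_instruments=["restrictive covenant"],
        involved_parties=["state environmental agency"],
        information_source="ROD",
    )

Under this reading, Tier 2 would extend such a record with the media each control protects, the parties responsible for implementing, monitoring, and enforcing it, and a link to the latest inspection report; Tier 3 would add the controls’ geographic boundaries.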
Initially, only data for those Superfund sites where construction of remedies has been completed will be entered into ICTS. Although no decision has been made to date, future data collection efforts may include additional sites in EPA’s other cleanup programs (RCRA and Underground Storage Tanks). According to ICTS plans, the tracking system also has the flexibility to include data for sites in other programs, such as Brownfields and State Voluntary Cleanup Programs. Between April and July 2004, EPA regions entered data into ICTS for most of the 899 Superfund construction complete sites, including data on about 280 sites that had been deleted from the NPL. Reports on these data indicate that 154 of the deleted sites had residual contamination; institutional controls were reported for 106 of these sites. Site decision documents did not report institutional controls for the other 48 sites, or about one-third of the deleted sites with residual contamination. EPA’s institutional controls project manager cautioned, however, that the data reported may be inaccurate and need to be verified. The official was concerned, for example, that (1) the standard for what constitutes residual contamination was not consistently applied across all regions, (2) some data may have come from interim decision documents rather than final documents, and (3) some staff entering data into ICTS may have confused whether institutional controls were implemented or only planned. In addition, the EPA official stated that the EPA regions were asked to enter the data into ICTS in 8 weeks, using the best available information and/or their best professional judgment. Because of the expedited data entry, additional research into the status of institutional controls at the site-specific level and significant data quality assurance efforts are necessary to ensure the accuracy of the data. Upon completing the ICTS Tier 1 data entry, EPA plans to assess the data to evaluate the current status of institutional controls at all construction complete sites for data gaps and site-specific control issues. According to the ICTS strategy, once the agency has determined where data gaps and site-specific institutional control problems may exist, the agency will prioritize the work to address these issues on the basis of a variety of factors, including resources and the number of sites with potential issues. EPA’s goal is to identify and review institutional control problems at all construction complete sites over approximately the next 5 years, relying on a combination of special evaluations and scheduled 5-year reviews, focusing on deleted sites as the highest priority. The sites identified as priorities will likely be addressed through a special evaluation, unless a routine 5-year review is scheduled within 12 months of problem identification. Priority evaluations will focus on whether institutional controls were required and properly implemented for all media not cleaned up to levels that allow for unlimited use and unrestricted exposure. EPA does not yet know the scope of these priority evaluations, but expects that these evaluations will be conducted over the next 2 years, resources permitting. After 2 years, the remaining sites will be evaluated in conjunction with or as a component of the normal 5-year review process.
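The strategy’s routing of sites between special evaluations and routine reviews amounts to a simple decision rule, paraphrased below in Python. This is an illustrative reading of the strategy described above, not EPA’s actual procedure; the function and parameter names are invented.

    from typing import Optional

    def review_route(is_priority_site: bool,
                     months_until_next_5yr_review: Optional[int]) -> str:
        # Illustrative paraphrase of the ICTS follow-up strategy described above.
        if not is_priority_site:
            # Non-priority sites are evaluated as part of the normal 5-year review cycle.
            return "normal 5-year review"
        if (months_until_next_5yr_review is not None
                and months_until_next_5yr_review <= 12):
            # A routine review is scheduled soon enough to serve in place of a special evaluation.
            return "upcoming 5-year review"
        return "special evaluation"

    # For example, a priority site whose next routine review is 18 months away
    # would be routed to a special evaluation:
    assert review_route(True, 18) == "special evaluation"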
To track institutional controls at RCRA corrective action sites, EPA modified RCRAInfo—the agency’s database of information on individual RCRA sites—to identify sites where institutional controls have been established as part of, or to augment, an interim or final corrective action. Details to be entered into RCRAInfo for pertinent sites include the type of institutional controls (governmental control, proprietary control, enforcement or permit tool, or informational device); the scheduled and actual dates that the controls were fully implemented and effective; and the responsible agency (state or EPA). While EPA currently has no plans to track more detailed information regarding institutional controls at its facilities, the RCRA database requires identifying a location where additional information concerning the specific control can be accessed (e.g., responsible agency contact information). In April 2004, EPA officials asked the regions and/or states to enter the requested information into RCRAInfo by September 30, 2004, for the 1,714 GPRA baseline facilities, and by the end of fiscal year 2005 for the remainder of the 3,800 RCRA facilities in the corrective action workload universe. Analysis of the RCRA institutional control tracking system information showed that, by November 22, 2004, only 4 EPA regions, and 7 states in those regions, had identified a total of 87 facilities where institutional controls had been established. Moreover, according to the head of EPA’s RCRA corrective action program, because the agency asked the regions and states to identify and report on only those facilities with institutional controls, rather than asking for reports on all sites indicating whether or not controls were established, the agency does not know the extent to which the data reported by this minority of regions and states are complete. Additionally, the official stated that the agency does not know whether the institutional controls that were reported were actually verified to be in place and operating as intended. In December 2004, the RCRA corrective action program official reminded officials in all 10 EPA regions of the importance of entering these data. In contrast with its plans for the Superfund ICTS, the agency has no plans to verify that the institutional control information reported for RCRA corrective action facilities accurately reflects actual conditions. Information on institutional controls in the new Superfund and RCRA tracking systems was primarily derived from reviews of decision documents contained in the individual site files. As such, these data reflect the planned use of institutional controls, which may or may not reflect the controls as actually implemented. As previously noted, our review of the use of institutional controls at Superfund sites disclosed four cases where the planned controls had never been implemented. These cases illustrate the need for EPA to determine not only whether institutional controls were required at a site but also whether they were implemented. While EPA currently plans to review the actual use of controls at all Superfund sites with residual waste, such reviews may take up to 5 years to complete. The RCRA program, on the other hand, has no current plans to determine whether (1) institutional controls have been required in all appropriate situations or (2) all required controls were actually implemented.
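The reporting gap described above is structural: because regions and states were asked to report only facilities with controls, a facility with no entry could either need no controls or simply be unreported, and the data alone cannot distinguish the two cases. The sketch below illustrates this, modeling the control details listed above; the field names are invented and do not reflect the actual RCRAInfo schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import Dict, Optional, Set

    @dataclass
    class RcraControlEntry:
        # Hypothetical sketch of the institutional control details described above.
        facility_id: str
        control_type: str                 # governmental, proprietary, enforcement/permit tool, or informational
        scheduled_date: Optional[date]    # date the control was scheduled to be fully implemented and effective
        actual_date: Optional[date]       # date the control actually became effective
        responsible_agency: str           # "state" or "EPA"

    def ambiguous_facilities(workload_universe: Set[str],
                             reported: Dict[str, RcraControlEntry]) -> Set[str]:
        # Facilities with no reported entry are ambiguous: they may legitimately
        # need no controls, or they may simply not have been reported yet.
        return workload_universe - set(reported)

Asking for an affirmative report on every facility, whether or not controls were established, would remove this ambiguity, because an absent entry would then be a known data gap rather than an unknown condition.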
Information necessary to determine whether institutional controls are being monitored and enforced is not currently included in either the Superfund or RCRA tracking systems. As previously noted, monitoring of institutional controls at Superfund sites after they have been implemented may be inadequate to ensure their continued protectiveness. Failure to monitor or enforce institutional controls can compromise the protectiveness of remedies put into place and, consequently, expose the public to residual hazardous waste. While EPA plans to include information on monitoring and enforcing institutional controls at Superfund sites in the Tier 2 data for ICTS, EPA’s institutional controls project manager stated that it is uncertain whether ICTS will ever be expanded to include Tier 2 or Tier 3 data. Further, there is no plan to include such information in the RCRA tracking system, since EPA regulations do not require any review of terminated RCRA corrective action sites. Currently both tracking systems only identify where an interested party may go to obtain more information on a particular site. As previously noted, the objectives of ICTS include improving information exchange with individuals interested in the productive use of a site after cleanup and improving existing processes for notifying excavators, before they dig, of areas that are restricted or need protection. EPA acknowledges that there is an immediate need for disseminating readily available information about institutional controls at contaminated sites. This need will only increase in the future as sites’ remediation advances and as more contaminated land and water resources are identified for potential reuse. Without knowledge of the controls at a site, excavators might unknowingly contact or otherwise disturb residual contaminated media. At this time, to obtain information about possible institutional controls at the site of interest, excavators would need to search many different databases and sources of information before operations could begin. While information on institutional controls at RCRA corrective action sites is planned to be available to the public by April 2005 and this capability is planned for ICTS in the future, EPA has not yet determined what information on institutional controls at Superfund sites will be made available to the public. Additionally, EPA currently has no assurance that the institutional control information on RCRA sites that will be made available to the public accurately reflects actual conditions. The Superfund ICTS and RCRA tracking systems, together, currently cover a universe of more than 2,600 hazardous waste sites. Expanding the existing tracking system information to reflect the institutional controls as actually implemented and to include long-term monitoring and enforcement information will likely be a resource-intensive task. Nevertheless, without such additional data, EPA has no assurance that the institutional controls actually implemented are continuing to provide the level of protectiveness intended. To this end, EPA has established a task force to decide whether and how to expand the institutional control tracking systems.
Many of the sites that have been cleaned up under EPA’s Superfund and RCRA corrective action programs rely on institutional controls to ensure that the public is not exposed to sites’ residual contamination, and it is likely that a growing number of sites remediated in the future will rely on such controls. However, the long-term effectiveness of these institutional controls depends on EPA resolving several issues. First, EPA’s guidance does not specify under what circumstances a site with residual contamination should have institutional controls. Rather, the guidance states that an institutional control is “generally required,” or “likely appropriate,” if the site cannot accommodate unrestricted use and unlimited exposure. In addition, EPA has identified four factors in its guidance that should be considered during the remedy decision stage—the objective of the institutional control; the mechanism, or type of control, used to achieve that objective; the timing of the implementation of the control and its duration; and the party who will bear the responsibility for implementing, monitoring, and enforcing the institutional controls. Adequately addressing these factors is intended to help ensure that the control will effectively protect human health. But without documentation that these four factors are considered at the remedy decision stage, there is no assurance that sufficient thought has gone into designing the institutional controls and ensuring that they can be successfully implemented, monitored, and enforced. Once the controls are implemented, monitoring is necessary to determine their continued effectiveness and to check that they remain in place. Current efforts to monitor institutional controls, however, may not occur with sufficient frequency to identify problems in a timely manner and may not always include checks on controls. Finally, EPA’s current efforts to begin tracking institutional controls could be a positive step toward achieving successful implementation, monitoring, and enforcement of institutional controls at Superfund and RCRA sites. As presently configured, however, these tracking systems may not significantly contribute to improving the long-term effectiveness of institutional controls. Although EPA has recognized many of these problems and is developing draft guidance documents that may address many of them, until these documents are finalized, the extent to which they will resolve the problems we have identified is unclear.
In order to ensure the long-term effectiveness of institutional controls, we recommend that the Administrator, EPA: clarify agency guidance on institutional controls to help EPA site managers and other decision makers understand in what cases institutional controls are or are not necessary at sites where contamination remains in place after cleanup; ensure that, in selecting institutional controls, adequate consideration is given to their objectives, the specific control mechanisms to be used, the timing of implementation and duration, and the parties responsible for implementing, monitoring, and enforcing them; ensure that the frequency and scope of monitoring at deleted Superfund sites and closed RCRA facilities where contamination has been left in place are sufficient to maintain the protectiveness of any institutional controls at these sites; and ensure that the information on institutional controls reported in the Superfund and RCRA corrective action tracking systems accurately reflects actual conditions and not just what is called for in site decision documents. We provided EPA with a draft of this report for its review and comment. EPA agreed with the findings and recommendations in the report and provided information on the agency’s plans and activities to address them. Regarding our recommendation that EPA clarify in its guidance when controls are needed, EPA stated that the agency will continue to develop cross-program guidance to clarify the role of institutional controls in cleanups and has a number of such guidance documents in draft form, under development, or planned. Regarding our recommendation that EPA demonstrate sufficient consideration of all key factors in selecting controls, EPA stated that the agency agrees that sufficient consideration of all key factors should be completed at remedy selection, but does not agree that this information should be included in the remedy decision document. However, our report does not suggest that the information be included in the remedy decision document itself, but rather that it be included in some cleanup-related documentation. Regarding our recommendation that EPA ensure that the frequency and scope of monitoring efforts are sufficient to maintain the effectiveness of the controls, EPA noted that it is revising guidance to address this issue. For example, according to EPA, the agency’s draft implementation, monitoring, and enforcement guidance will require periodic evaluation and certification from a responsible entity at the site stating that the controls both are in place and remain effective, and the draft implementation and assurance plan guidance will include specific roles and responsibilities for monitoring efforts. Finally, regarding our recommendation that EPA ensure that the information on controls reported in new tracking systems accurately reflects actual conditions, EPA stated that, among other actions, regions are currently undertaking a quality assurance effort to ensure that the information in the system reflects actual conditions. EPA’s completion of its ongoing and planned activities should, if implemented successfully, effectively address the concerns we raised in this report. In addition to comments directly relating to our recommendations, EPA also offered a number of general comments on the draft report. EPA pointed out that a “missing institutional control” does not, by itself, necessarily represent an unacceptable human exposure or environmental risk or suggest a breach of remedy.
We agree that the mere presence of residual contamination at a site does not necessarily indicate the need for institutional controls, and we acknowledge that EPA generally—although not always—requires that institutional controls be put in place at sites where total cleanup is not practical or feasible. We believe, however, that in cases where EPA’s selected remedy for a particular site includes institutional controls as an integral component of the remedy, the agency has determined that such controls are necessary and, as such, the controls should be effectively implemented, monitored, and enforced. In addition, EPA noted that an evaluation of a small universe of sites may overestimate the number of sites with potential institutional control problems. However, we are not making any population estimates, but are describing only the results for those specific cases we reviewed. This report specifically acknowledges that the results from the nonprobability samples for our analysis cannot be used to make inferences about a population because some elements of the populations being studied have no chance or an unknown chance of being selected as part of the sample(s). Finally, EPA commented that an increased use of institutional controls does not mean that the agency advocates less treatment; we do not believe that this report implies that this is the case. The full text of EPA’s comments is included in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees; the Administrator, EPA; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix III. The primary objective of this review was to examine the long-term effectiveness of institutional controls at nonfederal sites in the Environmental Protection Agency’s (EPA) hazardous waste cleanup programs. Specifically, we reviewed (1) the extent to which institutional controls are used at sites addressed by EPA’s Superfund and Resource Conservation and Recovery Act (RCRA) corrective action programs; (2) the extent to which EPA ensures that institutional controls at these sites are implemented, monitored, and enforced; and (3) EPA’s challenges in implementing systems to track these controls. Although both the Superfund and RCRA programs address federal and nonfederal sites, our review did not address federal sites because federal agencies are generally responsible for cleaning up their own sites and EPA involvement is limited. Furthermore, our review focused on institutional controls that remain in place after site deletion or termination to determine whether these controls are effective in the long run. We also focused our review of RCRA facilities on those whose cleanup was led by EPA. To examine the extent of the planned use of institutional controls, we examined all 112 Superfund records of decision (ROD)—involving 101 Superfund sites—finalized during fiscal years 2001 through 2003, and statements of basis or other final decision documents for all 23 RCRA corrective action facilities that reached the remedy decision stage during that period. 
In this regard, we examined only the principal remedy decision documents for the sites in our universe, rather than all remedy decision documents. Institutional controls may be called for in a number of EPA documents. In the Superfund program, at least two types of documents, in addition to RODs, may sometimes include information about institutional controls at the site—ROD amendments and explanations of significant differences. In the RCRA program, a variety of documents may include information about institutional controls, including permits, permit modifications, statements of basis, and other documents. Because of the number of potential sources of information regarding the planned use of institutional controls, we asked regional officials responsible for the sites to provide us with documentation relevant to the remedy decision at the site. In most cases, regional officials provided us with either a statement of basis, a final decision document, or both. Because we did not look at all remedy decision documents for these sites, we may not have captured all institutional controls at the sites we examined. To address the extent of institutional control use at Superfund sites and RCRA corrective action facilities, we examined EPA’s use of institutional controls at a nonprobability sample of nonfederal sites and facilities where (1) the cleanup process was completed in earlier periods, for historical perspective; (2) cleanup had recently ended; and (3) the remedy had only recently been selected, for insight into the future use of these controls. To gain a broader view of past use of institutional controls, we reviewed files for all 20 Superfund sites deleted from the National Priorities List (NPL) during fiscal years 1991 through 1993; in addition, in the two EPA regions with the most such facilities—Region III in Philadelphia and Region V in Chicago—we reviewed files for all 40 RCRA facilities at which, according to EPA’s database, a preliminary investigation was conducted and corrective action was terminated before fiscal year 2001. Regarding sites where the cleanup was recently completed, we examined site documentation for all 53 Superfund sites deleted from the NPL during fiscal years 2001 through 2003 and at all 31 RCRA facilities where corrective action was terminated during the same period. With the exception of the historical RCRA facilities we examined in two regions, for those deleted sites or terminated facilities whose documentation indicated the use, or potential use, of institutional controls, we conducted follow-up interviews with EPA or state officials knowledgeable about the site to obtain detailed information and additional documentation and to determine what institutional controls were actually in place. To identify the universe of Superfund sites deleted from the NPL during fiscal years 1991 through 1993 and 2001 through 2003, as well as those sites where a remedy decision was reached during fiscal years 2001 through 2003, we obtained data from EPA’s Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS)—a computerized inventory of potential hazardous waste sites that contains national site assessment, removal, remedial, enforcement, and financial information for over 44,000 sites. 
CERCLIS is a relational database system that uses client-server architecture (i.e., each computer or process on the network is either a client or server), installed on separate local area networks at EPA headquarters and all 10 regional Superfund program offices, and is used by more than 1,900 EPA staff. A September 30, 2002, report issued by EPA’s Inspector General found that over 40 percent of the CERCLIS data it reviewed were inaccurate or not adequately supported. The Inspector General’s review focused on site actions, which it defined as activities that have taken place at a site—such as site inspections, removals, studies, searches for potentially responsible parties, RODs, and remedial actions. As a result of its review, the Inspector General concluded that CERCLIS could not be relied upon to provide error-free data to system users. For our review, we verified CERCLIS data related to the NPL sites in our universe, but we did not verify detailed site action data for all sites in CERCLIS. To address the reliability of CERCLIS data, we met with the Inspector General’s staff to discuss the nature of the errors disclosed in their report. According to the Inspector General’s staff, the reliability of CERCLIS data was more of a concern at the action level than at the site level. They indicated that confirming the data with EPA regions would decrease concerns about data reliability. As a result, we confirmed all relevant CERCLIS data fields for all 53 NPL sites deleted during fiscal years 2001 through 2003 and all 23 NPL sites deleted during fiscal years 1991 through 1993; in addition, we verified information regarding all 232 remedy decisions, including 117 RODs, finalized during fiscal years 2001 through 2003. We verified all relevant CERCLIS data fields with staff in the relevant region, as appropriate, including confirming that sites were nonfederal and had been deleted or had a remedy decision during the time frames of interest. Regional staff found no errors with any of the deleted NPL sites in our universe. Regional staff identified errors regarding 2 of the 232 remedy decisions in our universe, including a change to information regarding 1 ROD, and added 1 remedy decision document to our universe, resulting in an error rate of about 1 percent. We corrected the CERCLIS site-level data that we used for our analysis to reflect regions’ changes. In addition, we obtained remedy documentation, Federal Register notices of deletion, and other documents from regional staff that corroborated the accuracy of our data. We also conducted interviews with officials knowledgeable about deleted sites where it appeared there were institutional controls or where it was unclear. As a result of these interviews and further analysis, we amended the number of records of decision finalized during fiscal years 2001 through 2003 to 112 and the relevant number of sites deleted during fiscal years 1991 through 1993 to 20. After taking these additional steps, we determined that the CERCLIS data we used were sufficiently reliable for the purposes of this report. In addition, we visited 5 Superfund sites that had been deleted from the NPL. For the site visits, we went to EPA Region III, headquartered in Philadelphia, which had (1) the most Superfund sites deleted during fiscal years 1991 through 1993 and fiscal years 2001 through 2003 and (2) the most RCRA facilities reaching corrective action termination during the latter time period.
In addition, we visited 5 Superfund sites that had been deleted from the NPL. For the site visits, we went to EPA Region III, headquartered in Philadelphia, which had (1) the most Superfund sites deleted during fiscal years 1991 through 1993 and fiscal years 2001 through 2003 and (2) the most RCRA facilities reaching corrective action termination during the latter time period. Over the course of 5 days in July 2004, we visited the 5 sites in EPA Region III that had institutional controls in place. We conducted a physical inspection of each site to verify compliance with the terms of the institutional controls in place, accompanied by the EPA site manager, a representative of the responsible party, or both. We also visited the relevant county recorder's office to verify that the relevant institutional controls for each site had been recorded and to assess the process for accessing these documents. We also met with local officials responsible for informal monitoring of 1 site. In addition, we met with state officials to learn about a statewide system of groundwater management zones, an institutional control in place at 2 of the sites we visited.

To identify the universe of RCRA facilities that reached the corrective action termination or remedy decision stage over the life of the program, and specifically during fiscal years 2001 through 2003, we obtained data from the RCRAInfo system—the EPA Office of Solid Waste's national data system, consisting of data entry, data management, and data reporting functions, that supports the implementation and oversight of the RCRA Subtitle C Hazardous Waste Program as administered by EPA and its state and tribal partners. RCRAInfo is a centralized, Web-enabled relational database management system (Oracle) stored on a central Unix server at EPA's Research Triangle Park, North Carolina, facility. Access to RCRAInfo is restricted to authorized EPA headquarters, EPA regional, and state staff with RCRA program oversight or implementation responsibilities.

During our review, we also spoke with officials in each of the 10 EPA regions regarding their use of the code in the RCRAInfo system that indicates the termination of corrective action. Specifically, we asked them whether a site coded in this way could include an institutional control, as an official in EPA headquarters had indicated early in our review. Officials in 6 EPA regions indicated that regional policy dictated that a site coded in this manner should not include institutional controls, while officials in the other 4 regions stated that it could. In addition, officials in 5 of the regions expressed doubts or uncertainty about whether use of the code had been consistent over time, whether personnel within their region used the code consistently, or whether states in the region interpreted the code in a uniform manner.

While EPA's Inspector General has not examined the reliability of the RCRAInfo database, at least one previous report about its predecessor system—the Resource Conservation and Recovery Information System—raised significant questions about data reliability. For our review, we verified the data obtained from RCRAInfo with knowledgeable staff in each EPA region. We asked regional officials to verify that (1) the facilities in our universe belonged there and (2) no facilities that should have been in our universe were missing. Verifying the facilities in our universe entailed confirming information about each facility, such as whether it was a federal or nonfederal facility, whether corrective action activities at the facility were led by the state or by EPA, and whether the facility had reached the relevant milestone within the prescribed time frame.
As a result, we checked all relevant RCRAInfo data fields for the 30 EPA-led RCRA facilities where corrective action was terminated during fiscal years 2001 through 2003 and the 21 EPA-led RCRA facilities where a remedy decision was finalized during that period, according to data provided by RCRA officials in EPA headquarters. We verified all relevant RCRAInfo data fields with staff in the relevant region, as appropriate, including confirming that facilities were nonfederal and had had corrective action terminated or a remedy decision finalized during the time frames of interest. From our universe of RCRA facilities where corrective action was terminated, regional officials deleted 1 facility, added 3 more, and edited the data for 1 additional facility, for a total of 32 facilities; subsequent follow-up work and interviews with site managers brought the relevant universe to 31 facilities. Similarly, from our universe of RCRA facilities where a remedy decision was finalized, regional officials deleted 1 facility, added 3 more, and edited the data for 1 additional facility, for a total of 23 facilities. We corrected the RCRAInfo data for facilities in our universe to reflect the regions' changes. In addition, we obtained documentation of remedy selection and corrective action termination from regional staff that corroborated the accuracy of our data. We also conducted interviews with knowledgeable site officials at terminated facilities where it appeared there were institutional controls or where the status was unclear. After taking these additional steps, we determined that the RCRAInfo data we used were sufficiently reliable for the purposes of this report.

To learn the extent to which EPA ensures that institutional controls at Superfund sites and RCRA corrective action facilities are implemented, monitored, and enforced, we interviewed EPA or state officials knowledgeable about particular sites. To identify sites of interest, we examined documentation related to all 20 Superfund sites deleted from the NPL during fiscal years 1991 through 1993, as well as all 53 Superfund sites deleted from the NPL and all 31 RCRA facilities where corrective action was terminated during fiscal years 2001 through 2003. For those deleted sites or terminated facilities whose documentation indicated the use, or potential use, of institutional controls, we conducted follow-up interviews with EPA or state officials knowledgeable about the site to obtain detailed information and documentation regarding the implementation, monitoring, and enforcement of any institutional controls in place.

To understand the extent to which states implement, monitor, and enforce institutional controls in the RCRA corrective action program, we interviewed RCRA program managers in the 2 states with the most corrective action remedy decisions and terminations at state-led facilities during fiscal years 2001 through 2003—Colorado and New Jersey. We also interviewed officials in 4 additional states—California, Nevada, South Dakota, and Texas—selected at random from the 37 states that, in addition to Colorado, were authorized by EPA to conduct RCRA corrective action activities as of March 2002. In addition, we reviewed An Analysis of State Superfund Programs: 50-State Study, 2001 Update, a 2002 report by the Environmental Law Institute, an independent environmental research organization, and interviewed the report's main author.
To inform its study, the Environmental Law Institute collected documents from states, requested program information from them, and conducted telephone interviews to clarify responses and reconcile any discrepancies. While a few states declined to participate, the study achieved a 92 percent response rate. As a result of our review, we determined that this study was sufficiently methodologically sound for the purposes of our review.

To identify the challenges of developing a system to track institutional controls, we interviewed the EPA officials in charge of developing tracking systems for the Superfund and RCRA corrective action programs. We also analyzed documentation related to these efforts and initial data drawn from these systems. In addition, we discussed systems to track institutional controls with officials we interviewed in 6 states, including how the states tracked institutional controls, if at all, and whether the states had any concerns about such national tracking systems.

In addition, we collected information about the Superfund program's Institutional Controls Tracking System (ICTS) to inform a data reliability review of this new database. ICTS is an Oracle database accessed through a user interface consisting of HTML Web pages with JavaScript. The current version of ICTS was designed to provide some baseline information on institutional controls but was planned as a step toward a more comprehensive system. The current ICTS has been used to gather baseline information on institutional controls at approximately 900 EPA Superfund construction completion sites. Officials in all 10 EPA regions were asked to populate the system in 8 weeks using the best available information and/or their best professional judgment. Because of the expedited data entry, EPA plans additional research into the status of institutional controls at the site-specific level, as well as significant data quality assurance activities. In light of the uncertain quality of the data, in this report we present data from ICTS with appropriate caveats.

We conducted our work from October 2003 to January 2005 in accordance with generally accepted government auditing standards, including an assessment of data reliability and internal controls.

John B. Stephenson, (202) 512-3841 ([email protected])
Vincent P. Price, (202) 512-6529 ([email protected])

In addition to the individuals named above, Nancy Crothers, Shirley Hwang, Justin Jaynes, Richard Johnson, Jerry Laudermilk, Judy Pagano, Nico Sloss, and Amy Sweet made key contributions to this report.
The Environmental Protection Agency's (EPA) Superfund and Resource Conservation and Recovery Act (RCRA) programs were established to clean up hazardous waste sites. Because some sites cannot be cleaned up to allow unrestricted use, institutional controls—legal or administrative restrictions on land or resource use to protect against exposure to the residual contamination—are placed on them. GAO was asked to review the extent to which (1) institutional controls are used at Superfund and RCRA sites and (2) EPA ensures that these controls are implemented, monitored, and enforced. GAO also reviewed EPA's challenges in implementing control tracking systems. To address these issues, GAO examined the use, implementation, monitoring, and enforcement of controls at a sample of 268 sites.

Institutional controls were applied at most of the Superfund and RCRA sites GAO examined where waste was left in place after cleanup, but documentation of remedy decisions often did not discuss key factors called for in EPA's guidance. For example, while documents usually discussed the controls' objectives, in many cases, they did not adequately address when the controls should be implemented, how long they would be needed, or who would be responsible for monitoring or enforcing them. According to EPA, the documents' incomplete discussion of the key factors suggests that site managers may not have given them adequate consideration. Relying on institutional controls as a major component of a site's remedy without carefully considering all of the key factors—particularly whether they can be implemented in a reliable and enforceable manner—could jeopardize the effectiveness of the remedy.

EPA faces challenges in ensuring that institutional controls are adequately implemented, monitored, and enforced. Institutional controls at the Superfund sites GAO reviewed, for example, were often not implemented before the cleanup was completed, as EPA requires. EPA officials indicated that this may have occurred because, over time, site managers may have inadvertently overlooked the need to implement the controls. EPA's monitoring of Superfund sites where cleanup has been completed but residual contamination remains often does not include verification that institutional controls are in place. Moreover, the RCRA corrective action program does not include a requirement to monitor sites after cleanups have been completed. In addition, EPA may have difficulties ensuring that the terms of institutional controls can be enforced at some Superfund and RCRA sites: that is, some controls are informational in nature and do not legally limit or restrict use of the property, and, in some cases, state laws may limit the options available to enforce institutional controls.

To improve its ability to ensure the long-term effectiveness of institutional controls, EPA has recently begun implementing institutional control tracking systems for its Superfund and RCRA corrective action programs. The agency, however, faces significant obstacles in implementing such systems. The institutional control tracking systems being implemented track only minimal information on the institutional controls. Moreover, as currently configured, the systems do not include information on long-term monitoring or enforcement of the controls. In addition, the tracking systems include data essentially derived from file reviews, which may or may not reflect institutional controls as actually implemented.
While EPA has plans to improve the data quality for the Superfund tracking system—ensuring that the data accurately reflect institutional controls as implemented and adding information on monitoring and enforcement—the first step, data verification, could take 5 years to complete. Regarding the RCRA tracking system, the agency has no current plans to verify the accuracy of the data or to expand the data being tracked.
Between them, the Forest Service and Interior manage about 700 million acres of federal land, much of which is considered to be at high risk of fire. Federal researchers estimate that from 90 million to 200 million acres of federal lands in the contiguous United States are at an elevated risk of fire because of abnormally dense accumulations of vegetation, and that these conditions also exist on many nonfederal lands. Addressing this fire risk has become a priority for the federal government, which in recent years has significantly increased funding for fuels reduction.

Fuels reduction is generally done through prescribed burning, in which fires are deliberately lit to burn off excess vegetation, and mechanical treatments, in which mechanical equipment is used to cut vegetation. Figure 1 shows before-and-after photos of a site that was thinned to reduce the risk of fire. Although prescribed burning is generally less expensive than mechanical treatment, prescribed fire may not always be the most appropriate method for accomplishing land management objectives—and in many locations it is not an option, either because of concerns about smoke pollution or because vegetation is so dense that agency officials fear a prescribed fire could escape and burn out of control. In such situations, mechanical treatments are required, generating large amounts of wood—particularly small-diameter trees, limbs, brush, and other material that serve as fuel for wildland fires.

Woody biomass can be put to many uses. Small logs can be peeled and used as fence posts, or can be joined together with specialized hardware to construct pole-frame buildings. Trees also can be milled into structural lumber; using computer-operated equipment, some mills can manufacture lumber from logs as small as 4 inches in diameter. Other wood products, such as furniture, flooring, and paneling, also can be produced. Woody biomass can be chipped for use in paper pulp production and other applications—for example, a New Mexico company combines juniper chips with plastic to create a composite material used to make road signs. Woody biomass also can be converted into other products, including liquid fuels such as ethanol and products such as adhesives. Finally, woody biomass can be chipped or ground for energy production—for example, to fire power plants or to produce steam or hot water for manufacturing processes or buildings. Figure 2 shows a trailer full of wood chips being emptied into a container at a California power plant fueled by woody biomass; figure 3 shows chips ready to be fed into a boiler.

Citing biomass's potential to serve as a source of electricity, fuel, chemicals, and other materials, the President and the Congress have encouraged federal activities regarding biomass utilization—but until recently, woody biomass received relatively little emphasis.
A list of major congressional direction follows:

The Biomass Research and Development Act of 2000 directed the Secretaries of Agriculture and Energy to coordinate their research and development efforts leading to the production of biobased industrial products; created the interagency Biomass Research and Development Board, supported by a Biomass Research and Development Technical Advisory Committee; directed the Secretaries of Agriculture and Energy to implement a "Biomass Research and Development Initiative" under which the agencies would provide grants, contracts, and financial assistance for research on biobased industrial products; and authorized an appropriation of $49 million for each of fiscal years 2000 through 2005 to carry out the act's provisions.

The Farm Security and Rural Investment Act of 2002 established a federal procurement preference for biobased products, requiring federal agencies purchasing items costing more than $10,000 to give preference to biobased products; directed the Secretary of Agriculture to award grants for developing and constructing biorefineries (equipment and processes that convert biomass into fuels and chemicals and that may produce electricity); directed the Secretary of Agriculture to provide grants, loans, and loan guarantees to farmers, ranchers, and rural small businesses to purchase renewable energy systems and make energy efficiency improvements, and to make available from the Commodity Credit Corporation $23 million for these activities for each of fiscal years 2003 through 2007; and directed the Secretary of Agriculture to make available from the Commodity Credit Corporation $5 million in fiscal year 2002 and $14 million for each of fiscal years 2003 through 2007 to carry out the provisions of the Biomass Research and Development Act of 2000, extending that act's authorization of $49 million each fiscal year through fiscal year 2007.

The Healthy Forests Restoration Act of 2003 authorized appropriations of $5 million for each of fiscal years 2004 through 2008 for each of two grant programs—a Forest Service program focusing on community-based enterprises and small businesses using biomass, and a USDA program providing grants to offset the costs of purchasing biomass by facilities that use it for wood-based products or other commercial purposes—and increased the authorization contained in the Biomass Research and Development Act of 2000 from $49 million to $54 million for each of fiscal years 2002 through 2007.

The American Jobs Creation Act of 2004 contained tax incentives promoting the use of woody biomass to generate electricity.

Utilization of woody biomass also is emphasized in the federal government's National Fire Plan, a strategy for planning and implementing agency activities related to wildland fire management. For example, a National Fire Plan strategy document cites biomass utilization as one of its guiding principles, recommending that the agencies "employ all appropriate means to stimulate industries that will utilize small-diameter, woody material resulting from hazardous fuel reduction activities." Federal agencies also are carrying out research concerning the utilization of small-diameter wood products as part of the Healthy Forests Initiative, the administration's initiative for wildland fire prevention.

Most of the federal government's woody biomass utilization efforts are being undertaken by USDA, DOE, and Interior; some activities are performed jointly.
For example, USDA, DOE, and Interior signed a Memorandum of Understanding to promote the utilization of woody biomass, and USDA and DOE conduct a joint biomass grant program. Each department also conducts its own woody biomass activities, which generally involve grants for small-scale woody biomass projects, research on woody biomass uses, and education, outreach, and technical assistance aimed at woody biomass users.

USDA, DOE, and Interior have undertaken a number of joint efforts related to woody biomass. In June 2003, the three departments signed a Memorandum of Understanding on Policy Principles for Woody Biomass Utilization for Restoration and Fuel Treatments on Forests, Woodlands, and Rangelands. The purpose of the memorandum is "to demonstrate a commitment to develop and apply consistent and complementary policies and procedures across three federal departments to encourage utilization of woody biomass." The departments also sponsored a 3-day conference on woody biomass in January 2004. To discuss woody biomass developments and to coordinate their efforts, the departments established an interagency Woody Biomass Utilization Group, which meets quarterly.

Another interdepartmental collaboration effort is the Joint Biomass Research and Development Initiative, a joint USDA-DOE grant program authorized under the Biomass Research and Development Act of 2000. The program provides funds for research on biobased products. In fiscal year 2004, the two departments awarded $25 million to 22 projects, and cost sharing by private sector partners raised the value of the projects to nearly $38 million. While the program generally promotes all forms of biomass rather than targeting woody biomass, in 2004 the grant solicitation included woody biomass as an area of emphasis, and, according to a USDA official, 10 projects emphasizing or incorporating woody biomass were funded that year, for a total of about $7.7 million. For example, the Hayfork Biomass Utilization and Value Added Model for Rural Development project in California received about $503,000 to support the design and early implementation phases of a biomass utilization facility, including a log sort yard, small log processor, and wood-fired electrical generation plant. Another California project, Small-Scale, Biomass-Fired Gas Turbine Plants Suitable for Distributed and Mobile Power Generation, received about $242,000 to evaluate the economic benefits of using forestry residues for generating power in small-scale power plants.

USDA and DOE also have collaborated on an assessment of biomass availability, including woody biomass, and have prepared a report summarizing their findings. In another interagency effort, BLM worked with DOE's National Renewable Energy Laboratory (NREL) to identify and evaluate renewable energy resources—including biomass—on public lands, resulting in a February 2003 report titled "Assessing the Potential for Renewable Energy on Public Lands." More recently, in 2004, USDA and Interior entered into a cooperative agreement with the National Association of Conservation Districts to promote woody biomass utilization. Activities to be performed by the association under the agreement include organizing national and regional workshops on woody biomass utilization and developing outreach materials to stimulate investment in small wood industries and bioenergy. USDA, DOE, and Interior also participate in joint activities at the field level.
NREL and the Forest Service have collaborated in developing and demonstrating small power generators that use woody biomass for fuel. These generators, known as BioMax units, are being demonstrated at several sites, including a high school in Walden, Colorado, and a furniture-making business at the Zuni Pueblo in New Mexico. Figure 4 shows the BioMax 15 power generator.

The Forest Service also collaborates with Interior in awarding and funding grants under the Fuels Utilization and Marketing program, a jointly funded grant program targeting woody biomass utilization efforts in the Pacific Northwest. Another collaborative effort at the field level is led by a Forest Service rural community assistance coordinator in the Southwest Region and includes officials from BLM and the state of New Mexico, as well as environmental group and utility company representatives. In addition to studying woody biomass availability and conducting market assessments, this biomass working group is proposing policy changes favorable to woody biomass. It also has studied barriers to biomass use and provided input on project designs so that projects are less likely to be challenged.

The agencies also are collaborating with state and local governments to promote the use of woody biomass. The Forest Service, NREL, and BLM entered into a Memorandum of Understanding with Jefferson County, Colorado, in 2004 to study the feasibility of developing an electricity generating facility using woody biomass from forest thinning projects intended to reduce the risk of wildland fire. In addition to the agencies and Jefferson County, the agreement included the Colorado State Forest Service and a local energy utility. In its January 2005 feasibility study, the partnership reported that about 166,000 tons of biomass would be available each year from forest thinnings and new construction waste. Following this finding, the local energy utility announced that it would consider converting a boiler at one of its plants to burn biomass to generate steam heat for downtown Denver buildings.

Another example of federal agencies working with local governments involves a power plant in Canon City, Colorado, that uses coal and wood chips to fire its boilers. The power plant announced in January 2005 that it plans to sell renewable energy certificates to help recover the costs associated with introducing the renewable fuel source. The wood chips used in the power plant are produced by forest-thinning operations conducted by BLM, the Forest Service, and state and local governments, while the environmental and market analysis for the project was co-funded by DOE. Yet another example of local cooperation involves a January 2005 "declaration of cooperation" signed in central Oregon by officials from the Forest Service, BLM, state and tribal governments, the timber industry, and environmental groups. The signatories have agreed to work together to stabilize the supply of woody biomass as a way of helping create a market for the material.

Most of USDA's woody biomass utilization activities are undertaken by the Forest Service, with other USDA services playing a smaller role. USDA's activities involve grants; research and development; and education, outreach, and technical assistance. USDA implements several grant programs related to woody biomass. The Forest Service provides grants through its Economic Action Programs (EAP), created to help rural communities and businesses dependent on natural resources become sustainable and self-sufficient.
In 2003, according to Forest Service officials, the Forest Service funded 73 projects related to woody biomass utilization; grants ranged from $5,000 to $225,000, for a total of about $3.5 million. A Forest Service official told us that similar levels of effort existed in 2001 and 2002 but that the level of effort declined in 2004 because of reduced funding. The Forest Service currently is preparing a report summarizing the activities carried out under EAP grants nationwide.

Forest Service officials told us that EAP grant funds are distributed among Forest Service regional and national units, which in turn allocate the funds according to regional or national priorities, respectively. For example, the Northern and Intermountain Regions decided to use their regional EAP allocations not only to fund Economic Recovery—a Forest Service program providing financial and technical assistance to improve the economic, environmental, and social conditions of rural communities—but also to fund two regional woody biomass grant programs, one focusing on using small-diameter wood to create specialty products such as flooring, paneling, and wood-plastic composites and the other focusing on biomass utilization for energy production. The second program, known as the Fuels for Schools program, provides grant funds to help public schools convert their fuel oil and gas heating systems to woody biomass heating systems that reduce heating costs. The Darby School District in Montana, for example, provides heat to three schools with wood-burning boilers; this conversion reduced the district's fuel bill by about 43 percent during the first year of operation. The project requires about 500 tons of woody biomass per year, the byproduct of about 50 acres' worth of fuel reduction treatments, according to project officials. As of December 2004, according to Forest Service officials, three Fuels for Schools projects (including the Darby School District's) had been completed, and about 20 schools had completed engineering analyses and were preparing to apply for grant funds. Figure 5 shows the automated wood chip conveyor installed to provide fuel to the boiler as part of the Darby School District project.
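As a rough check of the Darby School District figures cited above, the short calculation below derives the implied per-acre biomass yield. The 10-tons-per-acre result is our own inference from the reported numbers, not a figure supplied by project officials:

    # Reported: about 500 tons of woody biomass burned per year, supplied by
    # roughly 50 acres' worth of fuel reduction treatments.
    annual_fuel_tons = 500
    treated_acres = 50
    print(annual_fuel_tons / treated_acres)  # 10.0 tons of biomass per treated acre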
The Forest Service has created an additional grant program in response to a provision in the Consolidated Appropriations Act for Fiscal Year 2005 authorizing up to $5 million for grants to create incentives for increased use of biomass from national forest lands. A congressional committee report accompanying the act directed the Forest Service "to develop this program with the clear intent to make grants that will result in increased commercial use of biomass products, and which will thereby result in reduced overall hazardous fuels program costs." The Forest Service's specific goals for the grant program are to (1) help reduce management costs by increasing the value of biomass and other forest products generated by hazardous fuel treatments, (2) create incentives and reduce the business risk for increased use of biomass from national forest lands, and (3) institute projects that target and help remove economic and market barriers to using small-diameter trees and woody biomass. Grants will be awarded for up to 3 years in amounts from $50,000 to $250,000 and will require a 20 percent match on the part of grantees; applications are due May 16, 2005, with awards to be announced by June 1, 2005.

Two other USDA agencies—the Cooperative State Research, Education, and Extension Service (CSREES) and USDA Rural Development—maintain grant programs that potentially include woody biomass utilization activities. CSREES oversees the Biobased Products and Bioenergy Production Research grant program, under which a total of $5.4 million is available to support research into the use of agricultural materials—including woody biomass—for fuels or products. CSREES also provides grants to states for research under the McIntire-Stennis Act of 1962, which was enacted to promote forestry research by state colleges and universities. Projects can fall into one of eight areas listed in the act, one of which is the utilization of wood and other forest products. However, this grant program does not emphasize wood products over the other areas, and a CSREES official told us that most funded projects address issues other than woody biomass.

USDA Rural Development oversees grant and loan programs targeting renewable energy, potentially providing support to woody biomass utilization activities. Within Rural Development, the Rural Business-Cooperative Service oversees the renewable energy grant program authorized by the Farm Security and Rural Investment Act of 2002, which emphasizes renewable energy systems and energy efficiency among rural small businesses, farmers, and ranchers. In September 2004, $22.8 million was awarded to a total of 167 recipients; however, most grants were directed toward projects using wind power or agricultural biomass rather than woody biomass. Also within Rural Development, the Rural Utilities Service maintains a loan program for renewable energy projects. A Rural Utilities Service official told us that none of the $119 million loaned under this program since fiscal year 2000 has gone toward woody biomass, although the program would welcome such projects.

Forest Service researchers are conducting research into a variety of woody biomass issues. Researchers have conducted assessments of the woody biomass potentially available through land management projects—for example, in 2003, Forest Service researchers prepared an assessment of the land suitable for mechanical treatment in the western states and the woody biomass that could potentially be produced. Researchers also have developed models of the costs and revenues associated with thinning projects, such as the Fuel Treatment Evaluator. Using this model, users can input the specific area to be treated (by state or county), the desired end condition of the area, and so forth; users also can enter prices for forest products—sawtimber, small-diameter biomass, and the like. The tool then estimates the amount of material in each of various size classes that would have to be removed to achieve the desired end condition, the project cost, and the likely revenues from the project. Researchers also are studying the economics of woody biomass use in other ways; one researcher, for example, is beginning an assessment of the economic, environmental, and energy-related impacts of using woody biomass for power generation.
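To illustrate the kind of estimate the Fuel Treatment Evaluator produces, the sketch below computes a simple cost and revenue breakdown for a hypothetical thinning project. Every input name, price, and formula here is an assumption made for illustration; the actual model's inputs, size classes, and calculations are more detailed:

    # Simplified, hypothetical thinning-project estimate (not the actual model).
    def treatment_estimate(acres, tons_per_acre, sawtimber_share,
                           sawtimber_price, biomass_price, cost_per_acre):
        tons = acres * tons_per_acre                  # total material removed
        revenue = tons * (sawtimber_share * sawtimber_price
                          + (1 - sawtimber_share) * biomass_price)
        cost = acres * cost_per_acre                  # treatment cost
        return tons, cost, revenue, revenue - cost    # last value is net return

    # Example: 1,000 acres yielding 10 tons/acre, 30 percent merchantable as sawtimber.
    tons, cost, revenue, net = treatment_estimate(1000, 10, 0.3, 40.0, 5.0, 300.0)
    print(tons, cost, revenue, net)  # 10000 tons, $300,000 cost, $155,000 revenue, -$145,000 net

As the negative net return in this illustration suggests, thinning projects can cost more than the removed material earns, which is why the report discusses grants and other incentives to offset those costs.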
The laboratory’s strategic plan includes the goal of developing new and improved technologies to use low-value, underutilized forest resources, including thinnings and small-diameter timber, and the laboratory Director told us the laboratory has changed its research approach over the past several years to focus more on the issue of small-diameter trees. Woody biomass- related research at the laboratory includes research into a variety of potential uses for the material, including wood-plastic composites; structures made from small-diameter roundwood; improved paper pulping processes that can accommodate small-diameter trees; water filtration systems using woody biomass fibers; flooring, paneling, and laminated wood beams made from small-diameter trees; and others. For example, one scientist we met with told us that the laboratory is using woody biomass to make water filters that can remove heavy metals, oils, phosphates, and pesticides from water. The laboratory is currently testing the use of these filters to remove heavy metal contaminants from mining site runoff. Another scientist we met with described his efforts to develop techniques for using sound waves to test the strength of small-diameter timber in order to assess its suitability for particular applications. Still other officials are working on less expensive ways of converting woody biomass to liquid fuels; researchers at the laboratory told us they are working on new ways of separating wood into its constituent components—lignin, hemicellulose, and cellulose—in order to improve the conversion process. The Forest Service conducts extensive education, outreach, and technical assistance activities through a variety of staff—small-diameter utilization specialists, rural development program managers, regional EAP coordinators, and others. Much of this activity is conducted by the Technology Marketing Unit (TMU) at the Forest Products Laboratory, which provides technical assistance and expertise in wood products utilization and marketing. TMU has produced an extensive array of publications conveying information about specific aspects of small- diameter wood utilization and marketing—for example, publications on biomass for small-scale heat and power, structural grading of logs from small-diameter trees, and the economic feasibility of making wood products from small-diameter trees—and issues a bimonthly newsletter titled Forest Products Conservation & Recycling Review. TMU staff also provide direct technical assistance to individuals or companies seeking information or assistance. One such user in New Mexico was interested in finding a use for local woody biomass. TMU staff worked with the individual to develop a wood-plastic composite using juniper fibers that could be made into road signs; the composite signs, unlike wooden signs, are not chewed on by animals—and are thus favored by the Forest Service because they do not have to be replaced as frequently. The individual now operates a 15-employee sign-making business utilizing low-value woody biomass. Figure 6 shows signs made from woody biomass mixed with plastic. Similarly, TMU has worked with businesses in Montana to find uses for roundwood, including roundwood buildings and bridges. Roundwood structures developed with TMU assistance include wood kiosks displayed at the 2002 Winter Olympics in Utah; a roundwood community pavilion in Westcliffe, Colorado; and the Darby Community Library in Darby, Montana. 
In addition, a 165-foot suspension bridge designed with TMU assistance, built primarily with 6-inch-diameter lodgepole pine, is under construction in Lolo, Montana. Figure 7 shows a roundwood kiosk made from small-diameter wood; figure 8 shows the interior of the library in Darby, Montana, which was constructed from roundwood.

The Forest Service also has partnerships with state and regional entities that provide a link between scientific and institutional knowledge and local users. One such group, the Colorado Wood Utilization and Marketing Assistance Center, housed at Colorado State University, provides small grants in Colorado and assists communities in identifying technologies that will utilize forest thinnings to heat buildings and generate electricity. Another such partnership operates through the Forest Service's Wood Education and Resource Center in West Virginia, which assists constituents in addressing economic, environmental, technological, and social challenges through training, technology transfer, and applied research. Yet another partnership involves the Forest Service and the Greater Flagstaff Partnership in Arizona, an alliance of 27 environmental and governmental organizations that researches and demonstrates approaches to forest ecosystem restoration in the ponderosa pine forests surrounding Flagstaff, Arizona.

Staff in Forest Service field offices also provide education, outreach, and technical assistance. Each region has an EAP coordinator, and the coordinators we spoke with provided numerous examples of their involvement in woody biomass utilization. For example, one EAP coordinator organized a "Sawmill Improvement Short Course" designed to show small sawmill owners how to better handle and use small-diameter material, how to find markets for small-diameter wood, and so forth. EAP coordinators also have conducted demonstrations of equipment for handling woody biomass cost-effectively, including several demonstrations of a "slash bundler" that can bundle and compress woody biomass for more efficient transportation. Figure 9 shows the slash bundler in operation. Other field staff also provide technical assistance; for example, the Fremont-Winema National Forest in Oregon employs a Forest Products and Economic Development Specialist, who told us he provides general information about new technologies and economic issues to entities looking to engage in woody biomass-related activities; provides assistance in assessing the woody biomass harvesting, processing, and utilization infrastructure; and works with potential grant applicants to help them develop appropriate projects with defined goals and outcomes, which are more likely to be funded. An EAP official told us that the assistance provided to small groups or businesses is critical to getting them established and making them competitive for other assistance, such as USDA Rural Development grants; the official stated that many small businesses lack the expertise to prepare a competitive business plan or to adequately estimate future costs and revenues.

Until November 2004, the Forest Service employed a small-diameter utilization specialist who served as a national resource to provide education and technical assistance. This specialist told us he conducted frequent presentations to both agency and nonagency audiences on using woody biomass and worked as a liaison between parties interested in using woody biomass and the agency officials or private companies that could assist them.
He also maintained a small-diameter utilization Web site. However, in November 2004 he transferred out of the position, which has not yet been refilled.

Although DOE maintains some grant programs and provides technical assistance to help federal, state, and tribal agencies switch to renewable energy, most of its activities focus on research and development. Following a recent reorganization, most of DOE's woody biomass activities are overseen by its Office of the Biomass Program, although some activities also are conducted within the Federal Energy Management Program and the Tribal Energy Program.

DOE maintains several grant programs that emphasize renewable energy, potentially including woody biomass. DOE's Golden Field Office in Colorado administers the National Biomass State and Regional Partnership, which provides grants for biomass-related activities through five regional partners: the Coalition of Northeastern Governors Policy Research Center, the Council of Great Lakes Governors, the Southern States Energy Board, the Western Governors' Association, and DOE's Western Regional Office. DOE provides funds to each regional partner; the partners, in turn, provide grants to states. Although the overall DOE partnership does not emphasize woody biomass over other types of biomass, the Western Governors' Association is directing its DOE funds toward projects involving woody biomass, according to an official with the association.

Another DOE grant program that potentially involves woody biomass is the State Energy Program, which provides grants to states to design and carry out their own renewable energy and energy efficiency programs. States manage the funds and are required to match 20 percent of the DOE grants. In 2004, about $44 million in grants was directed to the states, and another $16 million was directed to special state projects. While the grant program does not emphasize woody biomass over other energy sources, woody biomass projects may be included among those funded, depending on state priorities.

The Tribal Energy Program promotes tribal energy sufficiency, economic development, and employment on tribal lands through renewable energy and energy efficiency technologies. Over the past 2 years, DOE has funded 45 tribal energy projects totaling $8.4 million; the projects are primarily for energy and electricity, with some specifically targeting the utilization of woody biomass. A DOE-funded study involving the Yavapai-Apache Reservation in Arizona, for example, will examine the feasibility of a proposed power generation facility using woody biomass, while another study, involving the Red Lake Band of the Chippewa Indians in Minnesota, will examine the use of woody biomass for producing power, fuels, and products.

DOE's woody biomass research and development activities are managed by its Office of the Biomass Program, which has overall responsibility for managing DOE's research activities relating to the use of biomass for fuels, chemicals, and power. Many woody biomass research and development activities within DOE are carried out by the National Bioenergy Center, a "virtual center" intended to unify DOE's efforts to advance technology for producing fuels, chemicals, materials, and power from biomass. These activities generally encompass research into the conversion of biomass, including woody biomass, to liquid fuels, power, chemicals, or heat.
In addition, a new biomass laboratory—the Biomass Surface Characterization Laboratory—was dedicated at NREL in January 2005. An NREL official told us that DOE does not have an effort specific to woody biomass, though its activities can be applied to the material. DOE also supports research into woody biomass through partnerships with industry and academia; program management activities for these partnerships are conducted by DOE headquarters, and project management is conducted through DOE field offices.

In addition to its research activities, the National Bioenergy Center provides information and guidance to industry, stakeholder groups, and users through presentations and lectures, according to DOE officials. Information also is made available through the DOE Web site. DOE also provides outreach and technical assistance through its State and Regional Partnership, Federal Energy Management Program (FEMP), and Tribal Energy Program. FEMP provides assistance to federal agencies seeking to implement renewable energy and energy efficiency projects, including assistance in designing renewable energy systems and obtaining private-sector financing. Among these efforts is a program focused on using biomass and alternative methane fuels in energy projects at federal facilities; although the program does not focus specifically on woody biomass, a FEMP official told us that military and civilian agencies (including the Forest Service) across the country are increasingly contemplating projects in which woody biomass would be used to heat and power federal installations. In addition to grants, the Tribal Energy Program provides technical assistance to tribes, including strategic planning and energy options analysis.

Interior's activities include limited grant programs and education and outreach; the department's agencies do not conduct research and development into woody biomass utilization issues. Interior also works with its land management agencies to develop policy and direction regarding woody biomass activities. Interior now requires that the agencies' land management service contracts include an option allowing contractors to remove woody biomass generated through the contracts where ecologically appropriate, and it has directed the agencies to develop contract mechanisms to include biomass removal in timber sale contracts.

Many of Interior's woody biomass activities are implemented by BLM, which recently established a woody biomass utilization strategy that will provide a framework for future agency activities and allow it to expand its biomass utilization efforts. The strategy, made final in July 2004, includes overall goals related to increasing the utilization of biomass from treatments on BLM lands, as well as individual action items within three substrategies: developing tools; building expertise within BLM and building networks with other agencies and organizations; and increasing the percentage of treated acres from which harvested biomass is subsequently used. Individual action items include developing contract specifications for appraising biomass and guidelines for estimating biomass volume; training BLM staff in the use of biomass guidance and tools; facilitating technology transfer with key partners such as governments, tribes, and contractors; and increasing the funding available for biomass projects.
BLM also is contemplating a small-scale preferred procurement initiative for woody biomass products, similar to the preferred procurement program for biobased products established in the Farm Security and Rural Investment Act of 2002.

In addition to BLM, three other Interior agencies—the Bureau of Indian Affairs (BIA), the Fish and Wildlife Service (FWS), and the National Park Service (NPS)—conduct activities related to woody biomass. An official from the U.S. Geological Survey told us that her agency does not conduct activities to promote woody biomass utilization. Interior generally does not have grant programs specifically targeted toward woody biomass. However, BIA has provided a limited number of grants to Indian tribes, including a 2004 grant to the Confederated Tribes of the Warm Springs Reservation in Oregon to conduct a feasibility study for updating and expanding a woody biomass-fueled power plant.

Interior agencies conduct education, outreach, and technical assistance, but not to the same degree as the Forest Service. The primary BLM official responsible for woody biomass activities told us that BLM does not have staff at field locations assigned to identify community resources and build community capacity, as the Forest Service does. According to this official, BLM's community outreach is conducted primarily through its land use and management planning activities, which include interaction with environmentalists, community leaders, and others. This official said that BLM is making a concerted effort to promote woody biomass utilization, has hired new forest management staff, and is studying the possibility of engaging in outreach activities through proposed demonstration projects called "incubators," which would serve as examples of successful woody biomass utilization. Funding has not yet been appropriated for these projects, according to this official. Interior also will use the National Association of Conservation Districts, with which it signed a cooperative agreement, to conduct outreach activities related to woody biomass.

BIA provides technical assistance to tribes seeking to implement renewable energy projects; specifically, the agency works with tribes to determine appropriate management activities and offers technical assistance in marketing forest products. Tribal projects include a proposal by the Northern Cheyenne Tribe in Montana to use woody biomass to provide steam and electricity for a manufacturing plant and a study by the Confederated Tribes of the Warm Springs Reservation of the feasibility of producing energy from forest thinning projects. BIA also sponsored a renewable energy conference, with an emphasis on woody biomass, in September 2004. Interior's primary woody biomass official told us that tribal officials are very interested in biomass.

Although FWS and NPS conduct relatively few woody biomass utilization activities, according to agency officials, in some cases the agencies will work to find a nearby user for woody biomass if a market exists for the material. After a 2004 thinning project in Denali National Park, for example, NPS used some cut trees in cabin restoration projects and as firewood for backcountry cabins; however, the bulk of the biomass generated was provided to a nearby coal mine, which wanted the material for use in a reclamation project at the mine site.
NPS officials told us that their agency did not charge the mine for the material but that the arrangement saved NPS several hundred thousand dollars in transportation and disposal fees because the material would otherwise have been sent to a landfill. The officials stated that finding a market for this material "represented a lot of time and effort on the part of local Park Service planners." Both FWS and NPS officials told us that the agencies' woody biomass activities are limited because the agencies produce only modest amounts of the material; most FWS and NPS fuel reduction activities use fire rather than mechanical thinning. Further, according to agency officials, in those instances where woody biomass is generated, the agencies often use the material for their own purposes—for example, using chipped biomass to stabilize soils during restoration projects.

Aside from USDA, DOE, and Interior, several other federal agencies are engaged in woody biomass activities through their advisory or research functions. The Environmental Protection Agency (EPA) provides technical assistance through its Combined Heat and Power Partnership to power plants that generate combined heat and power from various sources, including woody biomass and other renewable energy sources. An EPA official told us that the partnership is fuel neutral, meaning that it does not promote the use of one fuel over another in producing combined heat and power. EPA also has a Green Power Partnership Program to assist federal agencies and companies in procuring power for their facilities from renewable sources.

Three other agencies have limited involvement in biomass activities through their membership on the Biomass Research and Development Board, created by the Biomass Research and Development Act of 2000. The board, which is intended to address all biomass issues, not solely woody biomass, is responsible for coordinating federal activities to promote the use of biobased industrial products. The board consists of members from USDA, DOE, Interior, and EPA, as well as the National Science Foundation, the Office of the Federal Environmental Executive, and the Office of Science and Technology Policy (the latter two within the Executive Office of the President). Officials we spoke with from the National Science Foundation, the Office of Science and Technology Policy, and the Office of the Federal Environmental Executive told us that their involvement in issues specifically related to woody biomass is minimal. We also contacted officials from the Departments of Commerce and Transportation; officials from both told us their departments do not conduct woody biomass utilization activities.

Federal agency efforts to coordinate woody biomass utilization activities, both among and within agencies, occur through formal and informal mechanisms. Formal coordination between agencies occurs through the Woody Biomass Utilization Group and the Biomass Research and Development Board, although most agency officials we spoke with emphasized informal communication—through telephone discussions, e-mails, participation in conferences, and other means—rather than these groups as the primary vehicle for interagency coordination.
To coordinate internal activities, both DOE and Interior have formal mechanisms: DOE coordinates its activities through the Office of Energy Efficiency and Renewable Energy (EERE), while Interior and BLM have each appointed officials to lead their woody biomass efforts; further, Interior's woody biomass policy and BLM's woody biomass strategy guide these organizations' efforts. In contrast, the Forest Service—the USDA agency with the most woody biomass activities—has not assigned responsibility for coordinating its woody biomass activities, potentially leading to fragmentation of effort and diluting the impact of these activities.

Two groups serve as formal vehicles for coordinating federal agency activities related to woody biomass utilization. The Woody Biomass Utilization Group, open to all national, regional, and field-level staff across numerous agencies, is a multiagency group that meets quarterly on woody biomass utilization issues. According to the group's draft charter (which has not been made final), the group's objectives are to (1) implement the policy principles of the June 2003 Memorandum of Understanding among USDA, DOE, and Interior; (2) coordinate, plan, and encourage woody biomass utilization; (3) serve as technical and policy advisers on woody biomass utilization; and (4) function as an information clearinghouse to help identify relevant woody biomass utilization technologies, foster joint demonstrations and pilot projects, identify research and development needs, and highlight successful woody biomass projects. The draft charter calls for a chair position to be rotated annually, generally among USDA, DOE, and Interior.

The other formal group is the Biomass Research and Development Board, which is responsible for coordinating federal activities to promote the use of biobased industrial products. The board consists of members from USDA, DOE, and Interior, as well as EPA, the National Science Foundation, the Office of the Federal Environmental Executive, and the Office of Science and Technology Policy, and is co-chaired by USDA's Under Secretary for Natural Resources and Environment and DOE's Assistant Secretary for Energy Efficiency and Renewable Energy. The board is supported by the Biomass Research and Development Technical Advisory Committee, which includes representatives of nonfederal groups such as industry, academia, and trade associations.

When discussing coordination among agencies, however, agency officials more frequently cited informal mechanisms than the formal groups described above. For example, two officials we spoke with in the Forest Service's Northwest Region told us that although they were aware of the interagency Woody Biomass Utilization Group, they were not aware of any of the group's activities—or even whether the group has a charter. Several officials told us that informal communication among networks of individuals was essential to coordination among agencies; one Forest Service field official told us that, in contrast to formal groups, the more common method for coordinating among agencies is frequent, informal communication through e-mail, telephone calls, and discussions at regional or local conferences or workshops.
Another Forest Service field official emphasized that his informal network of officials—both within and outside the agency and with whom he converses by telephone and e-mail regularly—helps him keep abreast of woody biomass developments by providing reports, documents, and other information. Similarly, a headquarters official in another agency described a network of individuals—both within and outside of the agency—with whom he remains in frequent e-mail and telephone contact. These individuals exchange information regarding projects, policies, potential impacts of legislation, success stories, and the like. In each case, the officials stated that they relied much more upon informal means of coordination than on formal interagency groups. Officials also described other forms of coordination. Two officials described a regional grant application review team that included Forest Service, BLM, BIA, and FWS staff that jointly reviewed applications for fuels treatment grants. Although the main emphasis of the grants was not woody biomass, there was discussion within the review team about biomass issues that arise from fuels treatment projects. Another program that involves interagency coordination is the joint review of applications by USDA and DOE for renewable energy projects authorized by the Biomass Research and Development Act of 2000. In addition, two officials told us that the Forest Service was trying to organize a multiagency team to collaborate on woody biomass efforts within the agency’s Northwest Region. Other officials mentioned state-level interagency working groups focusing on fire and fuels reduction issues and consisting of representatives from the Forest Service, Interior agencies, and nonfederal entities. These groups are primarily concerned with fire suppression capacity, fuel reduction treatments, and community wildland fire planning efforts, not with woody biomass. However, according to these officials, the woody biomass issue is interwoven with these other issues and is often discussed. Further, the networks established by these interagency groups facilitate communication on a variety of issues, including woody biomass, among the states and agencies involved. DOE’s woody biomass utilization activities are coordinated through EERE. Within this office, the Office of the Biomass Program directs biomass research at DOE national laboratories and contract research organizations, while a small number of woody biomass activities are undertaken within two other programs, the Federal Energy Management Program and the Tribal Energy Program. Interior has appointed a single official to oversee its woody biomass activities and is operating under a woody biomass policy in the form of an April 2004 memorandum from the Assistant Secretary for Policy, Management and Budget. This memorandum directs all Interior bureaus and offices to implement the policy principles of the June 2003 Memorandum of Understanding between USDA, DOE, and Interior. According to the official responsible for overseeing Interior’s woody biomass efforts, this memorandum serves as departmental policy until a departmental manual can be updated. Interior also has appointed a Renewable Energy Ombudsman to coordinate all of the department’s renewable energy activities, including woody biomass.
Similarly, BLM has appointed a single official to oversee woody biomass efforts, and, as noted, has developed a woody biomass utilization strategy to guide its activities, including overall goals related to increasing the utilization of biomass from treatments on BLM lands. In contrast to DOE and Interior, the Forest Service—although it developed a woody biomass policy in January 2005—has not assigned responsibility for overseeing its woody biomass activities to a specific individual or office. The agency does have an internal group—the Woody Biomass Utilization Team—that meets to discuss woody biomass issues, but this group does not have responsibility for implementing the policy. And according to some Forest Service officials we spoke with, agency woody biomass activities have been opportunistic, arising from local awareness of and interest in the issue rather than from a national strategy for approaching the issue. One Forest Service headquarters official told us that the agency’s woody biomass activities have been “a grassroots effort on the part of those who have a real burning passion for improving utilization.” However, according to this official, individuals who do not share that passion have not been involved in woody biomass because there has been no central requirement or strategy for addressing the woody biomass issue. Another headquarters official told us that the extent to which woody biomass has been addressed has depended on the knowledge, interest, and availability of the local forest staff and the presence of local markets for woody biomass. Several field officials we spoke with share this view; one field official told us that there is a great deal of interest in woody biomass technology on the part of field staff, but not much coordination and no formal strategy, while another noted that woody biomass activities are “largely dependent on local risk taking.” Yet another field official told us that there is no coordinated approach within the Forest Service to woody biomass; instead, determining what activities to undertake is left up to the forests and ranger districts, and depends on local leadership. The Forest Service does have an individual, located within the agency’s State and Private Forestry branch, who generally serves as the agency’s primary point of contact for woody biomass utilization. However, two officials noted that this individual serves primarily as a consultant, with no influence over budgets or activities. They also stated that, because this official works within the State and Private Forestry branch, he has no influence over agency activities regarding public lands and no influence over the Forest Service’s National Forest System or Research and Development branches, with their associated land bases or budgets. One headquarters official within the agency stated that without stronger central authority or a stronger woody biomass policy, the Forest Service will find it difficult to effect change because while the agency’s primary woody biomass official can discuss technology, innovation, supply, and other issues, he lacks the authority to influence land management practices. Two officials attributed the Forest Service’s lack of a coordinated woody biomass effort to the agency’s decentralized culture, with autonomy at the ranger district, national forest, and regional level.
One official told us that this culture serves the agency well for some purposes but works against the agency when it tries to promote an idea or issue—such as woody biomass utilization—that has not been widely emphasized. Another official noted that each region in the Forest Service has considerable autonomy in developing its own policies, setting its own priorities, and establishing its own procedures, and that, while there is often value in having ideas originate from the field, a more formalized structure is often more effective at accomplishing overall agency objectives. According to this official, the woody biomass issue has reached the stage where a formalized, coordinated national strategy is appropriate. One official told us that the Forest Service’s emphasis on fuel reduction planning and implementation efforts under the National Fire Plan had drawn the agency’s attention away from woody biomass. The 10-year comprehensive strategy for implementing the National Fire Plan contains four overall goals: (1) improving fire prevention and suppression, (2) reducing hazardous fuels, (3) restoring fire-adapted ecosystems, and (4) promoting community assistance, which includes woody biomass utilization. This official told us that the Forest Service’s emphasis on goals 1 and 2 has reduced its ability to focus on the other goals, and “now that the biomass is starting to pile up,” it is time for the Forest Service to begin focusing on woody biomass. The Western Governors’ Association issued a report in November 2004 concurring with this view, stating “Goal 4 must be given the same emphasis Goals 1 and 2 have received in order for its action items—and the 10-Year Strategy as a whole—to be accomplished.” Without an individual or office with responsibility for overseeing woody biomass activities within the agency, the Forest Service risks diluting the effects of its activities because individual units within the agency may undertake woody biomass activities that are not consistent with the activities of other units—or they may choose to undertake no woody biomass activities at all. Further, given the magnitude of the woody biomass issue and the finite funds available to the agency, it is important that the Forest Service ensure that activities on which it places a high priority are undertaken so that it can maximize its accomplishments within its budget. Agency officials cited two principal obstacles to increasing the use of woody biomass: the difficulty in using woody biomass cost-effectively—particularly the obstacles posed by the high cost of harvesting and transporting woody biomass—and the lack of a reliable supply of the material. Agency activities—grants, education and outreach, and research and development—are generally targeted toward the obstacles identified by agency officials. Many officials, however, told us that their agencies are limited in their ability to fully address these obstacles and that additional steps—such as subsidies and tax credits—beyond the agencies’ authority to implement are needed. But agency officials generally did not specify the level of subsidies or tax credits they believe would be necessary, and not all agree that such additional steps are appropriate. Most officials we spoke with cited the difficulty in using woody biomass cost-effectively—that is, in using the material to create products that generate more revenue than is required for their creation.
Other obstacles cited include the lack of a reliable supply of woody biomass; internal agency barriers to effectively promoting woody biomass, including the lack of agency commitment to the issue; and the lack of a local infrastructure to harvest, transport, and process woody biomass. The obstacle most commonly cited by officials we spoke with (30 of 44 officials) is the difficulty of using woody biomass cost-effectively. Officials told us that the products that can be created from woody biomass—whether wood products, liquid fuels, or energy—often do not generate sufficient income to overcome the costs of acquiring and processing the raw material. For example, a Forest Service researcher in California estimated that the cost of generating electricity from woody biomass was about 7.5 cents per kilowatt hour, including costs to harvest, transport, and process the material, as well as operations, maintenance, and capital amortization costs. However, the same researcher noted that at the time of his study, the wholesale price paid for power in California was 5.3 cents per kilowatt hour—meaning that, without receiving additional income for their electricity, producers of woody biomass-generated electricity would lose about 2.2 cents for each kilowatt hour generated if they sold their electricity on the wholesale power market. One factor contributing to the difficulty in using woody biomass cost-effectively, according to 23 officials, is the cost incurred in harvesting and transporting woody biomass. For example, one Forest Service official pointed out that while a single 18-inch-diameter tree of a given height contains the same volume as 20 4-inch-diameter trees of the same height, it is much more expensive to harvest 20 trees than 1. Two officials told us that when the end use for woody biomass calls for chipped or ground material—for example, for use in power plants—it is often more efficient to chip the material in the forest and haul the chips to the plant rather than hauling the unprocessed woody biomass. However, these officials noted that the vehicles typically used to haul chips—known as chip vans—cannot navigate many forest roads, which were designed for logging trucks. Because hauling material in smaller vehicles is more costly, this adds to the difficulty in using the material cost-effectively. Officials pointed out that small installations located close to woody biomass sources will have lower transportation costs, enhancing their ability to use the material cost-effectively. Schools and other buildings located in communities near forests are thus particularly well-positioned for woody biomass use, according to officials—especially buildings currently heated with natural gas or fuel oil, because once their heating infrastructure is converted to accept woody biomass, they can be heated at a lower cost than with those fuels. However, officials also noted that such installations consume relatively small amounts of woody biomass. Five officials primarily involved in research and development noted the costs involved in converting woody biomass to liquid fuels such as ethanol. For example, the chemical makeup of wood makes it more difficult and expensive to convert into ethanol than other substances such as corn, according to officials.
Thus, although ethanol represents a potentially large opportunity for utilizing woody biomass (because of the demand for transportation fuels), the availability of cheaper raw materials such as corn presents an obstacle to its use. Of the 44 officials we spoke with, 22 told us that even if cost-effective means of using woody biomass were found, the lack of a reliable supply of woody biomass from federal lands presents an obstacle because business owners or investors will not establish businesses without assurances of a dependable supply of material. Officials identified several factors contributing to the lack of a reliable supply, including the lack of widely available long-term contracts for forest products, environmental opposition to federal projects, and the shortage of agency staff to conduct activities. Regarding long-term contracts, projects that use stewardship contracting authority may include contracts of up to 10 years—potentially stabilizing the long-term supply of woody biomass—whereas projects conducted outside of this authority must use contracts of a shorter duration. Agency officials cited one stewardship project—the White Mountain project in Arizona, which has a 10-year duration and is expected to treat 50,000 to 250,000 acres—as an example of the benefits of stewardship contracting in stabilizing supply. An official told us that two manufacturers are negotiating with the contractor to establish manufacturing plants using woody biomass removed as part of the project. According to this official, without the assurance of supply offered by a long-term contract, these manufacturers would not have shown interest. However, another official pointed out that Forest Service stewardship contracts must be approved at the regional level, making their use more cumbersome than other contract types. Adding further to the uncertainty of supply, 10 officials told us that environmental opposition poses an obstacle—for example, in the form of appeals and litigation that delay planned projects. Finally, according to five officials, staffing constraints make accomplishing projects in a timely manner difficult even without external opposition; two Forest Service officials told us that even if long-term contracts were available and environmental opposition were not a factor, the lack of staff still hampers the agency’s ability to implement projects. Six officials cited internal agency barriers that hamper agency effectiveness in promoting woody biomass utilization. One Forest Service official told us that, prior to the Forest Service’s January 2005 policy statement on woody biomass, the lack of a strong policy stating that using woody biomass is preferable to piling and burning it hampered the agency because no incentive existed for “field staff to think creatively about how to move [the material] to potential users.” This official told us that even if the Forest Service received no payment for the material, putting it to use was better than piling and burning it—which also brings no revenue—and this preference should be embedded in policy. Two Forest Service officials also noted that the agency’s mechanisms for designing and implementing projects were still geared toward larger, merchantable timber to the detriment of woody biomass.
One official stated that “the Forest Service needs to improve its capabilities to design treatments, contracts, and agreements that will encourage utilization of smaller diameter material,” while another official echoed this view by stating that “timber operations account for the bulk of institutional knowledge about material removal.” Finally, several officials stated that federal agencies have not been effective in communicating the potential benefits of fuel reduction. According to the officials, fuel reduction would reduce fire suppression and rehabilitation costs, avoid damage to watersheds, avoid smoke pollution, and the like. Officials told us that communicating these benefits could reduce opposition to fuel reduction projects, which was cited as a factor in the uncertainty of woody biomass supply. Other officials cited the lack of agency commitment to the issue. For example, a BIA official told us that BIA has not provided the resources and structure required for promoting and developing woody biomass utilization projects. Six officials told us that more funds should be devoted to researching new or less expensive ways to use woody biomass in order to overcome economic obstacles to its use. And two Forest Service officials cited that agency’s lack of a woody biomass policy as an obstacle to effective agency promotion of woody biomass utilization. A variety of other obstacles were noted as well. One official told us that some large facilities such as prisons could use woody biomass to generate their own electricity for less than the cost of electricity sold by electrical utilities. However, such facilities generally would need to have electricity available from the grid in the event that their own generators were unavailable—and, according to this official, utilities can charge rates for this electricity (known as standby power) that can equal the rates charged for electricity that is actually delivered. That is, for every hour the utility stands ready to deliver electricity to the facility, it charges a fixed portion of the rate that would have applied had the electricity actually been delivered—100 percent of the rate in some cases, according to this official. As a result, installations would pay not only the costs of generating their own electricity but also the standby power rates charged by the utility—costs that, when combined, may exceed the cost of simply purchasing electricity from the utility. Withdrawing from the electricity grid entirely can be problematic as well; this official stated that utilities can charge fees—known as exit fees—for doing so. Another obstacle cited by officials is the lack of a local infrastructure for harvesting, transporting, and processing woody biomass, including loggers, mills, and appropriate equipment for treating small-diameter material. Three Forest Service officials we spoke with told us that in some cases the decline in federal logging has left areas without any infrastructure at all, while in other cases the infrastructure that is left is equipped to handle large trees rather than woody biomass. According to officials, contractors need equipment designed for handling woody biomass rather than larger trees in order to cost-effectively harvest and transport the material. However, contractors may not have the capital to purchase this new equipment, and may be unable to obtain loans without assurances of a long-term supply of woody biomass.
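Several of the cost figures cited in this section can be checked with simple arithmetic. The short Python sketch below is purely illustrative: it uses only the 7.5 and 5.3 cents-per-kilowatt-hour figures from the Forest Service study cited earlier, plus the cylinder geometry behind the harvesting-cost comparison; no other values are assumed.

# Illustrative check of the figures above. The 7.5 and 5.3 cents/kWh
# values come from the Forest Service study cited earlier; the rest is
# arithmetic, not data from the report.

GENERATION_COST = 7.5   # cents/kWh: harvest, transport, processing, O&M, capital
WHOLESALE_PRICE = 5.3   # cents/kWh: California wholesale power price at the time

net_margin = WHOLESALE_PRICE - GENERATION_COST
print(f"Net margin on the wholesale market: {net_margin:.1f} cents/kWh")  # -2.2

# Premium (for example, from a subsidy or a renewable portfolio standard)
# a producer would need per kilowatt hour just to break even:
print(f"Break-even premium: {GENERATION_COST - WHOLESALE_PRICE:.1f} cents/kWh")

# Harvesting-cost intuition: modeling tree boles as cylinders of equal
# height, volume scales with diameter squared, so one 18-inch tree holds
# about as much wood as twenty 4-inch trees, yet requires one felling
# cycle instead of twenty.
equivalent_small_trees = (18 / 4) ** 2
print(f"4-inch trees per 18-inch tree: {equivalent_small_trees:.2f}")  # 20.25

The 2.2-cent gap is the per-kilowatt-hour shortfall officials described, and the volume ratio shows why removing many small stems costs far more per unit of wood than removing a few large ones.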
The agency activities we identified were generally targeted toward the obstacles agency officials cited. Agencies provided grants, engaged in outreach, and conducted research aimed at overcoming economic obstacles to woody biomass use, and conducted activities to address other obstacles as well. However, several officials believe that additional steps beyond the agencies’ authorities are needed to fully address the woody biomass issue. Agency activities related to woody biomass were generally aimed at overcoming the obstacles agency officials identified, including many aimed at overcoming economic obstacles. For example, staff at the Forest Service’s TMU have worked with potential users of woody biomass to develop products whose value is sufficient to overcome the costs involved in harvesting and transporting the material; EAP coordinators have worked with potential woody biomass users to overcome economic obstacles; and Forest Products Laboratory researchers are working with NREL to increase the yield of ethanol from woody biomass, making wood-to-ethanol conversion more cost-effective. Some agency activities also are targeted at providing more certainty of supply. A Forest Service official in New Mexico has been meeting with environmental groups to try to obtain consensus on the need for forest-thinning activities. Obtaining consensus can reduce the likelihood of environmental opposition, making Forest Service projects easier to accomplish and allowing a steadier supply of biomass. Although not all groups will support the projects, according to this official, obtaining agreement from major groups can blunt opposition from other groups. Other officials are working on models to predict the amount of woody biomass potentially available, giving users a better sense of the supply of raw materials. Despite ongoing agency activities, 14 officials told us that additional steps—such as subsidies or tax credits—that are beyond the agencies’ authorities are necessary to develop a market for woody biomass. According to several officials, the obstacles to using woody biomass cost-effectively are simply too great to overcome by using the tools—grants, outreach and education, and so forth—at the agencies’ disposal. One official stated that “in many areas the economic return from smaller-diameter trees is less than production costs. Without some form of market intervention, such as tax incentives or other forms of subsidy, there is little short-term opportunity to increase utilization of such material.” Three officials stated that subsidies have the potential to reduce the per-acre cost of thinning, because if there is a market for woody biomass, contractors will be willing to harvest the material for a lower fee, knowing that they can recoup some of their costs by selling the material. According to these officials, subsidies thereby create an important benefit—reduced fire risk through hazardous fuels reduction—if they promote additional thinning activities by stimulating the woody biomass market. Officials told us that tax incentives, subsidies, and low-interest loans may serve to stimulate infrastructure for harvesting, processing, and transporting woody biomass, and that such assistance should target not only larger plants and facilities but smaller operators as well. Harvesters and loggers, for example, could use the assistance to purchase the expensive equipment and machinery required to treat woody biomass and thus help to build the required infrastructure.
It is not only federal officials who hold this view. In testimony before the Congress, the owner of a sawmill that uses woody biomass to generate electricity for the mill stated that woody biomass-to-energy does not work as a stand-alone enterprise. According to this individual, “The cost structure associated with removing woody biomass from the forest, hauling the material to a facility and converting the fiber into a product suitable for electricity production is prohibitive without massive subsidization.” Others see a need for state requirements that utilities procure or generate a portion of their electricity by using renewable resources, known as renewable portfolio standards. Forest Service officials in the Southwest Region are encouraging states in the region to enact renewable portfolio standards that include a woody biomass component. These officials are urging states to go beyond simply requiring electricity from renewable resources and require, or provide favorable treatment of, electricity generated from woody biomass produced as part of forest restoration projects. The official primarily responsible for this effort stated that “using this biomass source will help lower costs and allow restoration activities to occur on many more thousands of acres than present budgets allow.” Agency officials generally did not specify the level of subsidies or tax credits they thought necessary, and not all officials believe that these additional steps are efficient or appropriate. One official told us that, although he supports these activities, the creation of tax incentives and subsidies would create enormous administrative and monitoring requirements. Another official stated that although federal policy changes such as increased subsidies could address obstacles to woody biomass utilization, he does not believe they should be made. Rather, he believes that research and development efforts, combined with market forces, will eventually result in “equilibrium”—in other words, in woody biomass utilization finding its appropriate level. If cost-effective uses of woody biomass can be found, its utilization will increase. Yet another official stated that while production tax credits or subsidies may be successful in getting businesses or industries started, he does not believe they are sustainable over the long term. In addition, he is reluctant to create credit- or subsidy-dependent businesses that would be at the mercy of the annual appropriations cycle. Instead, market-driven solutions are more appropriate—for example, providing information to exploit the existing market, or developing requirements or incentives (such as renewable portfolio standards) that create a market on their own. Further, not all agree with the assumption that the market for woody biomass should be expanded. One agency official told us he is concerned that developing a market for woody biomass may result in overuse of mechanical treatment (rather than prescribed burning) as the market begins to drive the preferred treatment. In other words, given a choice between mechanical thinning and prescribed burning, a forest manager might choose mechanical thinning not because it was the most appropriate tool for the project at hand but to satisfy the demand for woody biomass. This official stated that “if we do that, we are not being good stewards of the land.” Environmental group representatives also have urged caution in taking any steps that expand the market for woody biomass. 
Representatives of one national environmental group told us that relying on woody biomass as a renewable energy source will lead to overthinning, as demand for woody biomass exceeds the supply that is generated through responsible thinning. They also questioned the incentive to create or reconstruct roads in the forests to facilitate inexpensive transportation of woody biomass because they believe doing so introduces unwanted side effects—increased erosion and sedimentation, increased access to areas of the forest that previously had no roads, and increased maintenance and enforcement costs for the federal agencies. Finally, the representatives questioned the true energy gain of using woody biomass—that is, whether the energy involved in harvesting, transporting, and processing woody biomass exceeds the energy contained in the biomass—stating that “it doesn’t make economic sense to burn expensive gasoline to get cheap biomass.” However, they stated that the benefits gained by using the biomass, rather than sending it to landfills or leaving it in the forest—where, in some locations, it would continue to pose a significant fire risk—may justify any net energy loss. The amount of woody biomass resulting from increased thinning activities could be substantial, adding urgency to the search for ways to use the material cost-effectively rather than simply disposing of it. The use of woody biomass, however, will become commonplace only when users—whether small forest businesses or large utilities—can gain an economic advantage by putting it to use. Federal agencies are targeting their activities toward overcoming this and other obstacles—for example, by providing technical assistance and grant funds to businesses facing economic challenges in using woody biomass. But some agency officials believe that these efforts alone will not be sufficient to stimulate a market that can accommodate the vast quantities of material expected. While additional key steps may be necessary at the federal and state levels, we believe the agencies will continue to play an important role in stimulating woody biomass use. However, while both DOE and Interior have designated individuals or offices for coordinating woody biomass activities, no individual or office within the Forest Service has been similarly designated. Without an individual or office with responsibility for overseeing and coordinating woody biomass activities within the agency, the Forest Service can neither ensure that its multiple activities contribute to the agency’s overall objectives nor assess the effectiveness of individual activities. Further, by taking a piecemeal approach to the issue, the agency risks diluting the impact of its activities because different agency units may be emphasizing different priorities. Some local variation may be appropriate—to account for regional differences in infrastructure, for example, or in forest type. Nevertheless, a coordinated approach is essential if the Forest Service is to capitalize fully on its potential to increase woody biomass utilization. To improve the Forest Service’s effectiveness in promoting woody biomass utilization, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to assign responsibility for overseeing and coordinating the agency’s woody biomass utilization activities to a specific official or office within the agency. We provided a draft of this report to the Secretaries of Agriculture, Energy, and the Interior for review and comment.
USDA concurred with our findings and recommendation, and the department’s comment letter is presented in appendix II. DOE officials stated they had no comments on the report, while Interior did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Agriculture, Secretary of Energy, Secretary of the Interior, Chief of the Forest Service, Director of BLM, and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix III. The objectives of our review were to determine (1) which federal agencies are involved in efforts to promote the use of woody biomass, and the actions they are undertaking; (2) how these federal agencies coordinate their activities related to woody biomass; and (3) what these agencies see as the primary obstacles to increasing the use of woody biomass and the extent to which they are addressing these obstacles. To get a better understanding of woody biomass issues, we initially met with officials at the Forest Service and the Office of the Chief Economist within the Department of Agriculture (USDA), the Department of Energy’s (DOE) Office of Energy Efficiency and Renewable Energy, the Department of the Interior, the Bureau of Indian Affairs, the Bureau of Land Management (BLM), the Fish and Wildlife Service, and the National Park Service. We also met with representatives from nonfederal organizations, including the Western Governors’ Association, Colorado State University, the state of New Mexico, the state of California, the Santa Ana Pueblo, the Wilderness Society, the Nature Conservancy, Public Service Company of New Mexico, and others. We also visited the Forest Service’s Forest Products Laboratory in Madison, Wisconsin; a woody biomass-heated community center in Nederland, Colorado; and a wood-fired power plant in Burney, California. We subsequently developed a structured interview guide to collect information on woody biomass utilization activities, coordination efforts, and challenges to utilizing woody biomass. Because the practical difficulties of developing and administering a structured interview guide may introduce errors—resulting from how a particular question is interpreted, for example, or from differences in the sources of information available to respondents in answering a question—we took steps in developing and administering the guide to minimize such errors. We pretested the instrument at two locations by telephone and modified it to reflect questions and comments received during the pretests. To determine whom to interview, we began with agency headquarters officials who had been identified by the agencies as points of contact for woody biomass activities. As part of these interviews, we asked for the names of additional officials—regardless of location or agency affiliation—who could provide additional information on, or insights into, woody biomass issues. We continued this expert referral technique until the referrals we received became repetitive.
In all, we used our structured interview guide to interview a nonprobability sample of 44 officials in various agencies and geographic locations. Our sample included officials at various levels within the agencies, including agency headquarters; Forest Service regional, national forest, and ranger district offices; Forest Service research facilities, including regional research stations and the Forest Products Laboratory; a BLM district office; DOE national laboratories; and others. Our structured interviews were conducted with officials from the following departments and agencies:
Department of Agriculture: the Cooperative State Research, Education, and Extension Service; the Forest Service (including the National Forest System, Research and Development, and State and Private Forestry branches); and the Natural Resources Conservation Service.
Department of Energy: the Golden Field Office; the National Energy Technology Laboratory; the National Renewable Energy Laboratory; and the Office of Energy Efficiency and Renewable Energy (including the Federal Energy Management Program, the Office of the Biomass Program, the FreedomCAR and Vehicle Technologies Program, and the Tribal Energy Program).
Department of the Interior: the Bureau of Indian Affairs; the Bureau of Land Management; the Fish and Wildlife Service; the National Park Service; and the U.S. Geological Survey.
Environmental Protection Agency.
National Science Foundation.
Office of the Federal Environmental Executive, Executive Office of the President.
Office of Science and Technology Policy, Executive Office of the President.
We also contacted officials from the Departments of Commerce and Transportation, who told us their departments have no activities related to woody biomass utilization. To collect information on federal agency woody biomass utilization activities, we used our structured interview guide to ask officials to identify individuals or organizations responsible for biomass utilization activities within their agencies and to identify other federal agencies involved in such activities. We also asked them to provide information about the activities their agencies had under way as well as policies, strategic plans, and goals related to woody biomass. We also reviewed agency policies, strategic plans, and other documents; federal and nonfederal studies regarding technological, economic, and other issues related to woody biomass utilization; and pertinent laws and other documents. To corroborate the information we gathered through interviews, we compared interviewees’ responses with other information we reviewed. Because the documentary evidence we reviewed generally agreed with the information provided by key agency officials involved in woody biomass efforts, we believe the data are sufficiently reliable to be used in providing descriptive information on federal agency woody biomass utilization activities. To determine how agencies coordinate their woody biomass activities, we asked officials to provide information on individuals or organizations responsible for coordinating activities within their agencies and those responsible for coordinating activities involving other agencies, as well as on the types of formal and informal activities they undertook. We also reviewed agency documentation regarding coordination issues, including draft and final coordinating team charters and notes from coordinating team meetings. We then compared the information provided by agency officials with this documentation.
Because the documentary evidence we reviewed generally agreed with the information provided by key agency officials involved in woody biomass efforts, we believe the data are sufficiently reliable to be used in providing descriptive information on agency woody biomass coordination efforts. To obtain information on obstacles that federal agencies face in their efforts to increase the use of woody biomass, we asked agency officials to identify and provide their opinions on the major obstacles to increasing the use of woody biomass, describe agency efforts that target the obstacles they identified, and discuss additional steps they believe are necessary to address these obstacles. Because we asked only for opinions about obstacles to woody biomass utilization and additional steps needed to overcome them, we made no attempt to corroborate these responses. To corroborate responses regarding agency efforts to target the obstacles identified, we compared interviewees’ responses with the documentary evidence we gathered regarding the agencies’ woody biomass utilization activities. Because the documentary evidence we reviewed generally supported the information provided by interviewees, we believe the data are sufficiently reliable to be used in providing information about the extent to which the agencies are addressing these obstacles. We performed our work from June 2004 through March 2005 in accordance with generally accepted government auditing standards. In addition to those named above, James Espinoza, Steve Gaty, Richard Johnson, and Judy Pagano made key contributions to this report.
In an effort to reduce the risk of wildland fires, many federal land managers--including the Forest Service and the Bureau of Land Management (BLM)--are placing greater emphasis on thinning forests and rangelands to help reduce the buildup of potentially hazardous fuels. These thinning efforts generate considerable quantities of woody material, including many smaller trees, limbs, and brush--referred to as woody biomass--that currently have little or no commercial value. GAO was asked to determine (1) which federal agencies are involved in efforts to promote the use of woody biomass, and actions they are undertaking; (2) how these agencies are coordinating their activities; and (3) what agencies see as obstacles to increasing the use of woody biomass, and the extent to which they are addressing these obstacles. Most woody biomass utilization activities are implemented by the Departments of Agriculture (USDA), Energy (DOE), and the Interior, and include awarding grants to businesses, schools, Indian tribes, and others; conducting research; and providing education. Most of USDA's woody biomass utilization activities are undertaken by the Forest Service and include grants for woody biomass utilization, research into the use of woody biomass in wood products, and education on potential uses for woody biomass. DOE's woody biomass activities focus on research into using the material for renewable energy, while Interior's efforts consist primarily of education and outreach. Other agencies also provide technical assistance or fund research activities. Federal agencies coordinate their woody biomass activities through formal and informal mechanisms. Although the agencies have established two interagency groups to coordinate their activities, most officials we spoke with emphasized informal communication--through e-mails, participation in conferences, and other means--as the primary vehicle for interagency coordination. To coordinate activities within their agencies, DOE and Interior have formal mechanisms--DOE coordinates its activities through its Office of Energy Efficiency and Renewable Energy, while Interior and BLM have appointed officials to oversee, and have issued guidance on, their woody biomass activities. In contrast, while the Forest Service recently issued a woody biomass policy, it has not assigned responsibility for overseeing and coordinating its various woody biomass activities, potentially leading to fragmented efforts and diluting the impact of these activities. The obstacles to using woody biomass cited most often by agency officials were the difficulty of using woody biomass cost-effectively and the lack of a reliable supply of the material; agency activities generally are targeted toward addressing these obstacles. Some officials told us their agencies are limited in their ability to address these obstacles and that incentives--such as subsidies and tax credits--beyond the agencies' authority are needed. However, others disagreed with this approach for a variety of reasons.
Transitional Medicaid assistance offers families moving from cash assistance to employment the opportunity to maintain health insurance coverage under Medicaid, a joint federal-state health insurance program. Medicaid spent about $216 billion in fiscal year 2001 on coverage for certain low-income individuals. Transitional Medicaid assistance provides certain families losing Medicaid as a result of employment or increased income with up to 1 year of Medicaid coverage. Families moving from cash assistance to work are entitled to an initial 6 months of Medicaid coverage without regard to the amount of their earned income, and 6 additional months of coverage if family earnings, minus child care costs, do not exceed 185 percent of the federal poverty level. To qualify for either 6-month period, a family must have received Medicaid in 3 of the 6 months immediately before becoming ineligible as a result of increased income. When federal welfare reform was enacted in 1996, states implemented a variety of initiatives intended to help families move from welfare to the workforce. Welfare reform provided states additional flexibility in helping cash assistance recipients to both find work and achieve family independence. As a result, states have expanded and intensified their provision of work support services such as those for job search, job placement, and job readiness. Many individuals in this population had low skills and faced a number of barriers to maintaining work and independence. For example, our work has shown that factors such as limited English proficiency, poor health, and the presence of a disability were some of the factors that affected the extent to which former cash assistance recipients were able to find and keep employment. Maintaining health insurance coverage is important to persons entering the workforce because there are important adverse health and financial consequences to living without health insurance. The availability of health insurance enhances access to preventive, diagnostic, and treatment services as well as provides financial security against potential catastrophic costs associated with medical care. Research has demonstrated that uninsured individuals are less likely than individuals with insurance to have a usual source of care, are more likely to have difficulty in accessing health care, and generally have lower utilization rates for all major health care services. Uninsured individuals are more likely than those insured to forgo services such as periodic check-ups and preventive services, well-child visits, prescription drugs, dental care, and eyeglasses. As a result, individuals not covered by health insurance may need acute, costly medical attention for conditions that might have been preventable or minimized with early detection and treatment. Limitations in private sources of coverage underscore the importance of transitional Medicaid assistance as an option for those moving from cash assistance to employment. Private health insurance is not accessible to or affordable for everyone. Although most working Americans and their families obtain health insurance through employers, many workers do not have coverage because their employers do not offer it or the coverage offered is limited or unaffordable. Lack of insurance is more common among certain types of workers, employers, and industries and may disproportionately affect individuals transitioning from cash assistance to work.
For example, individuals who work part-time or are employed in low-wage jobs are less likely to have access to affordable employer-sponsored coverage. Furthermore, those who do not have employer-sponsored coverage may find alternative sources of coverage, such as the individual insurance market, expensive or altogether unavailable. Without continued access to Medicaid, some of these individuals, who are often in low-wage jobs, will have limited or no access to alternative coverage and could end up uninsured. Employment-based coverage is the primary means for nonelderly Americans to obtain health insurance, and over two-thirds of nonelderly adults obtained their coverage through an employer in 2000. However, a significant number of workers do not have health insurance because either their employers do not offer it or they choose not to purchase it. In 2000, 30 million nonelderly adults were uninsured, even though 75 percent worked for some period during the year. (See fig. 1.) Lack of insurance coverage is more common among certain types of workers, employers, and industries. Part-time employees and employees of small firms (fewer than 10 employees) are more likely to be uninsured than employees who work full-time or for a large company. Individuals working in certain industries are less likely to be offered health insurance. For example, in 1999, more than 30 percent of workers in the construction, agriculture, and natural resources (for example, mining, forestry, and fisheries) industries were uninsured, as were about 25 percent of workers in wholesale or retail trade. In contrast, 10 percent or less of workers in the finance, insurance, real estate, and public employment sectors were uninsured. These patterns may disproportionately affect individuals leaving cash assistance because they often work in low-wage jobs, part-time, or in industries such as retail that often do not provide health coverage. Young adults, aged 18 to 24, are more likely than any other age group to be uninsured, largely because certain characteristics of their transition to the workforce—working part-time or for low wages, changing jobs frequently, and working for small employers—make them less likely to be eligible for employer-based coverage. Among those aged 18 to 24, 27 percent were uninsured, and among those aged 25 to 34, 21 percent were uninsured in 2000. (See fig. 2.) Even when employer-sponsored coverage is available, its costs may be prohibitive or its benefits very limited. Employer-sponsored health plans may not subsidize coverage for dependents, may restrict or exclude certain benefits, or may subject participants to out-of-pocket costs either through premium contributions or cost-sharing provisions that low-wage workers may find unaffordable. For example, a 2001 survey by Mercer/Foster Higgins found that, on average, large employers (500 or more employees) require employees enrolled in preferred provider organizations (PPO) to contribute $56 each month for employee-only coverage, or $191 each month for family coverage. For lower-wage workers, such as individuals leaving cash assistance and entering the workforce, even coverage that is affordable for a worker may be too expensive for covering the rest of the family members. Those without access to employer-sponsored coverage may look to the individual insurance market to obtain coverage, and in 2000, 5 percent of nonelderly Americans (or 12.6 million individuals) relied on individual health insurance as their only source of coverage.
However, restrictions on who may qualify for coverage and the premium prices charged can have direct implications for consumers. For example, depending on their health status and demographic characteristics such as age, gender, and geographic location, individuals in the majority of states may be denied coverage in the private insurance market or have only limited benefit coverage available to them. In addition, while all members of an employer-sponsored group health plan typically pay the same premium for employment-based insurance regardless of age or health status, in most states individual insurance premiums are higher for older or sicker individuals than for younger or healthier individuals, potentially making this option unaffordable. For example, a recent study examined individual insurers’ treatment of applicants with certain preexisting health conditions, such as hay fever. The study of insurers in eight localities found that for applicants with hay fever, 8 percent would decline coverage; 87 percent would offer coverage with a premium increase, benefit limit, or both; and 5 percent would offer full coverage at the standard rate. Cost differences are often exacerbated by the fact that individuals must absorb the entire cost of their health coverage, whereas employers usually pay for a substantial portion of their employees’ coverage. Because of limitations in the availability of private insurance—especially for low-paid, part-time workers and those in certain industry sectors that often characterize jobs available to individuals moving from cash assistance to work—transitional Medicaid assistance is an important option for health insurance coverage. Individuals with lower incomes have a much higher than average probability of being uninsured. (See fig. 3.) Typically, former welfare recipients entering the workforce work part-time or in low-wage jobs that are less likely to provide health coverage or only provide coverage at a prohibitive cost. For example, we noted in our 1999 report on states’ experiences in implementing transitional Medicaid assistance that one state found that out of nearly 1,600 former welfare recipients surveyed, 43 percent of the heads of households worked fewer than 32 hours per week and did not have health insurance, and 32 percent held low-wage jobs, such as in retail stores, hotels, restaurants, and health care establishments. In addition, although some employers of former cash assistance recipients may not offer health insurance, numerous studies have shown that a significant number of these individuals have access to employer coverage but choose not to accept it. For example, a recent study showed that although about 50 percent of individuals transitioning from cash assistance to employment had access to employer coverage, only about one-third opted to participate in the employer-sponsored plan. The relatively low “take-up” rate is due largely to the high costs of many employer health plans. Transitioning workers, who commonly earn between $7 and $8 an hour, may simply be unable to afford their share of the premium, since their annual earnings range from 73 percent to 111 percent of the federal poverty level. (See table 1.)
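The affordability squeeze behind these figures is easy to quantify. In the minimal Python sketch below, the $7 to $8 hourly wages and the $191 monthly family premium come from the figures cited above, while the 2,080-hour work year and the poverty guideline are labeled, illustrative assumptions (actual guidelines vary by year and family size):

# Rough affordability arithmetic behind the earnings comparison above.
# The $7-$8 wages and the $191 family premium are from this statement;
# HOURS_PER_YEAR and POVERTY_GUIDELINE are illustrative assumptions.

HOURS_PER_YEAR = 40 * 52       # assumes full-time, year-round work
POVERTY_GUIDELINE = 15_020     # hypothetical guideline for a family of three

for wage in (7.00, 8.00):
    earnings = wage * HOURS_PER_YEAR
    print(f"${wage:.2f}/hr -> ${earnings:,.0f}/yr, "
          f"{100 * earnings / POVERTY_GUIDELINE:.0f}% of the assumed guideline")

# Share of gross pay consumed by the average large-employer PPO family
# premium cited above ($191/month):
shares = [100 * 12 * 191 / (w * HOURS_PER_YEAR) for w in (7.00, 8.00)]
print(f"Family premium share of gross pay: {min(shares):.0f}%-{max(shares):.0f}%")

Even under these favorable assumptions, the family premium absorbs roughly a seventh of gross pay, which is consistent with the low take-up rates described above.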
While the Medicaid statute provides families moving from welfare to work with up to 12 months of transitional Medicaid coverage, we have reported that certain states had obtained waivers from HCFA to extend the length of coverage provided, and that the share of eligible families that actually received this entitlement varied significantly by state. States offered from 1 to 3 years of transitional Medicaid assistance in 1999. In the several states that were able to provide data on participation in transitional Medicaid assistance, we found that participation rates among newly working Medicaid beneficiaries ranged from 4 to 94 percent. Several states had made efforts to facilitate beneficiaries’ participation in transitional Medicaid. For example, nine states reported developing outreach and education materials to inform families and eligibility determination workers about transitional Medicaid assistance. While such approaches helped make transitional Medicaid more available, beneficiaries’ failure to report income as required often resulted in their losing eligibility after the first 6 months. States’ implementation of transitional Medicaid coverage varied, resulting in differing lengths of time for which coverage was provided and differing rates of family participation. As of 1999, the most recent year for which national data were available, 10 states—Arizona, Connecticut, Delaware, Nebraska, New Jersey, Rhode Island, South Carolina, Tennessee, Utah, and Vermont—provided over 1 year of coverage, while the remaining states provided 1 year of coverage. (See fig. 4.) In the several states that were able to provide such data, transitional Medicaid participation rates ranged from about 4 percent of the families moving from cash assistance in one state to 94 percent of such cases in another. However, low participation rates in transitional Medicaid assistance did not always indicate that families had lost Medicaid coverage altogether. For example, officials in the state with a 4 percent participation rate said that most families losing cash assistance were still enrolled in Medicaid through other eligibility categories for low-income families. We found that several states had initiatives in place to facilitate beneficiaries’ access to transitional Medicaid assistance. The following are examples of such initiatives. Nine states reported developing specific materials regarding transitional Medicaid assistance in easy-to-understand language for eligibility determination workers and beneficiaries. One state revised its computer systems so that eligible families leaving cash assistance due to employment were automatically transferred to transitional Medicaid assistance coverage. In addition, this state’s eligibility workers randomly contacted families who were leaving cash assistance to determine their health insurance status and to ensure that they obtained the additional months of Medicaid coverage for which they were eligible. As a result of this state’s efforts, about 70 percent of the families leaving cash assistance or Medicaid received transitional Medicaid coverage. Officials in three other states encouraged increased participation in transitional Medicaid assistance by contacting families with closed cash assistance cases to determine whether these families had obtained the additional months of Medicaid coverage if so entitled. One of these states, which also provided 24 months of transitional Medicaid assistance, reported that 77 percent of eligible families were receiving this benefit.
However, even with such successful enrollment efforts, many families did not receive the full transitional Medicaid assistance benefits because they failed to periodically report their income as required. The Medicaid statute requires that beneficiaries report their income three times during the 12 months of transitional Medicaid assistance: once in the first 6-month period and twice in the second 6-month period. Failure to report income status in either of these 6-month periods results in termination of transitional Medicaid benefits. In 1999, we reported that families’ failure to periodically submit required income reports often resulted in their not receiving transitional Medicaid coverage for the full period of eligibility. For example, officials in three states we reviewed told us that families typically received only 6 months of transitional Medicaid, generally because they failed to submit the required income reports—and not because of a change in income that made them ineligible for transitional Medicaid. In contrast, the state that had a 94 percent participation rate for transitional Medicaid offered coverage for 24 months and had received HCFA approval to waive the periodic income-reporting requirements. Overall, we found that states that waived income-reporting requirements reported higher participation rates than states that did not. In implementing public programs such as Medicaid, difficult trade-offs often exist between ease of enrollment for eligible individuals and program integrity efforts to ensure that benefits are provided only to those who are eligible. The experience of some states in easing statutory periodic income-reporting requirements proved successful in increasing participation for eligible beneficiaries. In view of concerns that beneficiary reporting requirements were limiting the use of the transitional Medicaid benefit, HCFA proposed legislation to eliminate beneficiary reporting requirements for the full period of eligibility (up to 1 year). To date, no action has been taken on this proposal. In our earlier report, we suggested that the Congress may wish to consider allowing states to lessen or eliminate periodic income-reporting requirements for families receiving transitional Medicaid assistance, provided that states offer adequate assurances that the benefits are extended to those who are eligible. Precedent for a full year of coverage has been provided in other aspects of the Medicaid program. For example, the Balanced Budget Act of 1997 allowed states to guarantee a longer period of Medicaid coverage for children, such as 12 months, regardless of changes in a family’s financial status. As of July 2000, 14 states had implemented this option. A similar approach could facilitate uninterrupted health insurance coverage for families that are moving from cash assistance to the workforce. Transitional Medicaid assistance can play an important role in helping individuals move successfully from cash assistance to employment, thus further advancing the goals of welfare reform. Without access to Medicaid coverage, these individuals, who are often in low-wage jobs, might have limited or no alternative health coverage and join the ranks of the uninsured. While our earlier work demonstrated that states varied in the extent to which families were participating in transitional Medicaid assistance, states that worked to minimize obstacles—particularly by reducing or eliminating income-reporting requirements—had higher participation rates.
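The reporting rule described above can be modeled in a few lines. The Python sketch below is a hypothetical illustration, assuming report due dates in the 4th, 7th, and 10th months, one reading of the statutory schedule; the testimony itself specifies only one report in the first 6-month period and two in the second.

# Hypothetical model of the income-reporting rule described above:
# three reports over 12 months, and a missed report ends coverage.
# The 4th/7th/10th-month due dates are an assumption for illustration.

REPORT_DUE_MONTHS = (4, 7, 10)

def months_of_coverage(reports_filed):
    """Return how many of the 12 transitional months a family keeps."""
    for month in range(1, 13):
        if month in REPORT_DUE_MONTHS and month not in reports_filed:
            return month - 1   # coverage ends when a required report is missed
    return 12

print(months_of_coverage({4, 7, 10}))  # 12: full benefit
print(months_of_coverage({4}))         # 6: the pattern state officials described
print(months_of_coverage(set()))       # 3: coverage lost in the first period

Under this model, a single missed filing, not a change in income, is enough to cut the benefit in half, which is the pattern officials in the three states described.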
Removing periodic reporting requirements would help further increase the use of transitional Medicaid assistance, provided that sufficient safeguards remained in place to ensure that only qualified individuals receive the benefits.

Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee may have.

For more information regarding this testimony, please contact William J. Scanlon at (202) 512-7114 or Carolyn L. Yocom at (202) 512-4931. Susan Anthony, Karen Doran, JoAnn Martinez-Shriver, and Behn Miller made key contributions to this statement.
Welfare reform significantly changed federal policy for low-income families with children and established a five-year lifetime limit on cash assistance. Welfare reform also extended transitional Medicaid assistance through 2001. States have implemented various initiatives to help families move from cash assistance to the workforce, including some enhancements to transitional Medicaid. These initiatives likely helped to cut cash assistance caseloads by more than half from 1996 through mid-2001. Low-wage or part-time jobs--which are common for newly working individuals--often do not come with affordable health insurance, thus making transitional Medicaid coverage an important option. The implementation of transitional Medicaid assistance varied across the 21 states that GAO reviewed. State practices enhanced beneficiaries' ability to retain Medicaid coverage. However, many families did not receive their full transitional Medicaid assistance benefits because they failed to report their income three times during the 12-month period of coverage. Amending the Medicaid statute to provide states with greater flexibility to ease income-reporting requirements, as has been done for other aspects of the Medicaid program, could facilitate uninterrupted health insurance coverage for families moving from cash assistance to the workforce.
Federal operations and facilities have been disrupted by a range of events, including the terrorist attacks on September 11, 2001; the Oklahoma City bombing; localized shutdowns due to severe weather conditions, such as the closure of federal offices in the Washington, D.C., area in September 2003 due to Hurricane Isabel; and building-level events, such as asbestos contamination at the Department of the Interior's headquarters. Such disruptions, particularly if prolonged, can lead to interruptions in essential government services. Prudent management, therefore, requires that federal agencies develop plans for dealing with emergency situations, including maintaining services, ensuring proper authority for government actions, and protecting vital assets.

Until relatively recently, continuity planning was generally the responsibility of individual agencies. In October 1998, Presidential Decision Directive (PDD) 67 identified the Federal Emergency Management Agency (FEMA)—which is responsible for responding to, planning for, recovering from, and mitigating against disasters—as the executive agent for federal COOP planning across the federal executive branch. FEMA was an independent agency until March 2003, when it became part of the Department of Homeland Security (DHS), reporting to the Under Secretary for Emergency Preparedness and Response. Under PDD 67, its responsibilities include

● formulating guidance for agencies to use in developing viable plans;
● coordinating interagency exercises and facilitating interagency coordination, as appropriate; and
● overseeing and assessing the status of COOP capabilities across the executive branch.

According to FEMA officials, the directive also required that agencies have COOP plans in place by October 1999. In July 1999, FEMA first issued Federal Preparedness Circular (FPC) 65, guidance to the federal executive branch for use in developing viable and executable contingency plans that facilitate the performance of essential functions during any emergency. Specifically, the guidance

● established the identification of essential functions as the basis for COOP planning;
● defined essential functions as those that enable agencies to provide vital services, exercise civil authority, maintain safety, and sustain the economy during an emergency;
● defined the elements of a viable continuity of operations capability according to eight topic areas: identification of essential functions; development of plans and procedures; identification of orders of succession; delegations of authority; provision for alternate facilities; provision of interoperable communications; availability of vital records; and conduct of regular tests, training, and exercises; and
● set up an interagency working group to coordinate continuity planning.

FPC 65 applies to all federal executive branch departments and agencies at all levels, including locations outside Washington, D.C. It directed the heads of each agency to assume responsibilities including

● developing, approving, and maintaining agency continuity plans;
● developing a COOP multiyear strategy and program management plan; and
● conducting tests and training of agency continuity plans, contingency staffs, and essential systems and equipment.

At your request, we previously reported on federal agency headquarters contingency plans in place in October 2002. At that time, we determined that most agencies identified at least one function as essential, but the functions varied in number and apparent importance.
Furthermore, while 20 of 23 agencies had documented COOP plans, none addressed all the guidance in FPC 65. We identified inadequate guidance and oversight as factors contributing to these weaknesses and recommended that DHS (1) ensure that agencies without plans develop them, (2) ensure that agencies address weaknesses in their plans, and (3) conduct assessments of plans that included an independent verification of agency-provided data and an assessment of identified essential functions. In response to these recommendations, DHS reported in July 2004 that it (1) was developing an online system to collect data from agencies on the readiness of their continuity plans that would evaluate compliance with the guidance, (2) had conducted an interagency exercise, and (3) had developed a training program for agency continuity planning managers. DHS added that it planned to conduct an independent validation of each agency's self-assessment after deployment of the readiness system.

Based on an analysis of published literature and in consultation with experts on continuity planning, we identified eight sound practices related to essential functions that organizations should use when developing their COOP plans. These practices, listed in table 1, constitute an ongoing process that includes identifying and validating essential functions.

With regard to COOP plans in place on May 1, 2004, many of the 23 agencies reported using some of the sound practices in developing plans, including identifying and validating essential functions, but few provided documentation sufficient for us to validate their responses. This indicates that agencies—although aware of these practices—may not have followed them thoroughly or effectively. For example, it is unlikely that a thorough risk analysis of essential functions could be performed without being documented. Further, the essential functions identified by agencies varied widely: the number of functions identified in each plan ranged from 3 to 538. In addition, the apparent importance of the functions was not consistent. For example, a number of essential functions were of clear importance, such as

● "conduct payments to security holders" and
● "carry out a rapid and effective response to all hazards, emergencies, and disasters."

Other identified functions appeared vague or of questionable importance:

● "champion decision-making decisions" and
● "provide advice to the Under Secretary."

The high level of generality in FEMA's guidance on essential functions contributed to the inconsistencies in agencies' identification of these functions. As was the case during our 2002 review, the version of FPC 65 in place on May 1, 2004, defined essential functions as those that enable agencies to provide vital services, exercise civil authority, maintain safety, and sustain the economy during an emergency. The document did not, however, define a process that agencies could use to select their essential functions. In June 2004, FEMA released an updated version of FPC 65, providing additional guidance to agencies on each of the topics covered in the original guidance, including an annex on essential functions.
The annex lists several categories that agencies must consider when determining which functions are essential, including

● functions that must continue with minimal interruption or cannot be interrupted for more than 12 hours without compromising the organization's ability to perform its mission and
● functions assigned to the agency by federal law or by order of the President.

The new guidance goes on to outline steps addressing the prioritization of selected functions as well as the identification of resources necessary to accomplish them and of interdependencies with other agencies.

On January 10, 2005, the Assistant to the President for Homeland Security issued a memorandum outlining additional guidance on essential functions and initiated a process to identify and validate agency-level functions. The memorandum noted that in the past many departments and agencies had had difficulty clearly identifying and articulating their essential functions. It attributed this difficulty, in part, to the lack of a defined set of national-level essential functions to guide agency continuity planning, resulting in multiple efforts to develop agency essential functions for different specific purposes (e.g., planning for Year 2000 computer continuity, information technology planning, and critical infrastructure planning). Further, it noted that departments and agencies sometimes do not distinguish between a "function" and the specific activities necessary to perform the function. To address these issues, the memorandum identified eight National Essential Functions that are necessary to lead and sustain the country during an emergency and, therefore, must be supported through continuity capabilities. Table 2 lists the eight National Essential Functions.

The memorandum asked major agencies to identify their Priority Mission Essential Functions—those functions that must be performed to support or implement the National Essential Functions before, during, and in the immediate aftermath of an emergency. The document stated that, generally, priority functions must be uninterrupted or resumed during the first 24 to 48 hours after the occurrence of an emergency and continued through full resumption of all government functions. When identifying their functions, agencies were asked to also identify the National Essential Function that each priority function supports, the time in which the priority function must be accomplished, and the partners necessary to perform the priority function. The memorandum asked agencies to reply by February 18, 2005.

The memorandum emphasized the need for the involvement of senior-level agency officials, calling for each agency's functions to be first approved by an official with agencywide responsibilities. The memorandum then laid out a process by which the functions would be validated by an interagency group within the Homeland Security Council. According to FEMA officials, two agencies' essential functions have already been reviewed, and there are plans to complete all agency reviews by the end of the summer. The validated functions would then be used to support development of a new continuity policy and would be used to develop and implement improved requirements for capabilities, inform the annual budget process, establish program metrics, and guide training and exercises and other continuity program activities. The memorandum did not set any time frames for these later steps.
Together, FEMA's revised guidance and the guidance from the White House significantly address the best practices that we identified. For example:

● Both documents call for agencies to identify dependencies necessary to perform the functions.
● FEMA's guidance calls for agencies to prioritize their essential functions and identify the resources necessary to perform them.
● The White House guidance calls on agencies to identify the recovery time necessary for each function and outlines a process to validate the initial list of functions.

If implemented effectively, the new guidance and the review process conducted by the White House could result in more consistent identification of essential functions across the executive branch. The functions could then form the basis for better plans for continuing the most critical functions following a disruption to normal operations. However, without time frames for completing the outlined process, it is unclear when the expected improvement will occur.

When compared with our prior assessment, agency continuity plans in place on May 1, 2004, showed improved compliance with FEMA's guidance in two ways:

● One agency and nine component agencies that did not have documented continuity plans in place at the time of our 2002 review had put such plans in place by May 1.
● For each of the topic areas outlined in FPC 65, agencies generally made progress in increasing compliance.

However, two major agencies did not have plans in place on May 1, 2004. As of April 2005, one of these two had finalized its plan. In addition, after analyzing these plans, we found that none in place on May 1 followed all of FEMA's guidance. Of the eight topic areas identified in FPC 65, these 45 COOP plans generally complied with the guidance in two areas (developing plans and procedures and order of succession); generally did not comply in one area (tests, training, and exercises); and showed mixed compliance in the other five areas. Specifically, when examining the governmentwide results of our analysis of the eight planning topics outlined in FPC 65, we determined the following:

● Essential functions. Most agency plans identified at least one function as essential and identified which functions must be continued under all circumstances. However, less than half the COOP plans identified interdependencies among the functions, established staffing and resource requirements, or identified the mission-critical systems and data needed to perform the functions.
● Plans and procedures. Most plans followed the guidance in this area, including establishing a roster of COOP personnel, activation procedures, and the appropriate planning time frame (12 hours to 30 days).
● Orders of succession. All but a few agency plans identified an order of succession to the agency head. Most plans included orders of succession for other key officials or included officials outside of the local area in the succession to the agency head. Many plans did not include the orders of succession in the agency's vital records or document training for successors on their emergency duties.
● Delegations of authority. Few plans adequately documented the legal authority for officials to make policy decisions in an emergency.
● Alternate facilities. Most plans documented the acquisition of at least one alternate facility, and many included alternate facilities inside and outside of the local area.
However, few plans documented that agencies had sufficient space for staff, pre-positioned equipment, or appropriate communications capabilities at their alternate facilities.

● Redundant emergency communications. Most plans identified at least two independent media for voice communication. Less than half of the plans included adequate contact information, and few provided information on backup data links.
● Vital records. Less than half of the plans fully identified the agency's vital records. Few plans documented the locations of all vital records or procedures for updating them.
● Tests, training, and exercises. While many agencies documented some training, very few agencies documented that they had conducted tests, training, and exercises at the recommended frequency.

During our prior review of 2002 plans, we found that insufficient oversight by FEMA contributed to agencies' lack of compliance with the guidance. Specifically, we noted that FEMA had not conducted an assessment of agency contingency plans since 1999. As a result, we recommended that it conduct assessments of agency continuity plans that include independent verification of agency-reported information. In response, DHS reported that it was developing a readiness reporting system to assist it in assessing agency plans and planned to verify the information reported by the agencies.

Although neither of these planned actions was completed by May 1, 2004, FEMA has made subsequent efforts to improve its oversight. According to FEMA officials, development of the readiness reporting system was completed in March 2005, and the system is expected to be operational and certified by October 2005, at which time there will be seven locations (including two FEMA locations) using the system. They added that once the system becomes fully operational, agencies will be required to periodically provide updated information on their compliance with FEMA's guidance. These officials also reported that the agency had taken additional steps to improve readiness. Specifically, they stated that the interagency exercise held in mid-May 2004 successfully activated and tested agency plans; they based this assessment on reports provided by the agencies. Furthermore, FEMA has begun planning for another interagency exercise in 2006. In addition, as of April 2005, FEMA had provided training to 682 federal, state, and local officials from 30 major federal departments and agencies and 209 smaller agencies—including state, local, and tribal entities. FEMA officials stated that because of these additional successful efforts to improve readiness, they no longer planned to verify agency-reported readiness data.

While the revised guidance, recent exercise, and ongoing training should help ensure that agency continuity plans follow FEMA's guidance, FEMA's ongoing ability to oversee agency continuity planning activities will be limited by its reliance on agency-provided data. Without verification of such data, FEMA lacks assurance that agency plans are compliant and that the procedures outlined in those plans will allow agencies to effectively continue to perform their essential functions following a disruption.

Telework, also referred to as telecommuting or flexiplace, has gained widespread attention over the past decade in both the public and private sectors as a human capital flexibility that offers a variety of potential benefits to employers, employees, and society.
In a 2003 report to Congress on the status of telework in the federal government, the Director of the Office of Personnel Management (OPM) described telework as "an invaluable management tool which not only allows employees greater flexibility to balance their personal and professional duties, but also allows both management and employees to cope with the uncertainties of potential disruptions in the workplace, including terrorist threats." As we reported in April 2004, telework is an important and viable option for federal agencies in COOP planning and implementation efforts, especially as the duration of an emergency event is extended. In a July 2003 report, we defined 25 key telework practices for implementation of successful federal telework programs.

Although agencies were not required to address telework, 1 of the 21 agency continuity plans in place on May 1, 2004, documented plans to address some essential functions through telework. Two other agencies reported that they planned to use telework to fulfill their essential functions, and eight agencies reported that they planned for nonessential staff to telework during a COOP event, but their continuity plans do not specifically mention telework. However, none of the agencies that are planning to use telework during a COOP event documented that the necessary preparations had taken place. These preparations—derived from the 25 key telework practices for the development of an effective telework program—include informing and training the staff, ensuring that there is adequate technological capacity for telework, providing technological assistance, and testing the ability to telework.

In summary, Mr. Chairman, although agency COOP plans have shown improvement since our prior assessment of 2002 plans, most plans in place on May 1, 2004, continued to exhibit inconsistencies in the identification of essential functions and significant lack of compliance with FEMA's guidance. Both FEMA's revision to this guidance and a recently initiated White House effort have the potential, if effectively implemented, to help agencies better identify their essential functions and thus develop better continuity plans. However, the lack of a schedule to complete the White House effort makes it unclear when these improvements might take place. Agencies' efforts to develop continuity plans could also be aided by FEMA's efforts to develop a readiness reporting system, conduct a governmentwide exercise, and train agency COOP planners, as well as by any guidance or policies that result from the White House effort. Finally, even though FEMA's continuity planning guidance in place in May 2004 did not address telework, one agency's continuity plan at that time included plans to use telework in response to an emergency. In addition, 10 agencies reported that they planned to use telework following a COOP event, but their plans were not clearly documented.

In our report, we made recommendations aimed at helping to ensure that agencies are adequately prepared to perform essential functions following an emergency. We recommended that the Assistant to the President for Homeland Security establish a schedule for the completion of the recently initiated effort to validate agency essential functions and refine federal continuity of operations policy.
We also recommended that the Secretary of Homeland Security direct the Under Secretary for Emergency Preparedness and Response to

● develop a strategy for short-term oversight that ensures that agencies are prepared for a disruption in essential functions while the current effort to identify essential functions and develop new guidance is ongoing;
● develop and implement procedures that verify the agency-reported data used in oversight of agency continuity of operations planning; and
● develop, in consultation with OPM, guidance on the steps that agencies should take to adequately prepare for the use of telework during a COOP event.

In commenting on our findings and recommendations, the Under Secretary for Emergency Preparedness and Response of DHS stated that the department agreed that there has been improvement in COOP plans and attributed that improvement to a renewed emphasis by DHS and the White House. The department also agreed with the need for additional oversight and noted that FEMA had begun conducting COOP site assessments at departments and agencies to improve readiness. The Under Secretary's letter drew attention to a number of actions taken after the May 1, 2004, cutoff date for our assessment. Finally, the Under Secretary pointed out that the readiness reporting system that FEMA is developing was not intended to be a COOP plan assessment tool, but that it instead provides key officials with the ability to determine plan status in near real time. We continue to believe that it is important for FEMA to assess agency plans as part of its oversight responsibilities. Regardless of the system's intended use, we believe its capabilities, as described by FEMA, make it a valuable tool that the agency should use when exercising these responsibilities.

Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time.

For information about this testimony, please contact Linda D. Koontz at (202) 512-6240 or at [email protected], or James R. Sweetman at (202) 512-3347 or [email protected]. Other key contributors to this testimony include Barbara Collier, Mike Dolak, Nick Marinos, and Jessica Waselkow.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To ensure that essential government services are available in emergencies, federal agencies are required to develop continuity of operations plans. According to guidance from the Federal Emergency Management Agency (FEMA), which is responsible for providing guidance for and assessing agency continuity plans, a key element of a viable capability is the proper identification of essential functions. GAO previously reported on agency continuity plan compliance and determined that a number of agencies and their components did not have continuity plans in place on October 1, 2002, and that those that were in place generally did not comply with FEMA's guidance. GAO was asked to testify on its most recent work in continuity planning, which is discussed in a separate report being released today (GAO-05-577). In that report, GAO reviewed the extent to which (1) major federal agencies used sound practices to identify and validate their essential functions, (2) agencies had made progress since 2002 in improving compliance with FEMA guidance, and (3) agency continuity of operations plans addressed the use of telework arrangements (in which work is performed at an employee's home or at a work location other than a traditional office) during emergencies.

Many of the 23 agencies that GAO reviewed reported using sound practices for identifying and validating essential functions, but few provided documentation sufficient for GAO to confirm their responses. (GAO identified these sound practices based on published literature and in consultation with experts on continuity planning.) Agency responses indicate that—although aware of the practices—agencies may not have followed them thoroughly or effectively. Further, the essential functions identified by agencies varied widely: the number of functions identified in each plan ranged from 3 to 538 and included ones that appeared to be of secondary importance. The absence in FEMA's guidance of specific criteria for identifying essential functions contributed to this condition. Subsequent guidance significantly addresses the sound practices that GAO identified. Also, the White House has begun a process to improve continuity planning. If this guidance and process are implemented effectively, they could lead to improved identification of essential functions in the executive branch.

As of May 1, 2004, agencies had made progress in improving compliance with FEMA guidance, but significant weaknesses remained. Agencies that had plans in place in both years showed significant improvement in the area of tests, training, and exercises. However, although some improvement occurred for other planning areas, important weaknesses remained: for example, 31 of 45 plans did not fully identify mission-critical systems and data necessary to conduct essential functions. Inadequate oversight by FEMA contributed to the level of weaknesses in agency continuity plans. FEMA plans to improve oversight using an online readiness reporting system, which it plans to have fully operational later this year, and it has already taken other steps to help agencies improve their plans, such as conducting an interagency exercise. However, FEMA does not plan to verify the readiness information that agencies will report in the system. Finally, even though FEMA's continuity planning guidance in place in May 2004 did not address telework, one agency's continuity plan at that time included plans to use telework in response to an emergency.
In addition, 10 agencies reported that they planned to use telework following a COOP event, but their plans were not clearly documented. In its report, GAO made recommendations aimed at helping to improve continuity planning. These included establishing a schedule for the completion of recently initiated efforts, developing a strategy for short-term oversight in the meantime, and developing and implementing procedures that verify the agency-reported data used in oversight of agency continuity of operations planning. The report includes comments from FEMA. In commenting, FEMA agreed that there has been improvement in COOP plans and that additional oversight is needed.
According to the Bureau of Justice Statistics, local jurisdictions (i.e., counties and municipalities) administer criminal courts and permit pretrial detention of defendants who are accused of serious offenses and deemed dangerous, in order to prevent them from committing crimes before trial. Many local jurisdictions have begun adopting alternatives to incarceration, which are intended to improve public health and safety while reducing costs. Such alternatives generally represent a shift in emphasis away from prosecuting those associated with lower-level crimes toward providing treatment for underlying mental health or substance abuse disorders, and they include programs for individuals in jail or court who screen positive for mental illness. Veterans treatment courts are one such alternative. These courts are typically local courts dedicated to handling criminal cases involving veterans with mental health or substance abuse problems. According to VA, veterans treatment courts share several general characteristics but vary in their specific policies and procedures because of, among other things, differences in local jurisdictions and criminal justice system practices.

Veterans treatment courts are modeled after adult drug courts, which are specialized courts that target criminal offenders who have drug addiction and dependency problems. As in adult drug courts, judges preside over veterans treatment court proceedings and monitor veterans' progress with treatment in collaboration with a team that usually includes a court coordinator, prosecutor, public defender, and probation officer. Additionally, the team includes a VJO specialist. Veterans treatment courts vary in terms of criteria for taking a case, such as the types or levels of criminal offenses.

As was the case with past generations of veterans, the transition from military life to civilian life can be challenging for Post-9/11 veterans. Most veterans are not involved with local criminal justice systems, but some veterans—particularly if their mental health, family readjustment, or other needs remain unmet—may become justice-involved. The Bureau of Justice Statistics reported that about 7 percent (about 50,000) of the total population of inmates in jail between February 2011 and May 2012 were veterans. According to the Bureau of Justice Statistics report, this estimate represents a 25 percent decrease from the number of veterans in jail in 2004. Moreover, veterans were incarcerated in jails at lower rates than nonveterans between February 2011 and May 2012, according to the Bureau of Justice Statistics.

While many veterans who served in the military have successfully readjusted to civilian life with minimal difficulties, researchers and policymakers have identified concerns about how the experience of Post-9/11 servicemembers may affect incarceration rates among these veterans. According to VA, military experience (particularly combat) has been an underlying factor in behavior that prompts a response from law enforcement, such as domestic conflicts. During the last 14 years of U.S. military operations, many servicemembers have experienced numerous deployments, which can increase the risk of developing post-traumatic stress disorder (PTSD) and traumatic brain injury. According to VA, a strong relationship exists between PTSD and substance abuse. Research has demonstrated that justice-involved veterans have high rates of mental illness, substance abuse, homelessness, and other health issues.
At the same time, some veterans may be unwilling or unable to access the supports and services they need. For example, we have previously reported that some veterans do not seek mental health treatment due to concerns about negative career outcomes, lack of understanding or awareness of treatment, and logistical challenges to accessing care. Left unaddressed, a combination of homelessness, unemployment, mental health, or substance abuse issues can place veterans experiencing a difficult transition at higher risk of committing a crime.

VA initiated the VJO Program in 2009. The mission of the program is to reduce and prevent criminal justice recidivism and homelessness among veterans by linking justice-involved veterans with appropriate supports and services. According to VA, incarceration is a strong predictor of veteran homelessness, and recidivism can limit VA's ability to provide continuous care for mental health and other issues. The VJO Program's fiscal year 2012-2016 strategic plan contains five broad strategic goals for the program as well as a number of related objectives (see table 1).

Evaluative information helps the executive branch and congressional committees make decisions about the programs they oversee; that is, evaluative information tells them whether and why a program is working well or not. We have previously reported that an important means of obtaining such information is a program performance assessment system. (See fig. 1.) A program performance assessment system is an important component of effective program management and contains three key elements:

1. Program goals communicate what the agency proposes to accomplish and allow agencies to assess or demonstrate the degree to which those desired results were achieved. Strategic goals and related objectives are long-term goals that set a general direction for a program's efforts. Performance goals are the specific results an agency expects its program to achieve in the near term.

2. Performance measures are concrete, objective, observable conditions that permit the assessment of progress made toward the agency's goals. Performance measures show the progress the agency is making in achieving performance goals.

3. Program evaluations are individual systematic studies using performance measures and other information to answer specific questions about how well a program is meeting its objectives.

The VJO Program operates through VA medical centers. VA's central office provides each VA medical center the flexibility to determine how to best respond to the needs of justice-involved veterans within the local community. More specifically, VA established broad program parameters that allow VA medical center officials to set VJO specialists' activities to meet the needs of veterans in local criminal justice systems. VA issued guidelines on the operations of the VJO Program through a series of memorandums to program officials. According to VA, the program is to serve veterans as they interact with the local criminal justice system at multiple points and settings—from initial contact with local law enforcement to release from jail. VA guidelines define a "justice-involved veteran" as a veteran who is (1) arrested by local law enforcement and who can be appropriately diverted from arrest into treatment; (2) incarcerated in a local jail, either with a pending trial or serving a sentence after a conviction; or (3) involved in adjudication or monitoring by a court. (See fig. 2.)
The Veterans Health Administration (VHA), which operates VA's health care system, administers the VJO Program. VA policy requires each of the 167 VA medical centers around the country to provide outreach to justice-involved veterans. VA prioritizes veterans charged with nonviolent crimes, but VA must consider a veteran's current legal circumstances—and not legal history alone—to determine whether the program can meet the individual veteran's needs while maintaining safety, according to VA policy. The program does not provide legal representation, nor does it accept legal custody of a veteran.

VA's central office, its Veterans Integrated Service Networks (VISN), and VA medical centers manage different aspects of program operations. VHA staff in VISNs help oversee the VJO Program and provide technical assistance to VJO specialists. VA central office officials directed each medical center to have at least one full-time VJO specialist position, most of which are funded by central office funds, according to VA. Some VA medical centers employ one VJO specialist position while others have multiple positions. VA central office officials also train new VJO specialists and manage national data collection efforts. VJO specialists report directly to VA medical center officials, typically in homelessness prevention or mental health services. According to VA central office officials, as of September 2015, VA employed 261 full-time VJO specialists, who are mostly social workers, and seven other VJO specialists who play a hybrid role between the VJO and the Health Care for Re-entry Veterans programs.

To accomplish the program mission of serving justice-involved veterans, VA tasks VJO specialists to perform three core functions: identify, assess, and link justice-involved veterans to appropriate supports and services.

Experience of Veterans Who Have Participated in the VJO Program: All of the veterans participating in our group discussions were appreciative of the VJO Program. At one location, veterans said that the VJO specialists understood their needs, educated them about available VA supports and services, assisted them in determining their eligibility, and helped them obtain VA supports and services.

An Example from a VJO Specialist: To illustrate the effect of the program on one veteran's life, VJO specialists we interviewed in one of our selected areas provided the following anecdote. A homeless veteran—who had served in Operation Enduring Freedom—was arrested for trespassing for sleeping behind a VA building. The veteran had substance abuse issues. When he appeared before a veterans treatment court, he met a VJO specialist who assessed his needs and assisted him in enrolling in VA housing and substance abuse treatment. When the veteran was stable, the VJO specialist was able to link him to permanent housing and VA vocational training. After several years of treatment and assistance, the veteran was hired by VA and purchased a home with a VA loan.

In addition to VJO specialists' own outreach, VA's Veterans Re-entry Search Service helps jail administrators identify incarcerated veterans by comparing the names of inmates with VA's list of veterans. After they identify justice-involved veterans, VJO specialists determine the veterans' treatment needs by assessing their mental health, social well-being, appearance, and attitude. VJO specialists also collect information on employment history, current housing situation, military service, and discharge status.
According to VA staff, upon completing the assessment, VJO specialists develop a treatment plan to meet the veteran's needs. A treatment plan typically includes recommendations for medical or mental health services, housing, or other services, according to VJO specialists in five of the nine selected areas. After they identify and assess justice-involved veterans, VJO specialists link veterans to VA or community supports and services. VJO specialists do not directly provide treatment to justice-involved veterans. VA staff said that once VJO specialists refer and link veterans to the appropriate supports and services, they perform follow-up visits with veterans to ensure they are receiving them. For example, VJO specialists may assist veterans in finding adequate housing or with any transportation issues. Some veterans in jails may receive one follow-up visit while others may receive more, depending on their needs, according to a VJO specialist. In contrast, VJO specialists work with veterans who participate in veterans treatment courts for 1 to 2 years, depending on the local criminal justice system and on the amount of time a veteran is required to participate in the court program.

VJO specialists refer and link veterans to VHA for health care, mental health, substance abuse treatment, or housing services, where the overseeing staff determine the type of treatment and services provided. VJO specialists also link veterans to the Veterans Benefits Administration for disability compensation, pension benefits, or vocational rehabilitation, to be determined by overseeing staff based on a veteran's eligibility, according to VA (see fig. 3). In addition, VJO specialists link veterans to community service providers when the VA-provided treatment is too far away for a veteran to participate or when a veteran is not eligible for VA health care services or benefits, according to VA.

Within these broad program parameters, a VA medical center determines the type and amount of investment it makes in serving justice-involved veterans. VJO specialists' activities vary based on whether they are working in courts, jails, or both. For example:

● Specialty courts: VJO specialists provide services to veterans in a variety of specialty courts, such as veterans treatment courts, drug courts, and mental health courts. However, VJO specialists work most often with veterans treatment courts compared to other specialty courts, according to VA central office officials.

● Local criminal justice systems: In local criminal justice systems without established veteran-focused programs, VJO specialists spend significant time attempting to develop or plan these types of programs, according to VISN officials. These activities involve but are not limited to planning veterans treatment court (or other alternative court) programs and negotiating the terms of access to conduct outreach in jail facilities. In other areas, VJO specialists work with local criminal justice officials who may be unaware of veteran-focused programs. In such cases, VJO specialists focus their work on educating local law enforcement about VA resources available to veterans in crisis and working with veterans in jails, according to two VISN officials with whom we spoke.

● Jail administrators: VJO specialists in two of the nine areas we selected for interviews worked with local jail administrators to set up a program that provides veteran-specific housing units. These housing units are a designated block of jail cells only for veterans.
Within these units, local community service organizations provide incarcerated veterans with treatment and services while they are in jail, and VJO specialists assess and develop community re-entry plans for housing, VA treatment, or other necessary services. In another area we visited, VJO specialists developed a program to help "high-risk" veterans released from jail. VJO specialists used information from the veterans' clinical assessments to determine which veterans were at high risk of re-offending, dropping out of treatment, or becoming homeless. One VJO specialist at that VA medical center said the program serves these veterans by providing weekly follow-up services within the first month of their release from jail, and offers regular follow-up services and support after the first month, based on the individual veteran's needs.

In fiscal year 2015, the VJO Program served about 46,500 justice-involved veterans, and the program has experienced steady growth in the number of veterans served, according to data from VA. During fiscal years 2012 through 2015, the number of justice-involved veterans annually served by the VJO Program increased from about 27,000 to 46,500, a 72 percent increase (see fig. 4). Justice-involved veterans who were served by the VJO Program in fiscal year 2015 had the following characteristics:

● Most were young men. Nearly 95 percent of the veterans served by the program were male, and 52 percent of justice-involved veterans were between the ages of 18 and 44.

● Many reported not working full-time during the past 3 years. Approximately 40 percent of the veterans served by the program reported working part time, working irregularly, or having been unemployed during the past 3 years. Another one-third (33 percent) reported being retired or disabled.

● Almost three-fourths reported serving during a U.S. military intervention. Of veterans who received services through the VJO Program in fiscal year 2015, 73 percent had served during a military intervention (see fig. 5).

● Two-thirds reported mental health problems and the vast majority had substance abuse problems. During the clinical assessment performed by VJO specialists, about two-thirds (68 percent) of the veterans reported one or more mental health problems, and 69 percent had substance abuse problems.

● Public order offenses were the most common type of reported criminal charges. The types of criminal offenses that veterans were charged with during fiscal year 2015 included, but were not limited to, public order offenses (33 percent), drug offenses (22 percent), property offenses (16 percent), and probation violations (12 percent).

● More than 40 percent reported either being homeless, losing a home, or living in an unstable housing environment (see fig. 6). VJO specialists we interviewed in two of the nine areas we selected said that some veterans become homeless once they leave jail because they are unable to pay their rent while incarcerated.

Additional information about the characteristics of veterans in the VJO Program is provided in appendix II.

VA has taken some steps toward evaluating the VJO Program via a longitudinal program evaluation that examines the extent to which veterans are being linked to the services for which they are referred. VA planned to obtain information through interviews with 1,500 veterans. However, after the contractor submitted an interim report that detailed problems it encountered recruiting veterans to interview as part of the study, VA central office officials terminated the contract.
As an alternative, VA plans to use administrative data to complete this evaluation. However, at the time of our review, VA central office officials were in the process of determining how to complete such a re-tooled evaluation. VA also developed a research agenda to evaluate the outcomes for veterans served by the VJO Program. For example, one research topic is to identify outcomes for veterans who participate in veterans treatment courts. In addition, officials said they plan to evaluate the extent to which veterans served by the VJO Program avoided incarceration and homelessness.

Although VA has developed five broad strategic goals for the VJO Program and taken steps to evaluate it, VA has not fully measured progress toward any of the strategic goals because it has developed neither performance goals nor performance measures, contrary to leading practices for managing programs. Performance measures focus on whether a program has achieved measurable standards. They allow agencies to monitor and report program accomplishments on an ongoing basis. VA collects ongoing information about program operations and veterans served by the program, but this information does not help VA measure its progress toward accomplishing any of the five goals outlined in its strategic plan. According to VA central office officials, VA collects some information, such as (1) the number of VJO specialists and vacancies at each VA medical center; (2) VJO specialists' workload, including the number of veterans that VJO specialists serve and their nonclinical activities, such as the number of trainings they conduct; and (3) information on veterans served in the program, such as demographic characteristics and medical histories. This information is useful for some aspects of program management. For example, VA central office officials told us that supervisors need information about the clinical and administrative workloads of VJO specialists to assess the need for additional staff and to monitor productivity. However, VA does not have a way to use the information it currently collects to compare actual program performance against expected results, or to analyze significant differences, contrary to federal standards for internal control. We have previously reported that performance measurement gives managers crucial information to identify gaps in program performance and plan any needed improvements. The information that VA currently collects does not allow VA to fully answer key questions, such as the reasons for observed performance, which approaches to program implementation are effective, and how to improve program performance.

VA central office officials cited the need to be responsive to local conditions as the main reason why they have not set or used performance goals and measures for the VJO Program. Specifically, the areas served by each VA medical center have their own unique circumstances, they said, such as having or not having a veterans treatment court, being located in an urban or rural area, or being located in an area with a large or small population of veterans. Due to these unique local conditions, the officials stated, the program needs to be flexible, and VA medical centers have discretion in determining the activities of VJO specialists. VA central office officials added that a broad measure, such as the number of veterans served by each VJO specialist per month, would not be appropriate because comparing VJO specialists could be misleading, since their work circumstances vary. For example, officials said that given that VJO specialists work in different criminal justice systems, it would not be reasonable to set a national goal for the number of veterans they reach out to each month. While some VJO specialists conduct outreach to a large number of veterans in jails, others may focus their attention on a smaller number of veterans in veterans treatment courts. Similarly, they pointed out that while some VJO specialists in VA medical centers in large urban areas may be able to serve a large number of veterans, others in rural areas may need to drive several hours to see a veteran and can only see a few in any given day.
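To make concrete the kind of comparison described above (actual performance against expected results), the following minimal sketch uses entirely hypothetical measures, targets, and results; it is illustrative only and does not reflect VA data or any VA method:

    # Minimal sketch of comparing actual performance against targets and
    # flagging significant differences. All measure names and numbers are
    # hypothetical, for illustration only.
    measures = [
        # (measure, target, actual), in percentage points
        ("veterans linked to treatment within 30 days of referral (%)", 80.0, 72.5),
        ("jails in the catchment area receiving outreach visits (%)", 90.0, 91.0),
    ]

    TOLERANCE = 5.0  # percentage points; an arbitrary threshold for illustration

    for name, target, actual in measures:
        gap = actual - target
        status = "REVIEW" if abs(gap) > TOLERANCE else "on track"
        print(f"{status}: {name}, target {target}, actual {actual}, gap {gap:+.1f}")

A comparison like this is only possible once targets exist; the arithmetic is trivial, and the missing ingredient in the VJO Program's case is the performance goals and measures themselves.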
We recognize that measuring progress toward goals within the VJO Program's decentralized service delivery model poses challenges in designing performance goals and measures. Our past work has also acknowledged the challenges in developing national performance measures when flexible programs vary in their activities so as to meet local needs. Nonetheless, one approach we have found that agencies used to successfully overcome this challenge was to develop common national measures. For example, our past work found that to assess the performance of the Expanded Food and Nutrition Education Program—a program that assists low-income families in acquiring skills to improve their family diet—the U.S. Department of Agriculture assessed local offices on common activities used by all offices. Another approach used by agencies with flexible programs was to encourage local projects to evaluate progress toward their own performance goals. For example, the National Tobacco Control Program—a program operated by the Centers for Disease Control and Prevention—has four goals connected to its mission to reduce tobacco-related diseases and death, including reducing youth tobacco use. To accomplish its goals, the agency allows states to direct their own activities. The agency provided states with guidance about identifying short-term, intermediate, and long-term outcomes, and encouraged states to assess their own individual efforts.

VA central office officials we interviewed also said that they have not established performance goals or measures because some of the outcomes that veterans experience are influenced by factors outside the program's control. For example, VJO Program-related outcomes—such as mental health recovery or criminal recidivism—can often depend on factors such as whether a community has a veterans treatment court or whether a veteran is willing to adhere to his or her treatment plan, according to VA central office officials. We have highlighted strategies in our past work that agencies can use when faced with the challenge of having limited control over external factors that can affect a program's outcomes. These strategies include

● selecting a mix of outcome goals over which the agency has varying levels of control;
● using data on external factors to statistically adjust for their effect on the desired outcome; and
● disaggregating goals for distinct target populations for which the agency has different expectations.

For example, our past work found that to measure progress toward its strategic goal of eliminating transportation-related deaths, injuries, and property damage, the National Highway Traffic Safety Administration measured an intermediate outcome—the rate of front-seat safety belt use—and an end outcome—the rate of transportation-related injuries. The agency also statistically adjusted the results of its performance measures by using the ratio of fatalities per vehicle mile driven to control for the simple fact that if more miles are driven, then more crashes are likely to result.
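To illustrate this kind of statistical adjustment with a worked example (the figures below are invented for illustration and are not NHTSA data), normalizing fatality counts by miles driven separates changes in risk from changes in exposure:

    # Sketch of an exposure adjustment: raw fatality counts can rise simply
    # because more miles are driven, so compare fatalities per 100 million
    # vehicle miles traveled (VMT) instead. All numbers are hypothetical.
    data = {
        # year: (fatalities, vehicle miles traveled)
        2003: (42_000, 2.89e12),
        2004: (42_600, 2.96e12),
    }

    for year, (fatalities, vmt) in data.items():
        rate = fatalities / (vmt / 1e8)  # fatalities per 100 million VMT
        print(f"{year}: {fatalities:,} fatalities, {rate:.2f} per 100M VMT")

In this invented example the raw count rises from one year to the next, but the exposure-adjusted rate falls (from about 1.45 to about 1.44 per 100 million miles), which is precisely the effect the adjustment is designed to reveal.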
Our past work also found that the Natural Resources Conservation Service created separate performance goals for each type of habitat (e.g., croplands, watersheds, wetlands, and grazing lands) so it could measure progress toward its strategic goal of maintaining healthy and productive land. Additionally, to help interpret the results of performance measures, we have emphasized in our past work the importance of communicating adequate contextual information, such as factors inside or outside the agency's control that might affect performance.

VA central office officials said they plan to update the VJO Program's strategic plan, which contains the program's broad strategic goals, by the end of fiscal year 2016, but they do not intend to include performance measures. Leading practices demonstrate that developing ways to measure VJO Program efforts could help VJO specialists, VA central office officials, and Congress measure and monitor the extent to which program efforts are achieving their intended results, make needed improvements, and make funding decisions. Further, a fundamental element in an organization's efforts to manage for results is its ability to set performance goals with specific targets and time frames that reflect strategic goals and to measure progress toward them as part of its strategic-planning efforts. Our previous work has highlighted characteristics of successful performance measures that could be helpful. Without a robust performance assessment system—which includes both performance measures and evaluations—officials managing the VJO Program lack a full picture of its successes and of potentially underperforming areas needing improvement.

VA identified several key challenges related to the demand for services outpacing the VJO Program's resources, which could limit the program's capacity to serve all justice-involved veterans. These demand-resource imbalances were identified through VA's strategic planning process and in consultation with VJO specialists and VISN officials. In developing its list of five challenges, which it reported in 2012, VA assumed that demands on the VJO Program would continue to increase. Specifically, two of the five challenges VA identified relate to demand for services: (1) demand may increase due to greater use of VA's Veterans Re-entry Search Service system and result in the program not being able to properly serve all eligible veterans, and (2) demand from homeless veterans may increase if economic conditions worsen. The remaining three challenges VA identified relate to resources: (3) the program may not be able to link veterans to treatment due to shortages in VA clinical programs, (4) the program lacks direct control over the information technology on which it is highly dependent, and (5) the program may face funding cuts.

The range of stakeholders we interviewed—VISN officials, VJO specialists, justice-involved veterans, and local criminal justice system officials—affirmed many of the challenges identified in the VJO Program strategic plan. In general, many stakeholders said that demand for VJO specialist assistance is outpacing the program's ability to serve all potentially eligible veterans, and the gap may worsen over time.
For example, VJO specialists and VISN officials we spoke with in five of the nine areas we selected for interviews told us workload challenges have intensified in recent years. In addition to the high demand for the program cited by program stakeholders, the growth of veterans treatment courts is further increasing demand for program services, according to stakeholders and VA internal documents. In particular:

The number of veterans treatment courts is growing. While these courts can help improve veterans' mental health and sobriety, the increase in the number of these courts is a major reason for the VJO Program's workload challenges, according to VJO specialists, VISN officials, VA central office officials, and justice system partners we interviewed. According to these stakeholders, VJO specialists had already been working in many local jails and traditional courts across the country, and the expansion of veterans treatment courts added to their existing workload. Specifically, the number of veterans treatment courts nationwide grew from 65 in fiscal year 2010 to 360 in fiscal year 2015, according to VA data. Moreover, hundreds of additional veterans treatment courts are in the planning stages, according to an organization that advocates on behalf of veterans treatment courts. During fiscal years 2012 through 2015, the number of justice-involved veterans served by the VJO Program who participated in veterans treatment courts increased from about 1,900 to about 3,900.

Working with veterans in veterans treatment courts is more time-consuming than working with veterans in jails. According to VA central office officials, working with individual veterans participating in veterans treatment courts typically requires more of VJO specialists' time than working with veterans in jail. VJO specialists we interviewed in seven of our nine selected areas also said working with veterans treatment courts is more time-consuming than working in jails. For example, VJO specialists meet regularly with veterans and their treatment providers and apprise other members of the court team of a veteran's adherence to court-ordered treatment. A veteran in a veterans treatment court generally participates in the VJO Program for 12 to 24 months. In contrast, VJO specialists we interviewed in seven of the nine areas said they generally work with veterans in jails for much shorter durations.

According to VA central office officials, VA attempts to ease workload demands by hiring additional VJO specialists, as funds allow, but this has not fully addressed demand for program services. VA increased staffing for the VJO Program from 43 specialists in fiscal year 2010 to 261 in fiscal year 2015, according to VA data. VA central office officials said they have used an annual hiring process to allocate new VJO specialist positions. Through this process, they added 13 VJO specialists in 13 locations in fiscal year 2015. VA central office officials said that the process for hiring additional VJO specialists consists of collecting information from VA medical centers about the current clinical and administrative activities of VJO specialists, and about any imminent workload demands, including those attributable to a new veterans treatment court, to determine which VA medical centers receive additional specialist positions. In fiscal year 2015, 54 of the 167 VA medical centers requested 1 or more of the 13 new VJO specialist positions.
However, VA central office officials, VJO specialists, and VISN officials acknowledged that despite the additional positions, VJO specialists are at capacity and are not able to fully address the demand for VJO Program services. VA central office officials also told us that they try to address the challenge of limited existing workload capacity by advising VJO specialists and VA medical center officials to avoid overcommitting program resources. For example, VA central office officials told us they advised VJO specialists that they should consider their existing workload and commitments before deciding to work with new veterans treatment courts or visit additional jails. Some VJO specialists we spoke with chose to focus on serving veterans in jails, while others decided that serving those in veterans treatment courts was more effective. For example, VJO specialists and two criminal justice system officials in one area expressed concerns about the workload and resources associated with serving veterans treatment courts. In their view, a more cost-effective approach is to focus the program's efforts on providing case management to justice-involved veterans on probation. However, VA central office officials acknowledge that without VA involvement, these courts likely would not function or proliferate. VJO specialists in five areas with veterans treatment courts told us they decided to prioritize working with veterans in treatment courts over incarcerated veterans, given their workloads and because veterans participating in these courts are mandated to seek treatment. Specifically, veterans in these courts are generally held accountable for remaining in treatment for longer periods of time than those released from jails, which tends to produce more stable behavioral changes, according to these VJO specialists. A consequence, however, is that the specialists may see fewer veterans outside these courts or have less time to spend serving those in jails and other criminal justice settings, according to the VJO specialists we interviewed. In addition, almost every justice-involved veteran (13 of 14) we interviewed said that although they have benefited from the program, VJO specialists do not visit the jails frequently enough to fully meet veterans' needs. VA central office officials we interviewed also said they recognize that demand for the VJO Program could increase beyond existing capacity if its Veterans Re-entry Search Service becomes more widely used by jail administrators. VA developed the Veterans Re-entry Search Service in 2013 to meet a key strategic goal to identify justice-involved veterans more effectively. This online system improves the jails' identification process compared with the self-identification process currently used by most jail administrators. According to VA central office officials, jail administrators who are using the self-identification process are reporting a much smaller percentage of veterans. For example, an administrator in one local jail reported that 23 inmates self-identified as veterans, but the Veterans Re-entry Search Service revealed that there were 64 incarcerated veterans in the jail during the same time period. In another jail system, the administrator reported that 220 inmates self-identified as veterans, but the online search revealed there were 400 incarcerated veterans. VA central office officials we interviewed recognize the risk the Veterans Re-entry Search Service poses to existing capacity.
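The scale of the undercount implied by these two examples can be checked with simple arithmetic. The sketch below does so; the figures come from the report text above, while the structure and names are ours and purely illustrative.

```python
# Back-of-the-envelope check of the self-identification undercount,
# using only the two examples cited above. Names and structure are
# illustrative, not VA's.
examples = [
    {"label": "local jail example", "self_identified": 23, "vrss_count": 64},
    {"label": "jail system example", "self_identified": 220, "vrss_count": 400},
]

for ex in examples:
    share = ex["self_identified"] / ex["vrss_count"]
    missed = ex["vrss_count"] - ex["self_identified"]
    print(f'{ex["label"]}: self-identification captured {share:.0%} of the '
          f'veterans the search service found ({missed} veterans missed)')
```

In these two examples, self-identification captured only about 36 percent and 55 percent, respectively, of the veterans the search service found, which suggests why wider use of the service could substantially increase demand on VJO specialists.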
VA central office officials told us they plan to promote the new system to jail officials in small phases because they recognize that this system may overtax existing program capacity. VA central office officials told us that they decided to initially promote VA's new system to 180 of the largest jails, rather than roll it out to all jail systems (of which there are over 3,000 in the United States), so as to avoid overwhelming VJO specialists with an unmanageable influx of new veterans. Once VA completes its outreach to these larger jail systems, it plans to promote the service to other jail systems, and VA central office officials expect about 1,000 jails to eventually use the service. In addition to challenges related to capacity, VJO specialists and VISN officials in several of our selected areas told us that shortages in clinical programs—especially residential drug treatment programs—can be a challenge because they limit treatment options for veterans. They also reported challenges associated with shortages in public housing and public transportation, the latter of which can impede veterans' ability to readily attend court and medical appointments for treatment. These capacity challenges include:

Limited capacity in residential substance abuse programs. VJO specialists and VISN officials in several of our selected areas reported that the high demand for residential substance abuse treatment programs—which provide appropriate housing to facilitate sobriety—limits the extent to which these programs are a viable option for justice-involved veterans. When openings are unavailable, VJO specialists said they refer veterans to the next best available treatment option. However, officials from one of the veterans treatment courts we interviewed said the next best available treatment option may not always fully meet a veteran's treatment needs. Specifically, officials we interviewed at one veterans treatment court said that a veteran in the program has a heroin addiction and has tested positive for drug use. Thus, they would expect this veteran to be in a residential substance abuse treatment program, which provides greater monitoring. In this case, however, VA has recommended outpatient substance abuse treatment due to limited availability of residential slots.

Limited housing options for sex offenders. VJO specialists and VISN officials in several of our selected areas also reported that housing options available for individuals who are required to register as sex offenders are limited. VA central office officials said that they are aware of this issue, which cuts across many of VA's homeless programs.

Limited public transportation options. VJO specialists and VISN officials in several of our selected areas also reported that many justice-involved veterans—some of whom have lost their driver's licenses due to their offenses—rely on public transportation to take them to court and VA- or community-provided treatment. VJO specialists told us that public transportation can be slow, limited, or nonexistent. For example, VJO specialists in one area we visited said it can take a full day for a veteran to travel to and from treatment. VJO specialists in another rural location said they have tried to address this challenge by hiring veterans who have received VJO Program assistance to help drive other veterans to their medical appointments.
VA has not performed a comprehensive assessment of the risks posed by the challenges the VJO Program faces, which is inconsistent with federal standards for internal control; risk assessment is one component of effective program management. These standards call for agencies to identify and analyze all relevant risks that may prevent them from achieving their goals. Further, this assessment should include an estimation of a risk's significance, an examination of the likelihood of the risk's occurrence, and a decision as to what actions should be taken to manage the risk. Risk assessment is important because it also informs an entity's policies, planning, and priorities. Without understanding risks, an agency may not be able to set appropriate policies, plans, and priorities because it would not be able to account for events that can adversely affect its ability to achieve its objectives. In 2014, VHA leadership requested that each homelessness prevention program, including the VJO Program, identify risks that may affect veterans' access to VA supports and services and identify possible actions that could be taken to mitigate those risks. To identify risks, VJO Program central office officials examined their broad strategic goals. In an October 2014 assessment report, VJO Program officials identified six risks. Identified risks included, for example, the lack of an automated interface between the Veterans Re-entry Search Service and correctional facilities' information technology systems to identify incarcerated veterans; the reassignment of VJO specialists to other programs by VA medical centers; and VA medical centers' prioritization of clinical services for veterans already receiving treatment over new justice-involved veterans, thus limiting timely access to appropriate services. VA central office officials, as part of this 2014 effort, also estimated the significance and likelihood of the risks, consistent with federal standards for internal control. For example, regarding the significance if VA medical centers were to de-prioritize outreach to veterans, VA found that the program's relationship with its criminal justice system partners could be weakened and justice-involved veterans could suffer. In addition, the assessment indicated actions VA would take to address each of the six identified risks, such as collecting additional information to monitor identified risks. VA did not, however, identify or analyze the risks posed by each of the challenges identified in its strategic plan. Notably absent were the workload challenges facing the VJO Program. VA central office officials we interviewed acknowledged that the challenges identified in the strategic plan still exist and could affect the operations of the program. VA expects the VJO Program's workloads to increase and its funding and workforce to remain level, raising questions about how VA can best deploy its resources and align its policies to meet increasing demand. Not identifying all relevant risks limits VA's ability to effectively compare and prioritize the risks faced by the program. In addition, VA's lack of performance goals, as previously discussed, negatively affects its ability to effectively identify and assess risk. As demonstrated in our previous work, federal standards for internal control state that a precondition to risk assessment is the establishment of clear performance goals (see fig. 7).
Translating its broad strategic goals into measurable performance goals would allow VA to target the risks that may impede achievement of the program's objectives. VA plans to conduct another risk assessment as part of its upcoming strategic planning efforts, using the same methodology it used in its 2014 risk assessment. As VA completes future assessments, it is important that the agency incorporate all of the elements of risk assessment detailed in federal standards for internal control, including identifying all relevant risks posed to achieving performance goals. Lacking comprehensive risk assessments may limit VA's ability to target the areas posing the greatest risks and, in turn, develop appropriate mitigation strategies. Perhaps some of the most challenging veterans to serve in this country are those who have committed a crime or other offense and who face significant long-term consequences for their actions. If VA intervention is timely and targeted, then these justice-involved veterans and their families have a better chance of avoiding a detrimental future. A relatively new VA program, the VJO Program is designed to help justice-involved veterans—who often have high rates of mental illness, substance abuse, and other issues that may stem from their military service—avoid re-incarceration and homelessness. In designing the program, VA intended for support to respond to local conditions. While this flexibility can help the program cater to veterans' needs in local communities, VA lacks the complete national perspective necessary to strategically manage the program. We are encouraged that VA has taken some initial steps toward assessing program performance and risks. However, as VA moves forward, it is especially important that the agency fully incorporate key practices for assessing performance and risks to strategically guide program decisions. Incorporating these key practices will allow VA to compare the program's actual performance against expected results and comprehensively assess risks to develop appropriate mitigation strategies. More specifically, without developing a national perspective that comprehensively considers program performance and the greatest risks to the program's goals, VA cannot fully target its resources toward efforts that achieve the intended results. As a result, justice-involved veterans may not receive the proper supports and services needed to re-establish healthy lives and avoid re-incarceration. If this lack of a national perspective persists, it will be difficult for VA to know whether its resources are being used to serve justice-involved veterans in the best manner possible. To improve management of the Veterans Justice Outreach Program, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions:

1. Establish performance goals with specific targets, time frames, and related performance measures that are linked to strategic goals to provide a basis for comparing actual program performance against expected results; and

2. Conduct a comprehensive assessment of the risks that challenges pose to achieving the program's strategic and performance goals, and develop, as necessary, applicable mitigation strategies.

We provided a draft of this report to the Department of Veterans Affairs (VA) for review and comment. In written comments, which are reproduced in appendix III, VA agreed with our recommendations and noted steps it plans to take to address them.
Specifically, VA agreed to develop a plan linking VJO Program strategic goals with performance goals that sets specific targets at the VA medical center level. With regard to risk assessment, VA agreed to conduct a comprehensive assessment of risks using federal standards for internal control and begin collecting information from VA medical centers about workload challenges to fully inform any efforts to redefine how it delivers services. VA did not provide technical comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and the Under Secretary for Health. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of this review were to examine: (1) how the Veterans Justice Outreach (VJO) Program delivers services, and the number and characteristics of veterans served by the program; (2) the extent to which the Department of Veterans Affairs (VA) has used performance assessment to help manage the Veterans Justice Outreach Program; and (3) what key challenges, if any, VA has identified and to what extent the agency has developed mitigation strategies, as necessary. To address all three objectives, we conducted semi-structured interviews and reviewed documentation from nine areas served by VA medical centers. These areas are: (1) Baltimore, Maryland; (2) Bedford, Massachusetts; (3) Chicago, Illinois; (4) Fargo, North Dakota; (5) Houston, Texas; (6) Orlando, Florida; (7) Salt Lake City, Utah; (8) San Diego, California; and (9) Seattle, Washington. For all nine areas, we conducted interviews with VJO specialists and Veterans Integrated Service Network (VISN) officials responsible for overseeing the VA medical centers included in our review. In addition, for three of these areas—Baltimore, Maryland; Orlando, Florida; and San Diego, California—we interviewed VA medical center officials and local criminal justice system stakeholders, including court coordinators, judges, and jail administrators, and held discussion groups with a small nonprobability sample of veterans participating in the VJO Program. The interviews we conducted with officials and stakeholders in the nine areas are nongeneralizable but provided insights on the challenges facing the program and its operations. We also interviewed relevant VA central office officials. We selected the nine areas in our review based on the size of the population of veterans in the area, geographic diversity, VA officials' recommendations, and proximity to veterans treatment courts. (See table 2.) Specifically: To identify areas with veterans treatment courts, we obtained a list of veterans treatment courts from Justice for Vets, an organization that advocates on behalf of veterans treatment courts. We considered a VA medical center to be located near a veterans treatment court if the court is located in the same state and is within 40 miles of the center. Our selection process included some VA medical centers located near a court and some that were not. We obtained information about the population of veterans from VA's Veteran Population Projection Model 2014, an actuarial model developed by VA.
This model projects the veteran population from fiscal years 2014 through 2043 by using data through fiscal year 2013. We used this model to select VA medical centers in counties with varying numbers of veterans, ranging from approximately 11,000 (in Cass County, North Dakota, which includes Fargo) to 232,000 (in San Diego, California). In addition, we used the following methodologies to address specific objectives:

To better understand how the VJO Program delivers services to veterans, we reviewed relevant federal laws and regulations; VA policies, procedures, guidance, and program fact sheets; the VJO Program's fiscal year 2012-2016 strategic plan; documents from VJO specialists; and other types of documentation that describe the types of program activities and services used to serve justice-involved veterans. We did not independently verify the actions described in such documents.

To describe the number and characteristics of the veterans served by the VJO Program, we obtained data from VA. These data included summary information on the number of veterans in the program from fiscal year 2012 through fiscal year 2015 (the most recent data available) as well as sex, age, race, and types of criminal offense. We also obtained fiscal year 2015 data from VA about veterans' education level, marital status, the number of years since the veterans separated from the military, and mental health status. We determined that VA's compilation of data about veterans served by the VJO Program was sufficiently reliable to include in our report by reviewing related documentation and interviewing knowledgeable agency officials. Specifically, we obtained and assessed official documentation such as users' guides, frequently asked questions, and disclaimers, and we discussed our planned use of the data and any limitations with VA officials.

We assessed the degree to which VA uses program performance assessment—setting program goals, evaluating programs, and using performance measures—to manage the VJO Program by reviewing VA reports and documents, including the program's fiscal year 2012-2016 strategic plan, which is the most recent plan; program evaluation plans and preliminary results; and reports on veterans' receipt of program services. We also interviewed VA central office officials and other knowledgeable individuals about VA's current efforts to evaluate the VJO Program and about the program's goals and efforts to measure progress toward those goals. We compared VA's use of performance assessment against best practices for assessing program performance and federal standards for internal control.

We obtained information on the challenges and VA's respective mitigation strategies by reviewing VA documents, including a 2014 report assessing program risks. We also interviewed VA central office officials about the challenges and associated risks, how they are addressing the challenges, and what strategies, if any, they have developed to mitigate risks to achieving the program's goals. We compared VA's approach for assessing risks with criteria established in the federal standards for internal control. Specifically, we chose to use risk assessment—one of five key components in standards for internal control—because analyzing risk provides the basis for developing appropriate mitigation strategies. We also obtained data from VA on the number of veterans treatment courts and other veteran-focused courts the VJO Program serves, and on the number of full-time VJO specialists from fiscal year 2012 through fiscal year 2015.
We compared this list of veterans treatment courts with other authoritative sources and discussed any limitations with VA. We found these data to be sufficiently reliable for our purposes. Table 3 provides data from VA's Homeless Operations Management and Evaluation System on selected characteristics of veterans served by the VJO Program during fiscal year 2015, including demographics, military history, criminal justice system involvement, living situation, employment and income, and clinical impression. Table 4 provides data on the number of justice-involved veterans served by the VJO Program during fiscal year 2015, by state and the District of Columbia. The highest concentrations of justice-involved veterans were in Florida (1,433), California (1,202), Ohio (1,062), Texas (991), and New York (886). In addition to the contact mentioned above, the following staff members made significant contributions to this report: Brett Fallavollita (Assistant Director), James Whitcomb (Analyst in Charge), Hedieh Fusfield, and John Lack. In addition, key support was provided by James Bennett, Joy Booth, Valarie Caracelli, David Chrisinger, Alexander Galuten, Benjamin Licht, Amy Moran Lowe, Paul Schearf, Stephanie Shipman, Almeta Spencer, and Andrew Stavisky.
Most veterans transition to civilian life trouble-free. For those who struggle with their transition to the point that they are arrested and jailed, VA created the VJO Program, which connects veterans with supports and services to help them avoid re-incarceration. The program relies on VJO specialists to link veterans to treatment. GAO was asked to review the management of the VJO Program. This report examines (1) how the program delivers services and the number and characteristics of veterans in the program, (2) the extent to which VA uses performance assessment of the program, and (3) the key challenges VA has identified and the extent to which VA has developed mitigation strategies. GAO obtained VA data on program participants for fiscal years 2012 through 2015; reviewed documents; and interviewed VA officials and staff from nine areas, each served by a VA medical center, selected for their geographic diversity and differences in the structures of local criminal justice systems; in three of the areas, GAO also interviewed criminal justice system stakeholders and veterans. While information from these interviews cannot be generalized, it provides insights on program challenges and operations. The Veterans Justice Outreach (VJO) Program—created by the Department of Veterans Affairs (VA)—operates through VA medical centers to provide services to veterans involved in local criminal justice systems, and in fiscal year 2015 served about 46,500 veterans, mostly men and many diagnosed with mental health or substance abuse problems. Officials from VA medical centers manage more than 260 VJO Program specialists who identify veterans in jails and local courts, assess their health and social needs, and link them to supports and services. VJO specialists also monitor veterans' services and treatment in courts dedicated to veteran offenders. According to VA data, the number of veterans served by the program increased 72 percent from fiscal year 2012 through fiscal year 2015. In addition, many veterans involved in the program were Post-9/11 veterans; about two-thirds were diagnosed with one or more mental health problems. VA has taken some steps to incorporate a performance assessment system into the VJO Program, one component of effective program management (see figure). Specifically, VA developed strategic goals and plans to conduct evaluations. However, VA has not established performance goals with related targets, time frames, and performance measures for any of the program's five broad strategic goals. VA officials told GAO they have not taken this step, in part, because VA medical centers have flexibility in determining the activities of VJO specialists. GAO's past work has highlighted strategies that agencies can use in this situation, such as developing measures based on common activities. Best practices call for agencies to establish performance goals and associated performance measures. Until VA incorporates performance goals and measures, it will lack a systematic way to obtain ongoing information to identify possible underperforming areas for improvement. VA identified several key challenges—most of which were related to the demand for services outpacing the program's resources—but has not fully developed appropriate mitigation strategies. One key challenge, for example, is addressing increased program demand as jail administrators more widely use VA's online system that better identifies incarcerated veterans.
In addition, a major reason for the demand-resource imbalance is the heavier workload of VJO specialists serving veterans in an expanding number of courts dedicated to veterans, according to VA officials and stakeholders GAO interviewed. However, GAO found that VA did not comprehensively identify and assess the risks posed by each of the key challenges it identified, contrary to federal internal control standards. Absent a comprehensive risk assessment, VA is not well positioned to develop appropriate strategies to mitigate the greatest risks, which may limit its ability to help justice-involved veterans receive assistance and avoid re-incarceration. To improve program management, VA should establish performance goals and measures and conduct a comprehensive risk assessment. In commenting on a draft of this report, VA agreed with the recommendations and discussed actions it plans to take to implement them.
NRC is an independent agency established by the Energy Reorganization Act of 1974 to regulate civilian use of nuclear materials. NRC is headed by a five-member commission. The President designates one commission member to serve as Chairman and official spokesperson. The commission as a whole formulates policies and regulations governing nuclear reactor and materials safety, issues orders to licensees, and adjudicates legal matters brought before it. Security for commercial nuclear power plants is primarily the responsibility of NRC's Office of Nuclear Security and Incident Response. This office develops overall agency policy and provides management direction for evaluating and assessing technical issues involving security at nuclear facilities, and it is NRC's safeguards and security interface with the Department of Homeland Security, the intelligence and law enforcement communities, DOE, and other agencies. The office also develops and directs the NRC program for response to incidents, and it is NRC's incident response interface with the Federal Emergency Management Agency and other federal agencies. NRC implements its programs through four regional offices. Figure 1 shows the location of commercial nuclear power plants operating in the United States. (See app. II for a list of the commercial nuclear power plants, their locations, and the NRC regions that are responsible for them.) Commercial nuclear power plants are also subject to federal and state laws that control certain matters related to security functions, such as the possession and use of automatic weapons by security guards and the use of deadly force. NRC begins regulating security at a commercial nuclear power plant when the plant is constructed. Before granting an operating license, NRC must approve a security plan for the plant. Since 1977, NRC has required the plants to have a security plan that is designed to protect against a design basis threat for radiological sabotage. Details of the design basis threat are considered "safeguards information" and are restricted from public dissemination. The design basis threat characterizes the elements of a postulated attack, including the number of attackers, their training, and the weapons and tactics they are capable of using. The design basis threat, revised twice since it was first issued in 1977, requires the plants to protect against "a determined violent external assault, attack by stealth, or deceptive actions" or "an internal threat of an insider, including an employee in any position." Under the 1977 design basis threat, plants had to add barriers to vital equipment and work zones and develop identification and search procedures for anyone entering restricted areas; upgrade alarm systems and internal communication networks and control keys, locks, and combinations; and maintain a minimum number of guards, armed with semiautomatic weapons, who had to be on duty at all times (unless NRC granted an exemption that allowed fewer guards). In 1993, in response to the first terrorist attack on the World Trade Center in New York City and to a vehicle intrusion at the Three Mile Island nuclear power plant in Pennsylvania, NRC revised the design basis threat for radiological sabotage to include the possible use of a vehicle bomb. This action required the installation of vehicle barriers at the power plants.
On April 29, 2003, NRC issued a revised design basis threat that the commission believes is the "largest reasonable threat against which a regulated private guard force should be expected to defend under existing law." NRC has given the power plants 18 months to comply with the new design basis threat. NRC's inspection program is an important element in its oversight effort to ensure that commercial nuclear power plants comply with security requirements. Security inspectors from the agency's four regional offices conduct annual inspections at each plant. These inspections are designed to check that the power plants' security programs meet NRC requirements in the areas of access authorization, access control, and response to contingency events. The inspections also involve reviewing changes to the plant's security plan and random samples of the plant's own assessment of its security. NRC suspended its inspection program in September 2001 to focus its resources on the implementation of security enhancements. NRC is currently revising the security inspection program. NRC also conducted force-on-force exercises under the security inspection program. These force-on-force exercises, which were referred to as Operational Safeguards Response Evaluation (OSRE) exercises, were designed to test the adequacy of a plant's capability to respond to a simulated attack. NRC began conducting these exercises in 1991 but suspended them after September 11, 2001. NRC intends to restructure the program. It has recently begun a series of pilot force-on-force exercises that are designed to provide a more rigorous test of security at the plants and to provide information for designing a new force-on-force exercise program. No date has been set for completing the pilot program or for initiating a new, formal force-on-force program. To respond to the heightened risk of terrorist attack, NRC has had extensive interactions with the Department of Homeland Security and the Homeland Security Council on security at commercial nuclear power plants. NRC also has issued advisories and orders that were designed to increase the size and improve the proficiency of plant security forces, restrict access to the plants, and increase and improve plant defensive barriers. On October 6, 2001, NRC issued a major advisory, stating that the licensees should consider taking immediate action to increase the number of security guards and to be cautious of temporary employees. NRC conducted a three-phase security inspection to check whether the licensees had complied with these advisories. Each licensee's resident inspector conducted phase one, which was a quick overview of the licensee's security program using a headquarters-prepared survey. During phase two, NRC's regional security inspectors conducted a more thorough survey of each plant's security. During phase three, which concluded in January 2002, NRC's regional security inspectors reviewed each licensee's security program to determine if the licensee had complied with the additional measures suggested in the October 6, 2001, advisory. NRC used the results from the three-phase security inspection in developing its February 25, 2002, order requiring licensees to implement additional security mechanisms. Many of the order's requirements were actions suggested in previous advisories. The licensees had until August 31, 2002, to implement these security requirements.
In December 2002, NRC completed a checklist to provide assurance that the licensees had complied with the order. In addition, NRC developed a security inspection procedure to validate and verify licensee compliance with all aspects of the order. NRC estimates that this procedure will be completed by December 2003. On August 14, 2003, NRC stated that 75 percent of the power plants had been inspected for compliance with the order. NRC also took action on an item that had been a security concern for a number of years—the use of temporary clearances for temporary workers. Commercial nuclear power plants use hundreds of temporary employees for maintenance—most frequently during the period when the plant is shut down for refueling. In the past, NRC found instances in which personnel who failed to report criminal records had temporary clearances that allowed them unescorted access to vital areas. In its October 6, 2001, advisory, NRC suggested that licensees limit temporary clearances for temporary workers. On February 25, 2002, NRC issued an order that limited the use and duration of temporary clearances, and, on January 7, 2003, NRC issued an order to eliminate the use of these clearances. NRC now requires a criminal history review and a background investigation to be completed before allowing temporary workers to have unescorted access to the power plants. (The vital area, which lies within the protected area, contains the plant equipment, systems, devices, or material whose failure, destruction, or release could endanger public health and safety by exposure to radiation. This area is protected by guard stations, reinforced gates, surveillance cameras, and locked doors.) On April 29, 2003, in addition to issuing a new design basis threat, NRC issued two orders that are designed to ensure that excessive work hours do not challenge the ability of security forces to perform their duties and to enhance the training and qualification program for security forces. NRC's security inspection program may not be fully effective because of weaknesses in three areas. First, during the annual inspections conducted from 1999 until September 2001, NRC's regional security specialists used a process to categorize the seriousness of security problems that, in some cases, minimized their significance. As a result, NRC did not track these problems to ensure that they had been permanently corrected and may have overstated the level of security at power plants. Second, NRC does not routinely collect and disseminate information from security inspections to NRC headquarters, other NRC regions, or other power plants. Dissemination of this information may help other plants to correct similar problems or prevent them from occurring. Third, NRC has made limited use of some available administrative and technological improvements that would make force-on-force exercises more realistic and provide a more useful learning experience. NRC ensures that commercial nuclear power plants maintain security by monitoring the performance and procedures of the licensees that operate them. NRC's inspection program is the agency's only means to verify that these plants comply with their own NRC-approved security plans and with other NRC security requirements. NRC suspended its annual security inspection program after September 11, 2001, and currently is revising the program. NRC does not expect a new security inspection program to be implemented until sometime in 2004.
Although NRC has temporarily suspended its annual security inspections, it continues to check a plant's self-assessments and conduct an inspection if the licensee identifies a serious problem. Under the previous security inspection program, initiated in 1999 and suspended in 2001, NRC used a "risk informed" performance-based system (the Reactor Oversight Process) that was intended to focus both NRC's and the licensees' resources on important safety matters. In an attempt to focus NRC attention on plants with the most serious problems, and to reduce regulatory burdens on the nuclear industry, the Reactor Oversight Process relied heavily on performance assessment data generated by the licensees and submitted quarterly to NRC. In the security area, these licensee self-assessments provided NRC with data on (1) the operation of security equipment (such as intrusion detectors and closed-circuit television cameras), (2) the effectiveness of the personnel screening program (including criminal history and background checks), and (3) the effectiveness of the employee fitness-for-duty program (including tests for substance abuse and behavioral observations). Under guidelines for these self-assessments, licensees are required to report only the most serious problems. NRC inspectors followed a multistep process to monitor security, including verifying the licensees' self-assessments and conducting their own annual inspection. NRC inspectors did not verify all aspects of the licensees' self-assessments. Instead, the inspectors made random checks of the quarterly self-assessments during their annual security inspection of the plant. During the inspections, the inspectors reviewed the following aspects of security at each plant:

Access authorization and fitness for duty (performed annually). Inspectors interviewed supervisors and their staffs about procedures for recognizing drug use, possession, and sale; indications of alcohol use and aberrant behavior; and records of testing for suspicious behavior. These procedures were designed to ensure that the licensee conducts adequate personnel screening and enforces fitness-for-duty requirements—functions considered critical to protect against an insider threat of radiological sabotage.

Access control (performed annually). Inspectors observed guards at entry points during peak hours, checked screening equipment, read event reports and logs, checked access procedures for the plant's vital area, and surveyed data in the security computers. For example, inspectors observed searches of personnel, packages, and vehicles for contraband (i.e., firearms, explosives, or drugs) before entry into the protected area and ensured that the guards granted only authorized persons unescorted access to the protected and the vital areas of the plant.

Response to contingency events (performed triennially). Inspectors assessed the licensee's physical security by testing the intrusion detection system.

Random checks of changes to security plans (performed biennially). Under NRC regulations, licensees can change their security plans without informing NRC if they believe that the change does not decrease the effectiveness of the plan. Inspectors reviewed security plan changes and could physically examine a change if an issue arose.

If NRC inspectors detected a security problem in these areas, they determined the problem's safety significance and whether it violated the plant's security plan or other NRC requirements.
If a violation occurred and the inspectors determined that the problem was "more than minor," they used a "significance determination process" to relate the violation to overall plant security. According to NRC officials, the significance determination process is also being revised. Under the process previously used, the inspectors assigned a violation one of the following four ratings: very low significance, low to moderate significance, substantial significance, and high significance. For violations more serious than very low significance, the licensee was required to prepare a written response, stating the actions it would take to correct the problem. However, violations judged to be of very low significance—usually categorized as non-cited violations—were routinely recorded; entered into the plant's corrective action plan; and, from NRC's perspective, closed. Violations were judged to be of very low significance and categorized as non-cited violations if the problem had not been identified more than twice in the past year or if the problem had no direct, immediate, adverse consequences at the time it was identified. In addition, for non-cited violations, NRC did not require a written response from the licensee and did not routinely follow up to ensure that a permanent remedy had been implemented unless the non-cited violation was randomly selected for review of the licensee's corrective action program. We found that NRC frequently issued non-cited violations. NRC issued 72 non-cited security violations from 2000 to 2001 compared with no cited security violations during the same period. In addition, NRC issued non-cited violations for security problems that, while within NRC's guidance for non-cited violations, appear to be serious and seem to justify the formality and follow-up of a cited violation. For example:

At one plant, an NRC inspector found a security guard sleeping on duty for more than half an hour. This incident was treated as a non-cited violation because no actual attack had occurred during that time and because neither that guard nor any other guard at the plant had been found sleeping more than twice during the past year.

At another plant, a security officer falsified logs to show that he had checked vital area doors and barriers when he was actually in another part of the plant. The officer was the only protection for this area because of a "security upgrade project."

At another plant, NRC inspectors categorized two security problems as non-cited violations because they had not occurred more than twice in the past year. In one incident, an inspector observed guards who failed to physically search several individuals for metal objects after a walk-through detector and a hand-held scanner detected metal objects in their clothing. The unchecked individuals were then allowed unescorted access throughout the plant's protected area. Also, security was compromised in a vital area—where equipment that could be required to protect public health and safety is located—when an inspector found that tamper alarms on an access door had been disabled. In this case, the only compensatory measure implemented was to have a guard check the location once during each 12-hour shift.

In addition to NRC's annual inspections, NRC will conduct an inspection if a plant's quarterly self-assessment identifies a serious security problem. Between 2000 and 2002, only 4 of the 104 plants reported security problems that required NRC to conduct a follow-up inspection.
In 2000, each of these plants reported that equipment for controlling access to the plant's protected area was often broken, requiring extra guards as compensation. None of the 104 plants' self-assessments identified any security problems in 2001, 2002, or the first 6 months of 2003. Once every 3 months, NRC develops performance summaries for each of the nuclear power plants it regulates. In the security area, NRC uses each plant's self-assessment performance indicators and its own annual inspections as the basis for each plant's quarterly rating. The performance rating can range from "meeting security objectives" to "unacceptable." The ratings are displayed on NRC's Web site, which is the public's main link to NRC's assessment of the security at each plant. However, because of NRC's extensive use of non-cited violations, the performance rating may not always accurately represent the security level of the plant. For example, the plant where the sleeping guard was found was rated as meeting security objectives for that period. NRC also rated security as meeting objectives at the plant where physical searches were not conducted for metal detected by scanners. NRC does not have a routine, centralized process for collecting, analyzing, and disseminating the results of security inspections to identify problems that may be common to other plants or to identify lessons learned in resolving a security problem that may be helpful to plants in other regions. NRC headquarters receives inspection reports only when a licensee challenges the findings from security inspections. Following the inspection, the regional security specialist prepares a report that is then sent to the licensee for comment. If the licensee does not challenge the report's findings, the report is filed at the region. If the licensee challenges the findings, an NRC headquarters security review panel meets to resolve the issue. At this point, headquarters security specialists may informally retain copies of the case, but, officially, headquarters returns the files to the region, which replies to the licensee. According to NRC headquarters officials, they do not routinely obtain copies of all security inspection reports because headquarters files and computer databases are insufficient to hold all inspection reports. In addition, some of the reports contain safeguards information and can only be transferred by mail, courier, or secure fax. Instead, headquarters only has a list of reports in its computer database—not the narrative details that include safeguards information. According to headquarters officials, regional NRC security specialists may maintain their own information about security problems and their resolution, but they have not done this systematically, nor have they routinely shared their findings with headquarters or the other regions. From 1991 through 2001, NRC conducted force-on-force exercises, called OSREs, at the nation's commercial nuclear power plants. Although these exercises have provided learning experiences for the plants and may have helped improve plant security, the exercises did not fully demonstrate the plants' security preparedness. The exercises were conducted infrequently, against plant security that was enhanced by additional guards and/or security barriers, by simulated terrorists who were not trained to operate like terrorists, and with unrealistic weapons. In addition, the exercises did not test the maximum limits of the design basis threat, and inspectors often filed OSRE reports late.
As a result, the exercises did not provide complete and accurate information on a power plant's ability to defend against the maximum limits of the design basis threat, and permanent correction of problems may have been delayed. Furthermore, NRC has made only limited use of some available administrative and technological improvements that would make force-on-force exercises more realistic and provide a more useful learning experience. NRC was not required by law, regulation, or order to conduct OSRE exercises; however, NRC and the licensees believed that these exercises were an appropriate mechanism to test the adequacy of the plants' security plans, and all licensees agreed to participate in these exercises. Because there was no requirement, NRC started the OSRE program without guidance on how frequently the exercises should be conducted at each plant. NRC conducted OSRE exercises at each commercial nuclear power plant about once every 8 years. Sixty-eight power plant sites have conducted one OSRE exercise, and 12 sites have conducted two exercises. Like NRC, DOE conducts force-on-force exercises at its nuclear facilities. DOE's regulations state that force-on-force exercises should be conducted at every facility once a year. According to DOE officials, annual exercises are important because DOE wants up-to-date information on security preparedness at each nuclear facility, and more frequent exercises require the facilities to maintain the quality of the security program because another drill is always only a few months away. According to NRC officials, they are planning to initiate a new force-on-force exercise program that will be based on ongoing pilot force-on-force exercises. They plan to conduct an exercise for each licensee every 3 years, which will require additional regional security inspectors. According to NRC officials, they provided the licensee with up to 12 months' advance notice of OSRE exercises so that it could assemble a second team of security guards to protect the plant while the exercise was being conducted. However, the advance notification also allowed licensees to enhance security prior to the OSRE exercises, and they were not required to notify NRC of any enhancements to their security plan. As a result, according to NRC officials, during the exercises, many plants increased the number of guards that would respond to an attack; added security barriers, such as additional fencing; and/or added defensive positions that they did not previously have. According to our review of all 80 OSRE reports, at least 45—or 56 percent—of the exercises were conducted against plant defenders who had received additional training for the exercise or against enhanced plant security features, such as additional guards or defensive positions or barriers. Figure 2 shows the number of OSRE reports that stated that the exercises were conducted against (1) guard forces that were larger than those provided for in the security plan; (2) increased defensive positions or barriers; (3) guards that had received additional training; and (4) guard forces that were larger than those provided for in the security plan, guards that had received additional training, or plants that had enhanced defensive positions or barriers. Although we found 11 instances in which plants had increased the number of security guards for the OSRE exercises, an NRC official told us that the number was actually higher but was not reported in the OSRE reports.
According to this official, 52 of the first 55 OSREs conducted used more guards than provided for in the plants' security plans. For these plants, the number of guards used exceeded the number called for in the security plan by an average of 80 percent. According to this official, using additional guards impaired the realism of the exercise because in the event of an actual attack, only the number of guards specified in the security plan would protect the plant. Plants that used increased numbers of guards, increased training, or increased defensive positions or barriers fared better in the OSREs than those that used the plant defenses specified in the security plan. According to the OSRE reports, of the 45 plants that increased plant defenses beyond the level specified in the security plan, 10 (or 22 percent) failed to defeat the attackers in one or more of the exercises conducted during the OSRE. However, of the 35 plants that used only the security levels specified in the security plan, 19 (or 54 percent) failed to defeat the attackers in one or more exercises conducted during the OSRE. The increased training and preparation for the OSRE exercises provided an opportunity for the licensee to examine its security program and upgrade the program in areas found lacking. However, according to an NRC official, the licensee could decrease security to previous levels after the exercise. Consequently, the exercise only provided an evaluation of the "ramped-up" security and provided little information on the plant's normal day-to-day security. According to this official, NRC could not hold a licensee accountable for ramping down after the OSRE exercise because the enhanced training and additional barriers were not part of the licensee's security plan, and NRC can only hold the licensee accountable for its security plan. NRC has not required that security enhancements implemented to prepare for OSRE exercises be included as part of the plants' security plans. However, as of November 2000, NRC no longer allowed the licensee to increase the number of guards or add defensive positions or security barriers for OSRE exercises. Between November 2000 and the suspension of the program in September 2001, only eight OSREs were conducted. DOE—which also provides its facilities with advance notice of a scheduled force-on-force exercise (up to 1 year) and allows the facility to upgrade its security for the exercise—requires that any enhancements to security that are implemented for the exercise become integrated into the facility's security plan. DOE inspectors conduct follow-up visits to verify that the enhancements have been maintained. Licensees used off-duty guards, guards from other licensees, and management personnel as the simulated adversary force for OSRE exercises, but these forces may not have accurately simulated the dangers of an attack. The guards on the adversary force had training only in defending the plant, not in terrorist and offensive tactics or in the use of weapons that a terrorist might have. Furthermore, plant managers participating in the drill had little or no training or experience, even in defensive tactics. Finally, some members of the adversary force could have a vested interest in having the licensee's guard force successfully defeat them in attempting simulated radiological sabotage, thereby demonstrating an adequate security program. In contrast, DOE uses a trained, simulated composite adversary force in all of its force-on-force exercises.
This force includes guards from all departmental facilities. Team members are trained in offensive tactics and, according to DOE officials, have an “adversary” mind-set, which allows them to think and act like terrorists. According to NRC officials, as part of the pilot program, they are assessing the characteristics, training, and selection of the adversary force. They said that they also have reviewed DOE’s composite adversary team methods, attended DOE’s adversary training school, and are assessing the DOE program’s relevance to NRC activities.

Adversary and plant defensive forces generally used rubber weapons during OSRE exercises. Although under some circumstances, such as very confined spaces, rubber weapons would be the most practical, in general, rubber weapons do not simulate actual gunfire or provide real-time experience. Licensee employees (controller judges) had to determine whether a guard or adversary member’s weapon hit its intended target. This led to unrealistic exercises. For example, in one OSRE exercise, the controller judges reported that they could not determine when weapons were “fired” or if a person was hit. DOE usually uses Multiple Integrated Laser Engagement System (MILES) equipment to simulate weapon fire and provide real-time experiences. MILES equipment consists of weapons-mounted laser transmitters and laser sensors worn by the guard forces and adversary team members. When a laser gun is fired and hits a target, an alarm registers the hit, thereby allowing the participants to simulate weapon fire and participate in real-time exchanges. A few NRC OSRE exercises used MILES equipment. According to one OSRE report, the use of laser guns provided realistic scenarios and simulated the stress of an actual engagement. Consequently, the exercise showed results that “significantly helped in evaluating the effectiveness of both the defensive strategy and the officers executing the strategies.” NRC officials said that they are conducting a $1.4 million assessment of the use of MILES equipment.

NRC never tested several aspects of the design basis threat in the OSRE exercises. As a result, NRC could not determine the plants’ capability to defend against the maximum credible terrorist attack. According to the NRC official who was in charge of the OSRE program, NRC did not use and test certain adversary capabilities because the exercises would have been too rigorous, would have resulted in too many exercises in which the adversaries achieved their objectives, and thus might have resulted in the elimination of the OSRE program. The second round of OSRE exercises, begun in 2000, was originally planned to include all of the adversary capabilities. However, from the beginning of the second round of OSREs to the suspension of the program in September 2001, none of the OSREs included all adversary capabilities. DOE tests the full adversary capabilities of the design basis threat and often goes beyond those capabilities. DOE officials believe it is important to test the licensee’s security against all of the adversary capabilities so that DOE can determine how secure the facility is and what improvements are needed.

NRC had a program goal of issuing OSRE reports 30 to 45 days after the end of the exercise, but 46 of 76 reports (60 percent) were not issued within the required time. Delays in releasing a report to the licensee may have affected the timeliness of permanent corrective actions and diminished the effectiveness of feedback on the exercise.
On average, NRC issued OSRE reports to the licensees 98 days after the end of the exercises. The OSRE reports addressed any problems that needed to be corrected and specified how long the licensee had to correct the problem. NRC communicated the results of the exercise to the licensee at a closeout meeting. If a concern was severe and made the licensee vulnerable to security breaches, the licensee was required to provide temporary protection to address that concern until it implemented a permanent correction. However, the OSRE reports specified, on average, 51 days to permanently correct a concern after the report was issued. As a result, nearly 5 months elapsed between when the exercise was completed and when the report was issued and a permanent correction was required.

Commercial nuclear power plants face challenges in defending against intruders because federal and state laws limit security guards’ ability to protect these plants. Federal law generally prohibits private ownership of automatic weapons, and there is no exemption in the law for security guards at commercial nuclear power plants. As a result, no nuclear power plants use automatic weapons in their defense. However, terrorists attacking a nuclear power plant could be armed with automatic weapons or other advanced weapons. NRC officials believe that a terrorist attacking a nuclear power plant could obtain and use any weapon that can be purchased on the black market, while guards generally have to rely on semiautomatic pistols, rifles, or shotguns. As a result, guards at nuclear power plants could be at a great disadvantage in terms of firepower if attacked. According to NRC officials, the use of fully automatic weapons would provide an important option to plants as they make security decisions about a number of factors, such as the number of plant guards, the positioning of guards at the facilities, and the quality and capabilities of surveillance equipment. According to these officials, plants will have more options in developing the appropriate combination of security elements if guards have the authority to carry automatic weapons. NRC recognizes, however, that some plant sites face special conditions under which fully automatic weapons might not be beneficial or practicable.

Commercial nuclear power plants also face security challenges because of the absence of nationwide legal authority and clear guidance on when and how guards can use deadly force in defending these plants. According to NRC’s regulations, a guard should use deadly force in protecting nuclear power reactors against sabotage when the guard has a reasonable belief that such force is necessary for self-defense or the defense of others. However, in general, state laws govern the use of deadly force by private sector persons, and these laws vary from state to state. For example, under New Hampshire statutes, guards may not use deadly force if they can safely retreat from the encounter. In contrast, Texas statutes allow guards to use deadly force in defense of private land or property, which includes nuclear power plants, without retreating, if such action is necessary to protect against another’s use of unlawful force. In still other states, such as Virginia and Michigan, no state statutes specifically address the issue, and the courts decide whether deadly force was appropriate in a given situation.
NRC officials believe that guards—concerned about their right to act—might second-guess, hesitate, delay, or fail to act appropriately against an attacker, thereby increasing the risk of a successful attack on the nation’s nuclear power plants. During OSRE exercises, NRC officials presented guards with various scenarios that could involve the use of deadly force. In 7 of the 80 OSRE reports we reviewed (about 9 percent), NRC found that the guards did not understand or did not properly apply its guidance on the use of deadly force.

Finally, guards at nuclear power plants do not have nationwide legal authority and clear guidance on when and how to arrest and/or detain intruders at the nation’s plants. NRC officials believe that there is a question about whether federal authority can be directly granted to private security guards who are not deputized. State laws governing this authority vary. For example, in South Carolina, private security guards’ authority to arrest and/or detain intruders on plant property is similar to local law enforcement officials’ authority. However, in most states, these guards have only the arrest authority afforded every U.S. citizen.

To enable nuclear power plants to better defend against attacks, NRC has sought federal legislation that would authorize the use of deadly force to protect the plants. That legislation has not been enacted, although legislation addressing the authority to arrest and detain intruders is currently pending.

NRC has taken several actions to respond to the heightened risk of attack following the September 11, 2001, terrorist attacks and, in April 2003, issued a new design basis threat that the commercial nuclear power plants must be prepared to defend against. However, NRC’s past methods for ensuring that plants are taking all of the appropriate defensive measures—the annual security inspections and the force-on-force exercises—had significant weaknesses. As a result, NRC’s oversight of these plants may not have provided the information necessary for NRC to ensure that the power plants were adequately defended. In particular, NRC’s past use of non-cited violations for security problems that appear to be serious was detrimental to ensuring the plants’ security because NRC did not require follow-up to ensure that a non-cited violation was corrected. Lack of follow-up reduces the likelihood that needed improvements will be made. Moreover, NRC may have overstated security levels when it provided a “meeting security objectives” rating to some plants having non-cited violations that appear to have serious security implications. NRC could not have known whether some non-cited violations, such as guards found asleep on duty or failure to physically search for metal detected by scanners, were vulnerabilities that could have been exploited. However, accepting such vulnerabilities after September 11, 2001, opens the power plants to undue risk. Furthermore, NRC may be missing opportunities to better oversee and improve security at the plants because it does not routinely collect, analyze, and disseminate information on security enhancements, problems, and solutions among the plants and within the agency. Such a mechanism may help other plants to improve their security. Similarly, the force-on-force exercises were not realistic enough to ensure the identification and correction of plants’ security vulnerabilities.
Untrained adversary teams, temporarily enhanced defenses, and rubber weapons used in past force-on-force exercises simply do not compare with simulated attack exercises using technologically advanced tools that provide realistic, real-time experience. Furthermore, NRC was not required to conduct these exercises and has done so infrequently, thereby making plants even less prepared to address an attack. In addition, in the past, exercises have not addressed the full range of the design basis threat. Finally, delays in issuing reports on the OSRE exercises may have resulted in delays in the permanent correction of known security problems.

NRC is in the process of revising both its security inspection program and its force-on-force exercise program. What these programs will consist of when they are revised is currently unknown. NRC expects its security inspection program to be restored by 2004 and will decide the future of its force-on-force program after completing its pilot program—at a date yet to be determined. Revisions of these programs provide NRC with an opportunity to use the lessons learned from the suspended programs to strengthen them and make them more relevant to the post-September 11, 2001, environment.

Until these programs are restored, NRC is relying on plants’ self-assessments and the force-on-force pilot program as its mechanisms to oversee security at the nation’s nuclear power plants. The self-assessments rely on the licensees to identify problems, which then prompts NRC to conduct security inspections. Since the inspection program was curtailed in 2001, the plants have not identified any serious security problems in their self-assessments. Therefore, it is critical for NRC to revise and restore promptly its annual security inspections and force-on-force exercises to fulfill its oversight responsibilities.

To strengthen NRC’s security inspection program, we recommend that the NRC Commissioners ensure that NRC’s revised security inspection program and force-on-force exercise program are restored promptly and require that NRC regional inspectors conduct follow-up visits to verify that corrective actions have been taken when security violations, including non-cited violations, have been identified; ensure that NRC routinely collects, analyzes, and disseminates information on security problems, solutions, and lessons learned and shares this information with all NRC regions and licensees; and make force-on-force exercises a required activity and strengthen them by conducting the exercises more frequently at each plant; using laser equipment to ensure accurate accounts of shots fired; requiring the exercises to make use of the full terrorist capabilities stated in the design basis threat, including the use of an adversary force that has been trained in terrorist tactics; continuing the practice, begun in 2000, of prohibiting licensees from temporarily increasing the number of guards defending the plant and enhancing plant defenses for force-on-force exercises, or requiring that any temporary security enhancements be officially incorporated into the licensees’ security plans; and enforcing NRC’s requirement that force-on-force exercise reports be issued within 30 to 45 days after the end of the exercise to ensure prompt correction of the problems noted.

We provided a draft of this report to NRC for its review and comment. NRC stated that our report did not provide a balanced or useful perspective of its role in ensuring security at commercial nuclear power plants.
NRC believed that our report was “of a historical nature,” focusing on NRC’s oversight of power plants before September 11, 2001, and that our report failed to reflect the changes NRC has made to its program since September 11. Furthermore, NRC commented that our characterization of non-cited violations as minimizing the significance of security problems is a serious misrepresentation. NRC said that the “anecdotal” issues noted in the draft report were “relatively minor issues” and that it treated them appropriately.

We agree that NRC has taken numerous and appropriate actions since September 11, 2001, and that additional security procedures have been, and are being, put in place to increase power plant operators’ attention to enhancing security. Our draft report had discussed many of these actions, and we have added language to the report to more fully reflect them. We note that most of these actions were advisories or requirements for the licensee to enhance plant physical security and did not relate to NRC’s oversight activities.

With respect to NRC oversight of security at the nuclear power plants, NRC has suspended the two primary elements of its oversight program, the security inspection program and the OSRE exercises, and has not yet resumed them. NRC’s oversight actions since September 11 have been interim in nature; it has conducted ad hoc inspections and some force-on-force exercises as part of a pilot program. NRC said that it plans to reinstitute the security inspection and force-on-force exercise programs in the future, but it does not now know what the revised programs will consist of. As a result, we remain convinced that it was appropriate to examine NRC’s security oversight program before September 11. In the absence of any formal post-September 11 oversight program, this was the only way to systematically assess the strengths and weaknesses of NRC’s oversight. Our recommendations are directed at strengthening the oversight programs and making NRC’s oversight more relevant to the post-September 11 environment.

In that regard, while the NRC comments reference numerous efforts and enhancements, we note that, with one exception, these actions were designed to enhance power plant security and not to improve or enhance NRC’s oversight program, which is the subject of this report. The one exception is NRC’s force-on-force evaluation program, a major element in NRC’s oversight program. In its comments, NRC stated that we failed to adequately reflect NRC’s enhanced force-on-force evaluation program, including the increased frequency and greater degree of realism of the exercises. We disagree. NRC has not yet instituted a new force-on-force program, and our report reflects NRC’s current force-on-force efforts. NRC suspended its old OSRE program after September 11, 2001, and is currently conducting pilot force-on-force exercises, which we describe in this report. NRC has not determined when a permanent program will be instituted or what it will consist of. NRC plans to use the results of the pilot exercises to help formulate a new, permanent program.

We also disagree that the “anecdotal” issues cited in the draft report were “relatively minor issues” and do not believe that the continued extensive use of non-cited violations will achieve the best oversight.
Sleeping guards, unauthorized access to protected areas, disabled alarms in the vital area, and failure to inspect visitors who set off alarms on metal detectors are all serious security problems that warrant NRC attention and oversight. NRC’s belief that it should rely on the licensees to self-identify and correct these types of problems is troubling. Instead of discounting problems that are, on their face, quite worrisome, NRC should aggressively determine the root cause of the problems, formulate corrective actions, and follow up to ensure that the approved corrective actions have been implemented and have corrected the problems. The use of non-cited violations delegates these activities and responsibilities to the licensees. NRC believes that such delegation is appropriate and that the use of non-cited violations contributes to an environment in which the licensee self-identifies and corrects problems, a behavior that NRC said it encourages. However, in the cases we cited, the delegation of responsibility for identifying and correcting security problems was not effective because all of the problems were ones that the licensees had failed to identify and that were instead found by NRC security inspectors.

Finally, NRC stated that its process requires it to review a sampling of the licensees’ corrective actions to ensure that the licensees are implementing the corrective actions. NRC failed to note, however, that the requirement cited is part of the baseline security inspection program that was suspended after September 11, 2001, and that has not been reinstated. In addition, when NRC was conducting baseline security inspections, the program required corrective action checks only every 2 years, and the sample selected for checks included all corrective actions—safety and emergency preparedness, as well as security. As a result, NRC had no assurance that any security corrective actions would be selected for follow-up.

Licensees should be involved in identifying and correcting problems. However, we believe that by delegating these functions to the licensee, NRC is abandoning its oversight responsibilities and, as a result, cannot guarantee that problems are identified and corrected.

NRC did not comment on our recommendations for reinstituting and improving its baseline inspection and force-on-force exercise programs. Nevertheless, we hope that NRC decides to implement our recommendations as it fulfills its requirement under 31 U.S.C. 720 to submit a written statement of the actions taken on our recommendations. This statement is to be submitted to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report’s release, and to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after that same date.

In addition to its overall comments and observations (see app. III), NRC provided a number of technical comments and clarifications, which we incorporated in this report as appropriate.

As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Chairman of the Nuclear Regulatory Commission, and the Director of the Office of Management and Budget. We will make copies available to others on request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841 or contact me at [email protected]. Key contributors to this report are listed in appendix IV.

Our objectives were to review (1) the effectiveness of the Nuclear Regulatory Commission’s (NRC) inspection program to oversee security at commercial nuclear power plants and (2) legal challenges currently affecting physical security at the power plants. To meet these objectives, we visited NRC’s Headquarters in Rockville, Maryland, and Region I in King of Prussia, Pennsylvania; obtained NRC advisories, orders, regulations, Operational Safeguards Response Evaluation (OSRE) reports, and annual security inspection reports; and interviewed officials who were knowledgeable about NRC’s physical security requirements for nuclear power plants. We also visited the Limerick, Oyster Creek, and Calvert Cliffs power plants; obtained licensee documents and requirements regarding their security procedures; and interviewed licensee officials who were knowledgeable about the facilities’ security plans, procedures, and NRC’s nuclear power plant physical security regulations. During our visits, we observed the security measures that were put in place to reflect NRC’s advisories and orders since the terrorist attacks of September 11, 2001.

To determine the extent of NRC’s oversight of nuclear power plant security, we held discussions with NRC Region I security inspectors and officials in NRC’s Office of Nuclear Security and Incident Response, Office of General Counsel, and Office of the Executive Director for Operations. We also held discussions with licensee officials at the Limerick, Oyster Creek, and Calvert Cliffs power plants on their security procedures and mechanisms and on their interaction with NRC security inspectors. In addition, we collected information on nuclear security from all NRC regional security offices. To determine how NRC assesses the quality of daily security procedures and mechanisms against the licensees’ security plans, we obtained and reviewed all 49 NRC inspection reports that contained a finding that was judged to be of moderate significance or higher. We also had discussions with officials in NRC’s Office of Nuclear Security and Incident Response regarding the methods for conducting and reporting annual inspections and in NRC’s Office of Enforcement regarding how security violations are administered.

To determine how NRC tests licensees against the design basis threat, we interviewed NRC officials to understand both the process for OSRE exercises and report writing and the follow-up procedures for any concerns found during an OSRE exercise. We also examined all OSRE reports from each NRC licensee. We designed a data collection instrument to organize specific elements that were extracted from 80 OSRE reports. Two GAO analysts followed procedures to ensure the completeness of all data collection instrument entries. Data from the instrument were entered into a spreadsheet file for analysis, and the entries were verified to detect potential coding and keying errors (one illustrative approach to such verification is sketched below). We also held discussions with Department of Energy officials to (1) determine how they conduct force-on-force exercises at the department’s nuclear facilities and (2) determine whether there are any promising practices that might be applied to NRC’s OSRE program.
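The report does not specify the verification method used; one common approach to detecting keying errors is double keying, in which two analysts enter the same records independently and a program flags any disagreements for rechecking against the source documents. The following Python sketch illustrates that approach; the file names, key field, and layout are hypothetical:

```python
import csv

def compare_double_keyed(file_a, file_b, key_field="osre_id"):
    """Compare two independently keyed copies of the same records and
    list every field on which the two copies disagree."""
    def load(path):
        with open(path, newline="") as f:
            return {row[key_field]: row for row in csv.DictReader(f)}

    copy_a, copy_b = load(file_a), load(file_b)
    discrepancies = []
    for key in sorted(set(copy_a) | set(copy_b)):
        if key not in copy_a or key not in copy_b:
            discrepancies.append((key, "(record missing in one copy)", None, None))
            continue
        for field, value in copy_a[key].items():
            if value != copy_b[key].get(field):
                discrepancies.append((key, field, value, copy_b[key].get(field)))
    return discrepancies

# Each flagged entry would be resolved by returning to the source report.
for key, field, v1, v2 in compare_double_keyed("entry_pass1.csv", "entry_pass2.csv"):
    print(f"report {key}: {field} -> {v1!r} vs. {v2!r}")
```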
To determine NRC’s views on federal and state laws and on NRC institutional policies (i.e., regarding the use of automatic weapons, the authority to use deadly force, and the authority to arrest and detain) that could impact a licensee’s ability to adequately secure commercial nuclear power plants, we discussed these issues with officials from NRC’s Office of Nuclear Security and Incident Response and Office of General Counsel. Additionally, we discussed these same issues with industry officials who were specifically knowledgeable about these areas. We examined existing federal and state laws, and we also examined federal and state bills that have been proposed or are pending legislative passage.

In addition to those named above, Jill Ann Roth Edelson, Kevin L. Jackson, William Lanouette, J. Addison Ricks, Carol Herrnstadt Shulman, and Barbara R. Timmerman made key contributions to this report.
The September 11, 2001, terrorist attacks intensified the nation's focus on national preparedness and homeland security. Among possible terrorist targets are the nation's nuclear power plants--104 facilities containing radioactive fuel and waste. The Nuclear Regulatory Commission (NRC) oversees plant security through an inspection program designed to verify the plants' compliance with security requirements. As part of that program, NRC conducted annual security inspections of plants and force-on-force exercises to test plant security against a simulated terrorist attack. GAO was asked to review (1) the effectiveness of NRC's security inspection program and (2) legal challenges affecting power plant security. Currently, NRC is reevaluating its inspection program. We did not assess the adequacy of security at the individual plants; rather, our focus was on NRC's oversight and regulation of plant security.

NRC has taken numerous actions to respond to the heightened risk of terrorist attack, including interacting with the Department of Homeland Security and issuing orders designed to increase security and improve plant defensive barriers. However, three aspects of its security inspection program reduced NRC's effectiveness in overseeing security at commercial nuclear power plants.

First, NRC inspectors often used a process that minimized the significance of security problems found in annual inspections by classifying them as "non-cited violations" if the problem had not been identified frequently in the past or if the problem had no direct, immediate, adverse consequences at the time it was identified. Non-cited violations do not require a written response from the licensee and do not require NRC inspectors to verify that the problem has been corrected. For example, guards at one plant failed to physically search several individuals for metal objects after a walk-through detector and a hand-held scanner detected metal objects in their clothing. The unchecked individuals were then allowed unescorted access throughout the plant's protected area. By making extensive use of non-cited violations for serious problems, NRC may overstate the level of security at a power plant and reduce the likelihood that needed improvements are made.

Second, NRC does not have a routine, centralized process for collecting, analyzing, and disseminating security inspection results to identify problems that may be common to plants or to provide lessons learned in resolving security problems. Such a mechanism may help plants improve their security.

Third, although NRC's force-on-force exercises can demonstrate how well a nuclear plant might defend against a real-life threat, several weaknesses in how NRC conducted these exercises limited their usefulness. Weaknesses included using (1) more personnel to defend the plant during these exercises than during a normal day, (2) attacking forces that are not trained in terrorist tactics, and (3) unrealistic weapons (rubber guns) that do not simulate actual gunfire. Furthermore, NRC has made only limited use of some available improvements that would make force-on-force exercises more realistic and provide a more useful learning experience.

Even if NRC strengthens its inspection program, commercial nuclear power plants face legal challenges in ensuring plant security. First, federal law generally prohibits guards at these plants from using automatic weapons, although terrorists are likely to have them.
As a result, guards at commercial nuclear power plants could be at a disadvantage in firepower, if attacked. Second, state laws vary regarding the permissible use of deadly force and the authority to arrest and detain intruders, and guards are unsure about the extent of their authorities and may hesitate or fail to act if the plant is attacked.
Traffic safety data are the primary source of knowledge about crashes and how they are related to the traffic safety environment, human behavior, and vehicle performance. Most states have developed traffic safety data systems and manage these data from the initial reporting of crashes by law enforcement officers through data entry and analysis. Figure 1, which is based on NHTSA’s Traffic Records Highway Safety Program Advisory, depicts a model state traffic safety data system, including the collection and submission of data, the processing of these data into state safety data systems, and the potential uses for quality crash information. These data are often not housed in a single file or on just one computer system; however, users should have access to crash information in a useful form and of sufficient quality to support the intended use.

At the state level, state agencies use traffic safety data to make highway safety planning decisions and to evaluate the effectiveness of programs, among other uses. In those states where quality crash data on a range of crashes are not available, officials use federal data such as that from NHTSA’s Fatality Analysis Reporting System (FARS) to make programming decisions. FARS data, while useful for some purposes, are limited because they only include information about fatal crashes, thus preventing decision making based on a range of crash severity or the entirety of a state’s crash situation.

At the federal level, NHTSA provides guidelines, recommendations, and technical assistance to help states improve their crash data systems and is responsible for overseeing state highway safety programs. Under TEA-21, NHTSA awarded $935.6 million in highway safety incentive grants to improve safety. In 2003, NHTSA made improving traffic safety data one of the agency’s highest priorities.

Since the early 1980s, NHTSA has been obtaining crash data files from states, which in turn have been deriving the data from police crash reports. These statewide crash data files are referred to as the SDS program. Participation by states is voluntary, with 27 states currently participating. These data include some of the basic information for the analyses and data collection programs that support the NHTSA mission of identifying and monitoring traffic safety problems.

One of NHTSA’s grant programs was specifically aimed at improving traffic safety data. Administered through its 10 regional offices around the country, the program provided about $36 million to states for improving their crash data systems. This grant program was authorized under TEA-21 and was known as the “411 grant program” after the relevant section of the U.S. Code. NHTSA administers a number of other grant programs besides the 411 grant program; however, it was the only incentive grant program that was specifically directed at improving state traffic safety data systems.

The grant program required states to establish a foundation for improving their traffic safety data systems by first completing three activities:

Establish a coordinating committee of stakeholders to help guide and make decisions about traffic safety data: The committee would ideally include stakeholders from agencies that manage the various data files (e.g., representatives from the state department of transportation responsible for roadway information, and from the state department of motor vehicles responsible for the management of vehicle licensing information).
Conduct an assessment of the current system: The assessment would evaluate a state’s system by identifying strengths and weaknesses and providing a baseline from which the state could develop its strategic plan to address data system needs.

Develop a strategic plan that prioritizes traffic safety data system needs and identifies goals: The strategic plan is to provide the “map” specifying which activities should be implemented in order to achieve these goals. As with the assessment, the focal point for developing the strategic plan, if a state did not already have one, would be the coordinating committee.

The level of funding available to a state depended on whether the state had already put these requirements in place. Additionally, states were required to contribute matching funds of between 25 and 75 percent, depending on the year of the grant. Three types of grants were awarded:

A state received a start-up grant if it had none of the three requirements in place. This was a one-time grant of $25,000.

A state received an initiation grant if it had established a coordinating committee, had completed or updated an assessment within the previous 5 years, and had begun to develop a strategic plan. This grant was a one-time grant of $125,000, if funds were available.

A state received an implementation grant if it had all three requirements in place and was positioned to make specific improvements as indicated in its strategic plan. This grant was at least $250,000 in the first year and $225,000 in subsequent years, if funds were available.

The Congress has extended TEA-21 until May 2005, and new House and Senate bills will likely be introduced during the next congressional session. The most recent House and Senate bills under consideration, which were not passed in the 2004 session, included proposals to reauthorize the 411 grant program in a similar, but not identical, form to the original program. The proposals included funding up to $270 million, which is over six times the original funding amount. They also included (1) additional requirements for documentation from states describing how grant funds would be used to address needs and goals in state strategic plans and (2) a requirement that states demonstrate measurable progress toward achieving their goals. The proposals, however, did not include one of the original program requirements—that states have an assessment of their traffic safety data systems that was no more than 5 years old when they applied for the grant.

The 9 states we examined in detail varied considerably in the extent to which their traffic safety data systems met NHTSA’s recommended criteria for the quality of crash information. NHTSA’s six criteria (shown in table 1 below, along with an explanation of each criterion’s significance) appear in the agency’s Traffic Records Highway Safety Program Advisory, the guide used by NHTSA when it carries out traffic records assessments at the request of state officials. These assessments are a technical assistance tool offered to state officials to document state traffic safety data activities, note strengths and accomplishments, and offer suggestions for improvement. In addition, NHTSA released the report Initiatives to Address Improvement of Traffic Safety Data in July 2004, which emphasized these data quality criteria and provided recommendations to states.
We examined all six criteria for the 9 case-study states, and our review of 17 states that participated in NHTSA’s SDS program provided additional information for three of these six criteria. None of the 9 states in our case-study review met all six criteria, and most had opportunities for improvement in many of the criteria. The sections below discuss each criterion.

Data processing times for the 9 states ranged from less than 1 month in 2 states to 18 months or more in 2 others. For example, to develop their 2005 highway safety plans during 2004, 4 of the 9 states used data from 2000, 2001, or 2002, and the remaining 5 states used 2003 data. (See fig. 2.) For 6 of the 9 states, three factors accounted for their not meeting the timeliness criterion: slow data entry, data integration delays, and lengthy data edits. As a result, the state safety plans are unable to take recent crash trends into account in these states. Generally, those states submitting data electronically from local law enforcement agencies to the state traffic safety data system had much faster entry of crash information into centralized databases. In contrast, states that processed reports manually by keying in information from paper forms at the state level had longer data entry time frames. The availability of data was also sometimes delayed by inefficient data completion processes. In states where this is not done automatically, crash data and location information are often manually entered into the traffic safety data system. In addition, checks for accuracy also delayed data availability. For example, 1 of the states that had to use data from 2000 to develop its highway safety plan had used electronic methods to enter more recent data, but detailed edit checks delayed the data’s release considerably.

Seven of the 9 states we visited had crash forms that could be used to collect data across all jurisdictions within the state, helping to ensure that data collected within the state are consistent. However, no state had forms that met all of the consistency criteria recommended in the Model Minimum Uniform Crash Criteria (MMUCC) guidelines that were developed collaboratively by state and federal authorities. These guidelines provide a recommended minimum set of data elements to be collected for each crash, including a definition, attributes, and the rationale for collecting each element. While variation in the crash data collected by states can be attributed to varying information needs, guidelines help to improve the reliability of information collected and also assist in state-to-state comparisons and national analyses. The variation between states can be seen among the 17 states we analyzed that contribute to NHTSA’s SDS program. For example, the MMUCC guidelines recommend reporting on whether alcohol was a factor in the crash by indicating the presence or absence of an alcohol test, the type of test administered, and the test results. However, several of the states collected information on impaired driving without specifying the presence of an alcohol test, the test type, or the test result, thereby making it difficult to determine whether alcohol use contributed to the crash. In addition, the states were not uniform in collecting and reporting the VIN, another element recommended in the MMUCC. A VIN is a unique alphanumeric identifier that is applied to each vehicle by the manufacturer (a check-digit validation of the kind sketched below can flag many mistyped VINs).
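Because a VIN is a 17-character alphanumeric string, it is prone to transcription errors when copied at a crash scene or keyed from a paper form. North American VINs embed a check digit in position 9 that a data system can recompute as a field-level edit check. The Python sketch below implements the standard check-digit computation; it is offered purely as an illustration and does not depict any particular state’s edit-check software:

```python
# Standard North American VIN check-digit computation (49 CFR Part 565).
# Letters are transliterated to numbers; I, O, and Q are not legal VIN
# characters. Position 9 (weight 0) holds the check digit itself.
TRANSLITERATION = {c: v for v, c in enumerate("ABCDEFGH", start=1)}
TRANSLITERATION.update(zip("JKLMN", (1, 2, 3, 4, 5)))
TRANSLITERATION.update({"P": 7, "R": 9})
TRANSLITERATION.update(zip("STUVWXYZ", (2, 3, 4, 5, 6, 7, 8, 9)))
TRANSLITERATION.update({str(d): d for d in range(10)})

WEIGHTS = (8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2)

def vin_check_digit_ok(vin: str) -> bool:
    """True if vin is 17 legal characters and its check digit verifies."""
    vin = vin.strip().upper()
    if len(vin) != 17 or any(c not in TRANSLITERATION for c in vin):
        return False
    remainder = sum(TRANSLITERATION[c] * w for c, w in zip(vin, WEIGHTS)) % 11
    return vin[8] == ("X" if remainder == 10 else str(remainder))

# A commonly cited test VIN; its position-9 check digit is X.
assert vin_check_digit_ok("1M8GDM9AXKP042788")
```

A validator like this cannot prove a VIN correct, but it catches single-character typos and many transpositions, which is consistent with the finding later in this report that VINs were among the elements most often coded as invalid.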
The VIN allows for an effective evaluation of vehicle design characteristics such as occupant protection systems. As figure 3 shows, in no year between 1998 and 2002 were VIN data available from all 17 states. For example, although every state had submitted crash data for 1998 and 1999, crash data for 6 of the 17 states did not include VINs.

The lack of consistency limits the use of state crash data for most nationwide analyses. For example, in a recent National Center for Statistics and Analysis (NCSA) Research Note on child safety campaigns, only 3 states met the criteria to be included in the analysis, not nearly enough to statistically represent the nation. The criteria necessary for inclusion in the report included collecting data on all vehicle occupants, including uninjured occupants, and VINs for the relevant years (1995-2001). If state systems matched the MMUCC, they would include this information. Similarly, only 5 states qualified for use in an NCSA analysis of braking performance as part of the New Car Assessment Program because only these states collected VINs and had the necessary variables for the years involved in the study.

There is evidence that as states redesign their crash forms, they are following the MMUCC guidelines more closely. Remaining differences from the suggested guidelines often reflect the needs of individual states. Among the 9 states we visited, 5 had redesigned their crash forms since 1997. All 5 used the guidelines as a baseline, although each of them tailored the form to a degree. One state, for example, collected no data about the use of seat belts for uninjured passengers, while another chose to collect additional state-specific attributes, such as describing snow conditions (e.g., blowing or drifting). Among the remaining 4 states we visited, 2 states are currently using the MMUCC guidelines to redesign their forms.

One factor affecting the degree of completeness is state reporting thresholds—that is, standards that local jurisdictions use to determine whether crash data should be reported for a particular crash. These thresholds include such things as the presence of fatalities or injuries or the extent of property damage. Although all 9 of the states we visited had reporting thresholds that included fatalities and injuries, the thresholds for property damage varied widely. For example, some states set the property damage threshold at $1,000, while 1 state did not require reporting of property-damage-only crashes. In addition, it was not possible to determine the extent to which all reportable crashes had been included in the traffic safety data system. Officer discretion may play a role. For example, capturing complete documentation of a crash event is often a low priority when traffic safety data are not perceived as relevant to the work of the law enforcement officer or other public safety provider. In 1 state, for example, the police department of a major metropolitan area only reported crashes involving severe injuries or fatalities, although the state’s reporting threshold included damage of $1,000 or more.

Variation in thresholds among states is not the only factor that affects the completeness of crash data. For the crash information that does make it into the state database, there are often gaps in the data, as we learned from evaluating the records of 17 states participating in NHTSA’s SDS program. For 5 of these states, we analyzed data coded “unknown” and “missing” for 24 data elements; the sketch below illustrates the kind of tabulation such an analysis involves.
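A minimal Python sketch of such a tabulation follows. The file name, element names, and the particular codes treated as unknown or missing are hypothetical; each state’s coding manual defines its own conventions:

```python
import csv
from collections import Counter

# Codes treated as "unknown" or "missing" differ by state coding manual;
# these values, like the file and element names, are illustrative only.
UNKNOWN_CODES = {"", "UNKNOWN", "UNK", "99", "MISSING"}

def unknown_rates(path, elements):
    """Percentage of records coded unknown or missing, per data element."""
    totals, unknown = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for element in elements:
                totals[element] += 1
                if row.get(element, "").strip().upper() in UNKNOWN_CODES:
                    unknown[element] += 1
    return {e: 100.0 * unknown[e] / totals[e] for e in elements if totals[e]}

rates = unknown_rates("state_crash_file.csv",
                      ["vin", "restraint_use", "alcohol_test_result"])
for element, pct in rates.items():
    print(f"{element}: {pct:.1f}% unknown or missing")
```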
The percentage of data coded as unknown or missing was high for several key data elements, such as the VIN; the results of alcohol or drug testing; and the use of seat belts, child car seats, and other restraint devices. For example, the percentage of data coded as unknown or missing for the use of seat belts and other restraints ranged between 1.5 and 54.8 percent for 4 of the 5 states. Such data can be inherently difficult to collect. For example, when officers arrive at the scene of a crash, drivers and passengers may already be outside their vehicles, making it impossible to know if they were wearing seat belts. Asked if they were wearing a seat belt, those involved in the crash may not tell the truth, especially if the state has a law mandating seat belt use.

Six of the 9 states we visited made use of quality control methods to help ensure that individual reports were accurate when they were submitted to the traffic safety data system. Of these 6 states, for example, 4 linked crash reports to other traffic safety data, including driver or vehicle files, to verify or populate information on crash reporting forms. Table 2 contains examples of other tools and checks that the states used to help ensure accuracy. Four of the 9 states did quality checks at the aggregate level—that is, they analyzed crash reports in batches to identify abnormalities in reporting that may not be apparent in individual reports. Of these 4 states, for example, 1 had staff analyze the reports to identify invalid entries and data miscodings, while another conducted edit checks each year to check for invalid vehicle types or other problems. Such aggregate-level analysis can be useful to identify systematic problems in data collection that may lead to erroneous investigation or false conclusions, such as when officers report one type of collision as another. For instance, officers in 1 state were found to be characterizing some car-into-tree crashes as head-on collisions. Once identified, such data collection problems can often be resolved through officer training.

To test data accuracy, we analyzed crash data submitted by the 17 states to NHTSA and found relatively few instances of data that had been coded as “invalid”—generally 3 percent or less. Data classified as invalid were most often for elements more likely to be transposed or miscopied, such as VINs. However, because we could not observe crash-scene reporting and did not examine or verify information on source documents (such as police accident reports), we cannot assume that the other 97 percent of data were accurately reported and entered correctly. Invalid data entries are a good starting point for measuring the accuracy of a data system, but they are only one indication of the accuracy of state traffic safety data.

All 9 states produced crash information summaries, although some were based on data that were several years old—a factor that limited their usefulness. In addition, 8 states provided law enforcement agencies or other authorized users with access to crash information within 6 months of crashes. Such access was often via the Internet, and data analysis tools were typically limited to a certain number of preestablished data reports. Thus, any in-depth analysis was limited to the tools available online. Three states had analysts available to provide information or complete data queries upon request.
In another state, which had the capability to conduct data collection electronically, local law enforcement agencies had access to analysis tools to use with their own data. If users wanted direct access to completed data for more detailed analysis, they often had to wait somewhat longer, given the need for additional data entry or the completion of accuracy checks. In 1 state, for example, there was a 2- to 3-month delay due to the transfer of preliminary crash data from the state police database to the state department of transportation, where location information was added to complete the data.

Only 1 of the 9 states integrated the full array of potential databases—that is, linked the crash file with all five of the files typically or potentially available in various state agencies: driver, vehicle, roadway, citation/conviction, and medical outcome. All 9 of the states we visited integrated crash information with roadway files to some degree, but only a few integrated these data with driver or vehicle licensing files, or with the conviction files housed in state court systems. (See table 3.) In addition, 7 of the 9 states participated in NHTSA’s Crash Outcome Data Evaluation System (CODES) program, which links crash data with medical information such as emergency and hospital discharge data, trauma registries, and death certificates.

Technological challenges and the lack of coordination among state agencies often posed hurdles to the integration of state data. In 1 state, for example, crash files were sent from the central traffic records database kept by the state department of safety to the state department of transportation for manual entry of location information from the roadway file. Once the state department of transportation completed these records, however, there was no mechanism to export that information back into the central database. Also, in some states data integration was limited because data were not processed with integration in mind. In 1 state, for example, state department of transportation officials noted that the new crash system had been developed for state police use, and that efforts were still under way to develop an interface to bring crash data into the department’s system. In contrast, an official in another state noted that housing several of the agencies involved in the traffic safety data system—including those responsible for the driver, vehicle, and roadway files—within the state department of transportation had facilitated the direct sharing of information and the full integration of data.

In support of these quality criteria and improved traffic safety data systems, NHTSA released a report in July 2004 detailing steps that could be taken by federal and state stakeholders to improve traffic safety data. The report, Initiatives to Address Improvement of Traffic Safety Data, was issued by NHTSA and drafted by an Integrated Project Team that included representatives from NHTSA, the Bureau of Transportation Statistics, the Federal Highway Administration, and the Federal Motor Carrier Safety Administration. The report articulates the direction and steps needed for traffic safety data to be improved and made more useful to data users. It makes a number of recommendations in five areas: improving coordination and leadership, improving data quality and availability, encouraging states to move to electronic data capture and processing, creating greater uniformity in data elements, and facilitating data use and access.
Along with these recommendations, the report also outlines initiatives that NHTSA and other stakeholders should implement. For example, under the area of data quality and availability, the report indicates that states—under the guidance of their coordinating committees—should encourage compliance by law enforcement with state regulations for obtaining blood-alcohol concentration and drug use information and should also strive to capture exact crash locations (using latitude and longitude measures) in their traffic safety data systems.

States reported carrying out a range of activities with funding made available under the 411 grant program. However, relatively little is known about the extent to which they made progress in improving their traffic safety data systems during the years of the grant. When applying for follow-on grants, states were required to report to NHTSA’s regional offices on the progress they were making in improving their traffic safety data systems during the prior year. However, the required documents filed with NHTSA yielded little or no information on what states had achieved. We were able to discern from the 8 states we reviewed in detail that those states had indeed used their grants for a variety of projects and showed varying degrees of progress. Regardless of whether states concentrated their grant funds on one project or funded a number of activities, the level of progress was influenced by the effectiveness of state coordinating committees.

Forty-eight states applied for and received grant awards under the 411 grant program. As table 4 shows, most states (29) began their participation at the implementation grant level—that is, most of them already had the three basic requirements in place, including a coordinating committee, an assessment of their data system, and a strategic plan for improvement. Those states receiving start-up or initiation grants were expected to put the three requirements in place before beginning specific data-related improvement projects. By the 4th year of the grant, 44 states were still participating, and all but 1 was at the implementation grant level. The 4 states that were no longer participating by the 4th year reported that they discontinued participation mainly because they could not meet grant requirements.

All three basic program requirements helped states initiate or develop improvements in their traffic safety data systems. By meeting these grant requirements, states were able to “jump start” their efforts and raise the importance of improving state traffic safety data systems. The assessments, which were required to be conducted within 5 years of the initial grant application, provided benchmarks and status reports to NHTSA and state officials and included information on how well state systems fared in regard to NHTSA’s six recommended quality criteria. Officials with whom we spoke generally agreed that these assessments were excellent tools for systematically identifying needed state improvements. Similarly, strategic plans generally appeared to be based on the state assessment findings and helped states identify and prioritize their future efforts. The establishment of the traffic records coordinating committees to guide these efforts was also key to initiating improvements, since traffic safety data systems involve many departments whose cooperation is essential in developing and implementing improvements.
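The grant-level progression summarized in table 4 follows the simple eligibility rule described earlier in this report. The Python sketch below restates that rule; the function and argument names are ours, and states that met only some requirements are treated here as start-up, a simplification of the program’s case-by-case determinations:

```python
def grant_level(has_committee: bool, has_recent_assessment: bool,
                has_strategic_plan: bool, plan_in_development: bool = False) -> str:
    """Classify a state's 411 grant level from the three basic requirements."""
    if has_committee and has_recent_assessment and has_strategic_plan:
        return "implementation"  # at least $250,000 in year 1, $225,000 after
    if has_committee and has_recent_assessment and plan_in_development:
        return "initiation"      # one-time grant of $125,000, funds permitting
    return "start-up"            # one-time grant of $25,000

# A state with all three requirements in place entered at the
# implementation level, as 29 of the 48 participating states did.
assert grant_level(True, True, True) == "implementation"
assert grant_level(False, False, False) == "start-up"
```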
Documentation of state progress was limited and of little use in assessing the effect of traffic safety data improvement efforts. To qualify for grants beyond the first year, each state had to (1) certify that it had an active coordinating committee and (2) provide documentation of its efforts through updated strategic plans, separate progress reports, or highway safety annual evaluation reports. We reviewed these documents when available and found that they described a variety of activities, ranging from completing the basic requirements (such as conducting assessments and developing strategic plans) to identifying specific projects (such as outsourcing data entry services and redesigning crash forms). Figure 4 lists examples of these types of reported activities.

The grant documentation NHTSA received provided few details on the quality of the state efforts. For example, although states certified the existence of a coordinating committee, they were not required to report on what the committee did or how well it functioned. Also, while states for the most part identified efforts to improve their data systems, we found it difficult to assess their progress because the reports lacked sufficient detail. For example:

One state reported using grant funds on alcohol testing devices to collect more alcohol impairment data on drivers. However, the progress reports did not indicate who received these devices and how data collection was improved.

One state used funds to hire data entry staff to reduce the backlog of old crash reports. However, the state provided no indication of whether the increase in staff had reduced the backlog and how any reduction in the backlog could be sustained in the longer term.

One state reported using funds on multimillion-dollar information technology projects, but it is unclear how the grant funds were used in these projects.

Our visits to 8 of the states that participated in the 411 grant program yielded additional information and documentation about their grant activities, the nature of their efforts, and the extent of progress made. These states expended funds on a variety of activities, ranging from completing the basic requirements of assessments and strategic plans to implementing specific projects. As figure 5 shows, in the aggregate, these activities translated into two main types of expenditures—equipment, such as computer hardware and software, and consultant services, such as technical assistance in designing new data systems.

The 8 states either concentrated funding on one large project or used funding on a variety of activities, including data entry, salaries, training, and travel. Four of the 8 states focused on a single project related to improving their data systems, mainly by enhancing electronic reporting. One state reengineered its files to better integrate them with other data systems; 1 piloted an electronic crash data collection tool; and the remaining 2 created new electronic data systems, which were upgrades from their previous manual systems. These states also improved the tools used by law enforcement officers to input data into their crash systems, such as software for mapping and graphing traffic crashes or laptop computers for patrol cars so that officers could collect and transmit crash data electronically to statewide repositories. The remaining 4 states used funding on multiple activities, such as obtaining technical support, adding capability for more data entry, or attending conferences.
Some also conducted pilot projects. For example, 1 state developed a project that enabled electronic uploads of traffic citation data from local agencies to the state department of motor vehicles. According to state officials, this project helped considerably with both the timeliness and the completeness of conviction information uploaded to driver files. In another example, a state used funding to pilot a project to capture crash data electronically.

States made improvements under both the single- and multiple-project approaches. One state that focused on a single project, for example, developed a new statewide electronic crash system that officials said had improved data timeliness and completeness. Similarly, of the states that spread funding among multiple activities, 1 used funding for a data project on driver convictions—paying the salaries of traffic records staff and hiring consultants to map crashes to identify roadway issues. As a result, the quality and completeness of crash data improved overall, according to a state official.

One factor that affected state progress was the relative effectiveness of the state's coordinating committee. In states where the coordinating committee did not actively engage all stakeholders or where its authority was limited, projects did not fully address system needs. For example, 1 state established a coordinating committee that included few stakeholders outside the state police, and this committee decided to concentrate funding on a new electronic crash data system. The new system, acknowledged by many stakeholders as improving the timeliness and completeness of crash data, gave the state police a useful tool for allocating resources and reporting on crashes. According to officials at the state department of transportation, however, the improved crash information did little to help the state use crash data to identify unsafe roadways because the department was not fully engaged in the coordinating committee's process.

Similarly, in another state, the coordinating committee lacked the authority needed to fully implement its efforts. The committee created two subcommittees—a technical committee and an executive committee. Although the executive committee was made up of higher-level managers from various agencies, the coordinating committee did not have the legislative authority to compel agencies to participate in the process or even to use the newly created statewide crash data system. To date, the state does not have all key stakeholders participating in the process and continues to have difficulty persuading its largest municipality to use the newly developed statewide electronic reporting system. As a result, that municipality lags behind other communities in having its crash information entered into the state crash system. In contrast, another state's coordinating committee had the authority to approve or reject proposals for data system improvements as well as funding. This state was able to complete several agreed-upon projects, including an electronic driver citation program, which improved the completeness and timeliness of the state's crash data.

NHTSA did not adopt adequate regulations or guidelines to ensure that states receiving 411 grants submitted accurate and complete information on the progress they were making to improve their traffic safety data systems.
In addition, the agency did not have an effective process for monitoring progress and ensuring that grant monies were being spent as intended. We found some instances in which states did not report their progress accurately. NHTSA, while beginning to take some actions to strengthen program oversight, must be more proactive in developing an effective means of holding states accountable under this program.

As discussed earlier, state documentation of progress often contained too little detail to determine what was being accomplished with activities funded by program grants. The reasons for this lack of information, in our view, were NHTSA's limited regulatory requirements and its inconsistent guidance about what information states should submit. Regulations for the 411 grant program required states to submit an updated strategic plan or a progress report but did not specify how progress should be reported. Further, NHTSA's regulations required states to report on progress as part of their 411 grant application, which in effect meant that states did not have to report specifically on 411 activities after fiscal year 2001. According to NHTSA regulations, after fiscal year 2001 states were to include information on progress in their highway safety plans and annual evaluation reports, which are part of the reporting for all of NHTSA's highway safety grants. However, our analysis of these documents found that they lacked the detail needed to adequately assess state activities undertaken with 411 funds. Further, while NHTSA officials told us they also informally obtained information about progress after fiscal year 2001, the available information about what the activities actually accomplished was limited. These limitations in the information regarding states' activities were particularly significant given that states spent most of their grant funds after fiscal year 2001.

NHTSA regional offices supplemented the regulatory requirements with their own guidance to states, but the guidance varied greatly from region to region. Some of the regional offices said that their contact with states about these requirements was informal and that their primary contact with states (1) was over the telephone or by e-mail and (2) generally involved technical assistance, such as training or referring states to existing guidelines. Other regional office staff said they had additional contact with states through participation in meetings of state coordinating committees, where they were able to provide additional assistance. However, we found that this participation occurred most often for states located near NHTSA regional offices. Few regional offices provided written guidance to states with specific direction on what to include in their progress reports. For the regions that did so, the requested information included documentation indicating how states intended to use the current year's grant funds, a list of projects implemented in the past fiscal year, a brief description of activities completed, an account of problems encountered, and the status of allocated funds. Without consistent and clear requirements and guidance on the content of progress reports, states were left to their own devices.
We found that even in regions where NHTSA officials outlined the information that should be included in the progress reports, states did not necessarily provide the level of information NHTSA needed to adequately track state progress. For example, in 1 region, states were to provide NHTSA with documentation that included a list of projects and a description of progress made. However, 1 state in that region did not provide the list of completed projects; it provided only a brief description of projects completed during 1 of the 4 years of the grant. We also found wide variation in how states reported their activities. For example:

Some states provided brief descriptions of the activities completed or under way, while others did not.

States that provided brief descriptions of their activities did not always include the same information. For example, some states indicated how they intended to use the current grant funds but did not list projects implemented in the past year.

Some states did not indicate the status of their allocated funds for ongoing activities.

None of the states in our review indicated problems encountered in implementing projects or activities.

Under the 411 grant program, NHTSA's oversight process for monitoring state progress and ensuring that funds were spent in line with program intent was limited. In fact, NHTSA was unable to provide copies of many of the documents that states were required to submit to qualify for the 411 grant program. We requested these documents beginning in February 2004, and NHTSA was able to provide complete documentation for only half of the states participating in the program. When we visited 8 states that participated in the program, we were able to compare expenditure reports obtained from the states with the activities reported to NHTSA. We found instances in which the documentation of state-reported activities provided by NHTSA did not match the information provided directly to us by the states:

In documentation submitted to NHTSA, 1 state reported using grant funds on alcohol breath test devices. However, documents available at the state level indicated that nearly all of the funds were expended on a single project to redevelop a crash data system. Officials we spoke with also indicated that the money had gone toward redeveloping the data system.

In a report to NHTSA, 1 state we visited reported undertaking four projects, but we found that two of them were actually funded by a different federal grant.

The degree to which NHTSA monitored state 411-funded activities was difficult to determine. NHTSA officials told us that they were not required to review state 411-funded activities in detail. A few regional office officials told us that they verified state-reported activities by linking them to objectives identified in state strategic plans; however, no documentation of these reviews was provided.

NHTSA has taken several steps to improve its oversight and assist states in improving their traffic safety data systems; however, more effort is needed. As we were completing our work, NHTSA released a report, Initiatives to Address Improvement of Traffic Safety Data, that provides the status of data systems in five areas: coordination and leadership, improving data quality and availability, encouraging states to move to electronic capture and processing, creating greater uniformity in data elements, and facilitating data use and access.
It also provides recommendations and initiatives in support of NHTSA's efforts to improve state traffic safety data systems. Although the report outlines (1) steps to be taken, (2) stakeholder responsibilities for each recommendation, and (3) the general outcomes expected, the extent to which actions will occur as a result of the report is unclear. The report is limited to a description of conditions and needs for traffic safety data improvements and does not include an implementation plan with milestones or timelines. The report acknowledges that, due to limited funding, NHTSA will focus primarily on recommendations that are feasible given current resources. According to NHTSA, the report was issued as a fact-finding status report and, therefore, no timelines or milestones were included. However, beginning in October 2004, a newly created National Traffic Records Coordinating Committee has been developing an implementation plan for the goals identified in the report.

NHTSA also recently enhanced its oversight tools for all safety grants. It has mandated management reviews every 3 years and has expanded its existing regional planning documents, which covered occupant protection and impaired driving, to cover three additional areas, including traffic safety data. The first of these regional action plans aimed at data improvements are being initiated in fiscal year 2005 and include goals, objectives, and milestones. Mandating management reviews that encompass the broad array of grant programs every 3 years is an improvement over the inconsistent application of these reviews in the past. Also, by establishing traffic safety data improvements as part of the regional action plans, NHTSA will have more uniform tracking of state data improvements and better information on state progress. While these newly initiated efforts are positive steps toward improving oversight, it is too soon to tell how effective they will be for monitoring and ensuring accountability under the 411 grant program, should the Congress choose to reauthorize it.

NHTSA's oversight of the 411 grant program may be strengthened under reauthorized legislation. Proposed reauthorization bills considered by the Congress in 2004 included additional requirements that states (1) demonstrate measurable progress toward achieving goals in their strategic plans and (2) specify how they will use grant funds. These additional provisions would be important steps in addressing the vague reporting requirements of the current program and would help in addressing congressional and other inquiries about what the program is accomplishing. As drafted, however, the proposed bills omitted one requirement that will be important in tracking state progress—the requirement that a state have an assessment of its traffic safety data system conducted no more than 5 years before participating in the 411 grant program. Assessments are used mainly to establish the status of state efforts, but state and NHTSA officials suggested that updated assessments could also help in tracking state progress. During our review, we found some assessments submitted by states that were nearly 10 years old. We also found that assessments based on recent information reflected the dynamic and often-changing reality of state systems. For example, 1 of our case-study states had conducted an assessment in 2002. When we compared the information we had collected during our site visit with that assessment, we found that much of what we observed was reflected in it.
Updating these assessments at least every 5 years would allow NHTSA to track state progress. According to NHTSA officials, these assessments were valuable starting points in helping states take stock of the strengths and weaknesses of their entire systems. Updated assessments would take into account changes made as a result of the new 411 grant program and other efforts to improve the system since previous assessments were conducted.

The states and the federal government base significant roadway-related spending and policy decisions on traffic safety data, ranging from decisions to repair particular roadways to decisions to launch major safety campaigns. The quality of such decisions is tied to the quality of these data. Our review indicates that there were opportunities for states to improve crash data. However, because NHTSA exercised limited oversight over the 411 grant program, it is difficult to say what the program as a whole specifically accomplished or whether there was a general improvement in the quality of these data over the program's duration. Nevertheless, information we obtained from the 8 states we visited supports the premise that the 411 program did help states improve their traffic safety data systems. Based on our work in these 8 states, we believe that states undertook important improvements in their data systems with the federal grant funds.

The potential reauthorization of the grant program and NHTSA's recent study of state safety data provide an opportunity to include assurances that states spend these grants on effective and worthwhile projects. Furthermore, the reauthorization may provide greater funding and, therefore, greater opportunity for states to improve their traffic safety data systems. However, a larger program would bring greater expectations about what states will accomplish, as well as a need to effectively track the progress states are making. NHTSA's inability to provide key grant documentation and its deficiencies in monitoring state progress with 411 grant funds could be minimized if NHTSA (1) better managed grant documents, (2) had clearer requirements and guidance for the grant program, and (3) had an effective oversight process in place to monitor activities and progress. Requiring more specific information on the improvements states are making in their data systems would begin to address the problems we identified with regard to inadequate reporting on the program. If the program is reauthorized, NHTSA should develop an oversight process that does a better job of (1) tracking state activities against their strategic plans and assessments, (2) providing information about progress made in improving safety data, and (3) ensuring that NHTSA can adequately manage the documentation it requires. In addition, if NHTSA develops a plan to implement the recommendations in its recent Integrated Project Team report on traffic safety data systems, it could incorporate these recommendations into improved oversight efforts.

Finally, one requirement present in the earlier program—up-to-date assessments of state traffic safety data systems—was not included in recent proposals to reauthorize the 411 grant program. These assessments proved to be a valuable tool for states in developing and updating their strategic plans and activities for the 411 grant program. They also provide NHTSA with valuable information, including the current status of state traffic safety data systems organized by NHTSA's own recommended quality criteria.
In considering the reauthorization of the traffic safety incentive grant program, the Congress should consider requiring states to have their traffic safety data systems assessed, or existing assessments updated, at least every 5 years.

If the Congress reauthorizes the traffic safety data incentive grant during the next session, we recommend that the Secretary of Transportation direct the Administrator, National Highway Traffic Safety Administration, to do the following:

Ensure better accountability and better reporting for the grant program by outlining a process for regional offices to manage and archive grant documents.

Establish a formal process for monitoring and overseeing 411-funded state activities. Specifically, the process should provide guidance for submitting consistent and complete annual reporting on progress for as long as funds are being expended. These progress reports should, at a minimum, include the status of allocated funds, documentation indicating how states intend to use the current year's grant funds, a list of projects implemented in the past fiscal year, brief descriptions of activities completed, and any problems encountered.

Establish a formal process for ensuring that assessments, strategic plans, and progress reports contain the level of detail needed to adequately assess progress and are appropriately linked to each other.

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Transportation for its review and comment. Generally, the department agreed with the recommendations in this report. Department officials provided a number of technical comments and clarifications, which we incorporated as appropriate to ensure the accuracy of our report. These officials raised two additional points that bear further comment. First, officials voiced concern regarding the use of data quality criteria from NHTSA's Traffic Records Highway Safety Program Advisory to review the quality of data or the performance of states. The department emphasized that these criteria are voluntary and states are not required to meet them; therefore, states should not be judged against them. We acknowledge that these criteria are voluntary and clarified the report to emphasize this point more fully. However, we used the criteria as a framework for providing information on the status of state systems and view this analysis as appropriate, since these criteria are used by NHTSA in conducting assessments of state traffic safety data systems. Second, department officials noted that their oversight of the 411 grant program was in accordance with the statutory requirements. Although we recognize that there were minimal requirements for the 411 grant program specifically, we believe the department should carry out more extensive oversight activities so that NHTSA can monitor the progress states are making to improve their traffic safety data systems and better ensure that states are spending the grant monies as intended.

We will send copies of this report to the interested congressional committees, the Secretary of Transportation, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-6570. Key contributors to this report are listed in appendix IV.
The objectives of this report were to identify (1) the quality of state crash information; (2) the activities states undertook using 411 grant funds to improve their traffic safety data systems, and the progress made using the data improvement grants; and (3) the National Highway Traffic Safety Administration's (NHTSA) oversight of the grant program, including what changes in oversight, if any, might help encourage states to improve traffic safety data systems and ensure accountability under a reauthorized program. To address these objectives, we conducted case-study visits to 9 states, analyzed state crash data, interviewed key experts, reviewed 411 grant program documentation, and interviewed NHTSA officials regarding their oversight and guidance to states in improving their traffic safety data systems.

To provide information on the quality of state crash data and state efforts to improve these data, we conducted site visits to 9 states: California, Iowa, Kentucky, Louisiana, Maine, Maryland, Tennessee, Texas, and Utah. The case-study states were chosen on the basis of a variety of criteria, including population, fatality rates, participation in the 411 grant program, the level of funding received through the program, and participation in the State Data System (SDS) program and the Crash Outcome Data Evaluation System (CODES). We adopted a case-study methodology for two reasons. First, we were unable to determine the status of state systems from our review of 411 documents. Second, while the results of the case studies cannot be projected to the universe of states, the case studies were useful in illustrating the uniqueness and variation of state traffic safety data systems and the challenges states face in improving them. During our case-study visits, we discussed the status of state traffic data systems with a variety of traffic safety data officials. These discussions included gathering information on NHTSA's criteria, state objectives, and the progress made with 411 grant funds. In addition to these case-study visits, we analyzed data for 17 states that currently participate in NHTSA's SDS program to identify variations in data structure and quality. For 5 of the 17 states that were part of the SDS program and were also among our case-study states, we selected a number of data elements to assess quality in terms of completeness, consistency, and accuracy. We based the analysis on data and computer programs provided by NHTSA. We reviewed the programs for errors and determined that they were sufficiently accurate for our purposes. (See app. II.) Finally, we interviewed key experts who use traffic safety data, including consultants, highway safety organizations, and researchers.

To describe the activities that states undertook to improve their traffic safety data systems and the progress made under the data improvement grant, we reviewed 411 grant documentation for all 48 participating states, including 8 of our 9 case-study states. Our review included examining required documents states submitted to NHTSA, including their assessments, strategic plans, grant applications, and progress reports. We obtained these documents from NHTSA regional offices. For the case-study states, we also obtained additional documentation, including 411 grant expenditure information, in order to (1) describe state activities and progress made and (2) compare actual expenditures with the activities states reported to NHTSA.
To review NHTSA's oversight of the 411 grant program, we interviewed NHTSA officials responsible for oversight and administration of the program. Our interviews were conducted with NHTSA program staff at headquarters and in all 10 NHTSA regional offices. We also discussed program oversight with state officials in 8 of our 9 case-study states. We reviewed NHTSA guidance and policy, including the regulations and rules NHTSA issued for the 411 grant program. We also reviewed previous House and Senate bills introduced to reauthorize the 411 grant program. Finally, in order to understand NHTSA's broader role in oversight, we spoke with NHTSA staff and reviewed NHTSA's response to our recommendations that it improve its oversight.

We conducted our review from January 2004 through October 2004 in accordance with generally accepted government auditing standards. Because an examination of data quality was one of the objectives of this report, we also conducted an assessment of data reliability. Appendix II contains a more complete discussion of data reliability.

As part of our work, we examined data quality for 17 states that participate in NHTSA's SDS program. The body of our report presents several examples of the kinds of limitations we found; this appendix contains additional examples. The examples discussed below relate to two of NHTSA's quality criteria—data consistency and data completeness.

The extent to which a state captures information about various data elements has much to do with the standards or thresholds it sets for what should be reported in crash reports. NHTSA's Model Minimum Uniform Crash Criteria (MMUCC) recommends that every state have reporting thresholds that include all crashes involving death, personal injury, or property damage of $1,000 or more; that reports be computerized statewide; and that information be reported for all persons (injured and uninjured) involved in the crash. We found these thresholds differed from state to state. Two thresholds, in particular, create variation in the data: (1) criteria for whether a crash report must be filed and (2) criteria for whether to report information about uninjured occupants.

Determining Which Crashes Require a Crash Report

The states varied greatly in their policies on when a police report must be filed. Fourteen of the 17 states set a property damage threshold, but the threshold varied from less than $500 to as much as $1,000 (see fig. 6). Among the other 3 states, 1 left the reporting of property-damage-only crashes to the officer's discretion, and 2 stipulated that no report is to be filed unless at least one vehicle has to be towed from the scene. Thus, a crash involving $900 of damage to an untowed vehicle would be reported in some states but not in others.
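To illustrate how these differing thresholds play out in practice, the following is a minimal sketch in Python; the rule types and dollar amounts are hypothetical examples drawn from the ranges described above, not the actual rules of any identified state:

```python
# Hypothetical reporting rules illustrating the threshold variation
# described above; the specific states are not identified here.
def report_required(rule, damage, injury_or_death, towed):
    """Return True if a crash report must be filed under the given rule."""
    if injury_or_death:           # all states report injury and fatal crashes
        return True
    if rule["type"] == "damage":  # property damage dollar threshold
        return damage >= rule["threshold"]
    if rule["type"] == "tow":     # report only if a vehicle was towed
        return towed
    return None                   # otherwise left to the officer's discretion

rules = [
    {"type": "damage", "threshold": 500},
    {"type": "damage", "threshold": 1000},
    {"type": "tow"},
]

# The $900-damage, untowed crash from the text:
for rule in rules:
    print(rule, report_required(rule, damage=900,
                                injury_or_death=False, towed=False))
# -> reportable under the $500 threshold, but not under the
#    $1,000 threshold or the tow-required rule
```

As the sketch shows, the same crash generates a record in one state's data system and no record at all in another's, which is one source of the state-to-state inconsistency discussed in this appendix.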
Similarly, some states did not collect information about uninjured passengers involved in crashes (see fig. 7). While all 17 states collected information about uninjured drivers (such as whether they were wearing seat belts), 5 did not collect such information about uninjured passengers. Such information could be important, for example, in assessing the degree to which seat belt use helped prevent injuries. Even for states that collected information about uninjured passengers, the information may be incomplete. NHTSA officials said they thought that in these states, some officers left seat belt information blank or coded it as "unknown," either because the reporting officers did not know the information or because collecting it was too time-consuming.

Alcohol and drug data also showed state-to-state differences, both in consistency and in completeness. Alcohol and drug data are important in addressing a major safety issue—impaired driving. In 2000, an estimated 2 million crashes involved drivers with blood-alcohol levels above .08 (which recently became the threshold for being legally impaired in all 50 states); these crashes killed nearly 14,000 people and injured nearly 470,000 others. Alcohol-related crashes in the United States that year cost an estimated $114.3 billion. To assess the quality of these data in the SDS program, we selected 5 states for detailed review. The states, chosen because they were also visited as part of our case studies, were California, Kentucky, Maryland, Texas, and Utah—although they are not identified by name in the results below. We looked at the degree to which they conformed to guidelines recommended in the MMUCC with regard to the consistency and completeness of their data.

Information collected about alcohol- and drug-impaired driving varied from state to state and was not consistent with MMUCC guidelines. Table 5 provides examples of this variation by comparing crash information submitted by states with the recommended guidelines. The table shows MMUCC's recommended guidelines for four elements—two each for alcohol and drugs. One element relates to whether the officer suspects alcohol or drug use, and the other to an actual test for alcohol or drugs. All 5 states collected some type of information on suspected alcohol or drug use, but each state differed from the others to some degree. Three states, for example, collected this information as part of a broader element that included suspected alcohol and drug use as one attribute in a list of possible contributing causes of the crash. For alcohol and drug testing, 1 state did not report such testing at all, and the 4 others differed both from each other and from MMUCC guidelines.

To determine the completeness of state data files regarding impaired driving, we looked at alcohol test result data that were coded as "missing" or "unknown." Figures 8 and 9 show the results for the first and last years we reviewed. The percentage of data recorded as missing varied from 0 percent to more than 12 percent, while the percentage of data recorded as unknown varied from 0 percent to more than 6 percent. In addition, the 2 states with the most data in these two categories were almost mirror images of each other: state D showed no data as missing but had the highest percentage of data classified as unknown, while state E showed virtually no data as unknown but had the highest percentage of data classified as missing. These variations could reflect differences in how states classify and record information. For example, NHTSA officials said some states may code an alcohol test result that comes back indicating no alcohol in the driver's bloodstream as missing or unknown, rather than as "negative" or ".00."
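As an illustration only—the field name and codes below are hypothetical, not those of any state's actual crash file—the kind of completeness tally described above might look like this:

```python
from collections import Counter

# Hypothetical alcohol test result values; real state files use
# differing codes, which is part of the consistency problem noted above.
records = [
    {"alcohol_test": ".00"},      # negative result
    {"alcohol_test": ".12"},      # positive result
    {"alcohol_test": None},       # field left blank -> "missing"
    {"alcohol_test": "unknown"},  # officer could not determine
    {"alcohol_test": ".00"},
]

def completeness_tally(records, field="alcohol_test"):
    """Tally missing and unknown values as a percentage of all records."""
    counts = Counter()
    for rec in records:
        value = rec.get(field)
        if value is None or value == "":
            counts["missing"] += 1
        elif str(value).lower() == "unknown":
            counts["unknown"] += 1
        else:
            counts["reported"] += 1
    total = len(records)
    return {k: 100.0 * v / total for k, v in counts.items()}

print(completeness_tally(records))
# -> {'reported': 60.0, 'missing': 20.0, 'unknown': 20.0}
```

A tally like this also shows why coding conventions matter: a state that records negative tests as "unknown" rather than ".00" would show inflated unknown percentages, consistent with the explanation NHTSA officials offered.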
Because the alcohol and drug data in SDS are subject to so many problems with completeness and consistency, many researchers and state policymakers use alcohol and drug data from the Fatality Analysis Reporting System (FARS) database instead. This database, which is also administered by NHTSA, contains information on crashes involving fatalities that occur within 30 days of the crash. FARS is generally seen as a reliable data source, with quality control measures and personnel who follow up as much as possible to fill data gaps, contacting hospitals, medical offices, and coroners' offices to obtain accurate and complete information. However, FARS contains information only on fatal crashes—about 1 percent of all crashes. Thus, while the FARS data may be more complete and consistent for those crashes that are included, the vast majority of alcohol- and drug-related crashes are not included. Further, NHTSA imputes some of the alcohol information because, even with follow-up, gaps in the data often remain.

Federal Motor Carrier Safety Administration (FMCSA)

The Commercial Vehicle Analysis and Reporting Systems is a cooperative effort between NHTSA and FMCSA to improve the collection of bus and truck data. Its aim is to improve the national data system for all crashes involving commercial motor vehicles and to develop a national analytical data system for commercial vehicles similar to the Fatality Analysis Reporting System.

Federal Highway Administration (FHWA)

The Highway Safety Information System (HSIS) is a 9-state database that contains crash, roadway inventory, and traffic volume data. Under contract with FHWA, the University of North Carolina Highway Safety Research Center and LENDIS Corporation operate the system. The HSIS uses state highway data for the study of highway safety. The system is able to analyze a large number of safety problems, ranging from basic "problem identification" analyses that establish the size and extent of a safety problem to modeling efforts that attempt to predict future accidents from roadway characteristics and traffic factors.

American Association of State Highway and Transportation Officials (AASHTO)

The Transportation Safety Information Management System (TSIMS) is a joint application development project sponsored by AASHTO to enable states to link crash data with associated driver, vehicle, injury, commercial carrier, and roadway characteristics. TSIMS is an enterprise safety data warehouse that will extend and enhance the safety analysis capabilities of current state crash records information systems by integrating crash data with other safety-related information currently maintained by each state.

Association of Transportation Safety Information Professionals (ATSIP)

ATSIP aims to improve traffic safety data systems by (1) providing a forum on these systems for state and local system managers, including the collectors and users of traffic safety data; (2) developing, improving, and evaluating these systems; (3) encouraging the use of improved techniques and innovative procedures in the collection, storage, and uses of traffic safety data; and (4) serving as a forum for the discussion of traffic safety data programs.

In addition to those individuals named above, Nora Grip, Brandon Haller, Molly Laster, Dominic Nadarski, Beverly Ross, Sharon Silas, Stan Stenersen, and Stacey Thompson made key contributions to this report.
Auto crashes kill or injure millions of people each year. Information about where and why such crashes occur is important in reducing this toll, both for identifying particular hazards and for planning safety efforts at the state and federal levels. Differences in the quality of traffic data from state to state, however, affect the usability of the data for these purposes. The National Highway Traffic Safety Administration (NHTSA) administers a grant program to help states improve the safety data systems that collect and analyze crash data from police and sheriff's offices and other agencies, and the Congress is considering whether to reauthorize and expand the program. The Senate Appropriations Committee directed GAO to study state systems and the grant program. Accordingly, GAO examined (1) the quality of state crash information, (2) the activities states undertook to improve their traffic records systems and any progress made, and (3) NHTSA's oversight of the grant program.

States vary considerably in the extent to which their traffic safety data systems meet the recommended criteria NHTSA uses to assess the quality of crash information. These criteria relate to whether the information is timely, consistent, complete, and accurate, as well as to whether it is available to users and integrated with other relevant information, such as that in the driver history files. GAO reviewed systems in 9 states and found, for example, that some states entered crash information into their systems in a matter of weeks, while others took a year or more. While some systems were better than others, all had opportunities for improvement.

States reported carrying out a range of activities to improve their traffic safety data systems with the grants they received from NHTSA. Relatively little is known about the extent to which these activities improved the systems, largely because the documents submitted to NHTSA contained little or no information about what the activities accomplished. The states GAO reviewed used their grant funds for a variety of projects and showed varying degrees of progress. These efforts included completing strategic plans, hiring consultants, and buying equipment to facilitate data collection.

NHTSA officials said their oversight of the grant program complied with the statutory requirements, but for two main reasons, it did not provide a useful picture of what states were accomplishing. First, the agency did not provide adequate guidance to ensure that states provided accurate and complete data on what they were accomplishing with their grants. Second, it did not have an effective process for monitoring progress. The agency has begun to take some actions to strengthen oversight of all its grant programs. If the Congress decides to reauthorize the program, however, additional steps are needed to provide effective oversight of this particular program. GAO also noted that in proposing legislation to reauthorize the program, one requirement that may be helpful in assessing progress was omitted: the requirement for states to have an up-to-date assessment of their traffic safety data systems.
This report is based on our reviews of 24 major agencies' strategic plans that were formally submitted to Congress and OMB by September 30, 1997. To do these reviews, we used the Results Act, supplemented by OMB's guidance on developing the plans (Circular A-11, part 2), as criteria to determine whether the plans contained the six elements required by the Act. As agreed, we focused our reviews on the progress of agencies' strategic planning efforts, specifically their efforts to improve their strategic plans, with particular attention to the key planning challenges most in need of sustained attention. Agencies included in our analysis are listed in appendix I, and our observations on individual agencies are summarized in appendixes II through XXV. To gather information on how annual performance planning and measurement could be used to address the critical planning challenges we observed in our reviews of the September plans, we relied on our recent report on critical challenges needing sustained attention, our report on governmentwide implementation of the Results Act, our guidance for congressional review of Results Act implementation, and our guidance on effectively implementing the Act. We reviewed individual agency plans from September 30, 1997, through November 1997. Our work was conducted in accordance with generally accepted government auditing standards. We provided a draft of this report for comment to the Director of OMB on January 5, 1998; a discussion of OMB's comments appears at the end of this letter. In addition, we provided drafts of the appendixes we prepared on individual agency plans to the relevant agencies for comment. The comments from those agencies are summarized in the relevant appendixes.

The Results Act is the centerpiece of a statutory framework that Congress put in place during the 1990s to help resolve the long-standing management problems that have undermined the federal government's effectiveness and efficiency and to provide greater accountability for results. In addition to the Results Act, the framework comprises the CFO Act and information technology reform legislation, including the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996. Congress enacted the CFO Act to remedy decades of serious neglect in federal financial management by establishing chief financial officers across the federal government and requiring the preparation and audit of annual financial statements. The information technology reform legislation is based on the best practices used by leading public and private sector organizations to manage information technology more effectively.

Under the Results Act, strategic plans are the starting point and basic underpinning for performance-based management. In our report on agencies' draft strategic plans, we noted that complete strategic plans were crucial if they were to serve as a basis for guiding agencies' operations and to help congressional and other policymakers make decisions about activities and programs. The Act requires that an agency's strategic plan contain six key elements.
These elements are (1) a comprehensive agency mission statement; (2) agencywide long-term goals and objectives for all major functions and operations; (3) approaches (or strategies) and the various resources needed to achieve the goals and objectives; (4) a description of the relationship between the long-term goals and objectives and the annual performance goals; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect the achievement of the strategic goals; and (6) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future program evaluations.

Building on the decisions made as part of the strategic planning process, the Results Act requires executive agencies to develop annual performance plans covering each program activity set forth in the agencies' budgets. The first annual performance plans, covering fiscal year 1999, are to be submitted to Congress after the President's budget is submitted in approximately February 1998. Each plan is to contain an agency's annual performance goals and associated measures, which the agency is to use to gauge its progress toward accomplishing its strategic goals. OMB is to use the agencies' performance plans to develop an overall federal government performance plan that is to be submitted with the President's budget. The performance plan for the federal government is to present to Congress a single cohesive picture of the federal government's annual performance goals for a given fiscal year.

Agencies' September plans appear to provide a workable foundation for the continuing implementation of the Results Act. These plans represent a significant improvement over the draft plans we reviewed last summer. In those reviews, we found that all but six of the draft strategic plans were missing at least one required element, and about a third were missing two of the six required elements. In addition, just over a fourth of those plans failed to cover at least three of the required elements. Moreover, we found that many of the elements included in the plans contained weaknesses—some more significant than others.

The agencies, on the whole, made a concerted effort during August and September to improve their plans. For example, all of the September plans we reviewed contained at least some discussion of each element required by the Act. In many cases, elements that had contained weaknesses were substantially improved by September. For example:

The Department of Transportation explained more clearly how its mission statement is linked to its authorizing legislation.

The Small Business Administration (SBA) improved its ability to assess progress toward its strategic goals by stating when specific performance objectives would be met.

The Nuclear Regulatory Commission (NRC) better explained the scope of its crosscutting functions by identifying major crosscutting functions and interagency programs and describing its coordination with the agencies involved.

The Department of Education improved its discussion of external factors that could affect its achievement of strategic goals by describing agency actions to mitigate those factors.

Appendixes II through XXV contain our observations on the progress and remaining challenges of individual agencies' strategic planning efforts.
Although the September plans appear to provide a workable foundation for the continuing implementation of the Results Act, we found that critical planning challenges remain. Among the remaining critical challenges are (1) clearly establishing a strategic direction for agencies by improving goal-setting and measurement; (2) improving the management of crosscutting program efforts by ensuring that those programs are appropriately coordinated to avoid duplication, fragmentation, and overlap; and (3) ensuring that agencies have the data systems and analytic capacity in place to better assess program performance and costs, improve management and performance, and establish accountability. The forthcoming annual performance planning and measurement processes offer agencies an opportunity to make progress in addressing these challenges.

In improving on their draft strategic plans, agencies took the first steps toward setting a strategic direction for their programs and activities. However, we found that the September plans often lacked clear articulation of the agency's strategic direction: (1) strategic goals and objectives were not as measurable and results oriented as possible, (2) linkages among planning elements were not clear, and (3) strategies for achieving those goals and objectives were incomplete or underdeveloped. The performance planning and measurement phase of the Results Act offers agencies an opportunity to continue to refine their strategic directions.

In our reviews of agencies' September plans, we found that some agencies have begun to address the challenge of setting a strategic direction. For example:

The most notable improvement in the plan for the Department of Health and Human Services (HHS) is the inclusion of an outline of strategic objectives for accomplishing the Department's six strategic goals. Those objectives are largely focused on outcomes and are defined in measurable terms. The plan also identifies the key measures of progress for each strategic objective. For example, one measure of progress for the outcome-oriented objective of "reducing the use of illicit drugs" is the "death rate of persons aged 15 to 65 attributed to drug use."

The September plans of the Departments of Agriculture, Education, and the Treasury now include helpful matrixes that link various planning elements, such as goals, objectives, measures, and programs or responsible organizational components. These matrixes are also useful in assessing a plan's underlying logic, determining programmatic accountability, and identifying crosscutting programs and potential duplication and overlap among program efforts. For example, Treasury's September plan contained an appendix that identified which bureau or office is responsible for achieving its Department-wide goals and objectives.

The September plan for the Department of Energy (DOE) better explains how the agency plans to accomplish many of its goals. The plan provides greater specificity on the money, staff, workforce skills, and facilities that the agency plans to employ to meet its goals. For example, to support its national security goal, DOE's plan says it will need to change the skills of its workforce and how it constructs new experimental test facilities.

Although improvements were not limited to these agencies, we also found that agencies need to further clarify their strategic directions if the Results Act is to be effective in guiding the agencies and informing congressional and other decisionmakers.
The goals and objectives of many agencies could be more results oriented and expressed in a manner that better allows for a subsequent assessment of whether they have been achieved. For example, the plan for the Department of Veterans Affairs (VA) contains the following objectives supporting the goal for its compensation and pension area: "(1) evaluate compensation and pension programs and (2) modify these programs, as appropriate." Also, although the first goal in the Social Security Administration's (SSA) September plan, "to promote valued, strong, and responsive social security programs and conduct effective policy development, research, and program evaluation," sets a strategic direction for the agency, it could be stated in more measurable terms to better enable the agency to make a future assessment of whether it is being achieved.

Another challenge for agencies in setting strategic direction in the September plans was establishing linkages among planning elements, such as goals, objectives, and strategies. For example, Treasury's plan says that the Internal Revenue Service (IRS) has a role in three law enforcement objectives—to reduce counterfeiting, money laundering, and drug smuggling. However, the IRS plan contained no specific strategy to help achieve any of those objectives. In another example, the September plan for the Federal Emergency Management Agency (FEMA) included lists of objectives and strategies under each goal with no explanation of how the strategies would contribute to achieving the objectives.

Another weakness of agencies' September plans was incomplete and underdeveloped strategies for achieving long-term strategic goals and objectives. More specifically, we found that agencies did not always provide an adequate discussion of the resources needed to achieve goals. For example, SBA's September plan did not contain any discussion of the resources, such as human resources and information technology, needed to achieve its goals. Although other plans we reviewed discussed resources, the discussions were incomplete. For example, few plans discussed the physical capital resources, such as facilities and equipment, needed to achieve their goals. Although many agencies may not rely heavily on physical capital resources, even the plans of some that do, such as the General Services Administration and the National Park Service, a component of the Department of the Interior, did not provide a focused discussion of capital needs and usage.

The role that information technology played, or can play, in achieving agencies' long-term strategic goals and objectives was generally neglected in the September plans. The government's track record in employing information technology is poor, and the strategic plans we reviewed often contained only limited discussions of technology issues. For example, most of the Department of Defense's (DOD) strategic goals are fundamentally linked to information technology. However, we have placed DOD's management of critical information management processes on our high-risk list. We believe DOD's strategic plan would be significantly enhanced if it more explicitly linked its strategic goals to a strategy for improving management and oversight of information technology resources. Additionally, DOD should recognize the dramatic impact the Year 2000 problem will likely have on its computer operations, including the mission-critical applications identified in its strategic plan.
The Department of State’s September plan also does not specifically address the serious deficiencies in State’s information and financial accounting systems. Rather, the plan notes, in more general terms, that it will take State several years to develop performance measures and related databases in order to provide sufficient information on achievement of its long-term goals. The lack of such a discussion in many of the plans is of particular concern because, without it, agencies cannot be certain that they are (1) addressing the federal government’s information technology problems and (2) better ensuring that technology acquisition and use are targeted squarely on program results. Strategic planning—setting a strategic direction for agency operations—did not end with the submission of a strategic plan to Congress last September. Performance-based management, as envisioned by the Results Act, is not a linear, sequential process but, rather, an iterative one in which strategic and performance planning cycles will result in subsequent revisions to both strategic and annual performance plans. Each cycle of strategic planning and performance planning, particularly in the first few years of governmentwide implementation of the Results Act, will likely result in agencies making significant changes and improvements in those documents. Consequently, agencies can continue to address the critical planning challenges associated with setting a strategic direction as they develop their first annual performance plans. For example, the process of defining targeted levels of performance within set time frames and providing baselines against which to compare actual performance will likely produce opportunities for agencies to revisit and improve upon their strategic goals and objectives so that those goals are as results oriented and measurable as they can be. If successfully developed, those annual performance goals can function as a bridge between long-term strategic planning and day-to-day operations, thereby assisting agencies in establishing better linkages among planning elements. For example, agencies can use performance goals to show clear and direct relationships in two directions—to the goals in the strategic plans and to operations and activities within the agency. By establishing those relationships, agencies can (1) provide straightforward roadmaps that show managers and staff how their daily activities can contribute to attaining agencywide strategic goals, (2) hold managers and staff accountable for contributing to the achievement of those goals, and (3) provide decisionmakers with information on their annual progress in meeting the goals. As agencies gain experience in developing these annual performance goals, they likely will become better at identifying and correcting misalignment among strategic goals, objectives, and strategies within their plans. The importance of clearly showing how strategies are linked to goals is underscored by the Results Act requirement that annual goals are to be based on budgetary program activities. Unlike previous federal reform initiatives, the Results Act requires agencies to plan and measure performance using the same program activity structures that form the basis for their budget requests. 
However, we have found that the relationships among budget structures, performance plans, and strategic plans will require coordinated and recurring attention by Congress, OMB, and agencies as they move to implement the annual performance planning and measurement phase of the Act. This attention is important because the wide variability of the budget structures indicates that the suitability of those structures for the Results Act's performance planning and measurement will also vary. For example, we reported in 1997 that agency officials we spoke with confirmed the varying suitability of their program activity structures for the Results Act's purposes. One agency successfully worked through its recent performance-planning process using its existing program activities. A second agency had a program activity structure that reflected its organizational units—a structure that is useful for traditional accountability purposes, such as monitoring outputs and staff levels—but less useful for results-oriented planning. Still other agencies separated performance planning from program activity structures, believing it necessary to first establish appropriate program goals, objectives, and measures before considering the link to the budget. These agencies planned to rely on the Results Act's provision allowing them to aggregate, disaggregate, or consolidate program activities in constructing their annual performance plans.

In addition, annual performance planning can be used to better define strategies for achieving strategic and annual performance goals. For example, annual performance plans provide agencies with another opportunity to further discuss strategies for information technology investments and the operational improvements expected from those investments. The annual performance plans should also provide annual performance measures that Congress and other decisionmakers can use to determine whether those investments are achieving the expected improvements. Thus, annual performance planning and measurement can provide decisionmakers with an early warning of information technology investment strategies that need to be revisited.

A focus on results, as envisioned by the Results Act, implies that federal programs contributing to the same or similar results should be closely coordinated to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. We have found that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. This suggests that federal agencies should look beyond their organizational boundaries and coordinate with other agencies to ensure that their efforts are aligned and complementary. Agencies' September plans show progress in this area, but coordination of crosscutting programs continues to be a strategic planning challenge.

During the summer of 1996, in reviewing early strategic planning efforts, OMB alerted agencies that greater interagency coordination was needed to ensure consistency among goals in crosscutting program areas. However, the draft strategic plans we reviewed during the summer of 1997 often lacked evidence that agencies in crosscutting program areas had worked with other agencies to ensure that goals were consistent; strategies were coordinated; and, as appropriate, performance measures were similar. Agencies' September plans better described crosscutting programs and coordination efforts.
Some plans, for example, contained references to other agencies that shared responsibilities in a crosscutting program area or discussed the need to coordinate their programs with other agencies. For example, as noted earlier, NRC better explained its crosscutting functions in its September plan. In addition, the Environmental Protection Agency’s (EPA) plan contains an appendix that lists the federal agencies with which EPA coordinated. This appendix describes the major steps in the coordination process and lists by strategic goal the agencies with which EPA believes greater integration and review of efforts will be needed. Similarly, the plan for the Department of Transportation contains a table that shows the contributions of other federal agencies to each of its major mission areas. NRC’s, EPA’s, and Transportation’s plans illustrate the kind of presentation that could be especially helpful to Congress and the administration in identifying program areas to monitor for overlap and duplication. These presentations, and similar ones in other agencies’ September plans that identify agencies with crosscutting programs, also provide a foundation for the much more difficult work that lies ahead—undertaking the substantive coordination that is needed to ensure that those programs are effectively managed. For example, in an improvement over its draft plan, the Department of Labor’s September plan refers to a few other agencies with responsibilities in the area of job training programs and notes that Labor plans to work with them. However, the plan contains no discussion of the specific coordination mechanisms Labor will use to realize efficiencies or of possible strategies to consolidate job training programs to achieve a more effective job training system. Our work has shown that the next phases of the Results Act’s implementation will offer a structured framework to address crosscutting issues. For example, the Act’s emphasis on results-based performance measures as part of the annual performance planning process should lead to more explicit discussions concerning the contributions and accomplishments of crosscutting programs and encourage related programs to develop common performance measures. As agencies work with OMB to develop their annual performance plans, they can consider the extent to which agency goals are complementary and the need for common performance measures to allow for cross-agency evaluations. Also, the Results Act’s requirement that OMB prepare a governmentwide performance plan that is based on the agencies’ annual performance plans can be used to facilitate the identification of program overlap, duplication, and fragmentation. Our work also indicates that if agencies and OMB use the annual planning process to highlight crosscutting program efforts and provide evidence of joint planning and coordination of those efforts, the individual agency performance plans and the governmentwide performance plan should help provide Congress with the information needed to identify agencies and programs addressing similar missions. Once these programs are identified, Congress can consider the associated policy, management, and performance implications of crosscutting program efforts and whether individual programs make a sufficiently distinguishable contribution to a crosscutting national issue. This information should also help identify the performance and cost consequences of program fragmentation and the implications of alternative policy and service delivery options.
These options, in turn, can lead to decisions concerning department and agency missions and the allocation of resources among those missions. Our previous work has shown that agencies need to have reliable data during their planning efforts to set realistic goals and later, as programs are being implemented, to gauge their progress toward achieving those goals. In addition, in combination with an agency’s performance measurement system, a strong program evaluation capacity is needed to provide feedback on how well an agency’s activities and programs contributed to achieving its goals and to identify ways to improve performance. However, our work has also found serious shortcomings in agencies’ ability to generate reliable and timely data to measure their progress in achieving goals and to provide the analytic capacity to use those data. The Results Act’s requirement that annual performance plans discuss the verification and validation of data provides agencies with an opportunity to be forthcoming about data limitations and to show how those limitations will be addressed. Verified and validated performance information, in conjunction with augmented program evaluation efforts, will help ensure that agencies are able to report progress in meeting goals and identify specific strategies to improve performance. The absence of both sound program performance and cost data and the capacity to use those data to improve performance is a critical challenge that agencies must confront if they are to effectively implement the Results Act. Efforts under the CFO Act have shown that most agencies are still years away from generating reliable, useful, relevant, and timely financial information, which is urgently needed to make our government fiscally responsible. The widespread lack of available program performance information is equally troubling. For example, in our June report on a survey of managers in the largest federal agencies, we found that fewer than one-third of those managers said that results-oriented performance measures existed to a great or very great extent for their programs. Our work also suggests that even when performance information exists, its reliability is frequently questionable. For example, our work has shown that the reliability of performance data currently available to a number of agencies is suspect, because the agencies must rely on data collected by parties outside the federal government. In a recent report, we noted that the fact that data were largely collected by others was the most frequent explanation offered by agency officials for why determining the accuracy and quality of performance data was a challenge. In our June 1997 report on the implementation of the Results Act, we also reported on the difficulties that agencies were experiencing as a result of their reliance on outside parties for performance information. Agencies are required under the Results Act to describe in their annual performance plans how they will verify and validate the performance information that will be collected. This section of the performance plan can provide important contextual information for Congress and agencies to address the weaknesses in this area. For example, this section can provide an agency with the opportunity to alert Congress to the problems the agency has had or anticipates having in collecting needed results-oriented performance information.
Agencies can also use this section to alert Congress to the cost and data quality trade-offs associated with various collection strategies, such as relying on sources outside the agency to provide performance data and the degree to which those data are expected to be reliable. The discussion in this section can also provide Congress with a mechanism for examining whether the agency currently has the data to confidently set performance improvement targets and will later have the ability to report on its performance. More broadly, continuing efforts to implement the CFO Act also are central for ensuring that agencies resolve their long-standing problems in generating vital information for decisionmakers. In that regard, the Federal Accounting Standards Advisory Board (FASAB) has developed a new set of accounting concepts and standards that underpin OMB’s guidance to agencies on the form and content of their agencywide financial statements. As part of that effort, FASAB developed managerial cost accounting standards that were to be effective for fiscal year 1997. These standards are to provide decisionmakers with information on the costs of all resources used and the costs of services provided by others to support activities or programs. Such information would allow for comparisons of costs across various levels of program performance. However, because of serious agency shortfalls in cost accounting systems, the Chief Financial Officers Council—an interagency council of the CFOs of the major agencies—requested an additional 2 years before the standards would be effective. FASAB recommended extending the date by 1 year, to fiscal year 1998, with a clear expectation that there would be no further delays. Under the Results Act, another aspect of performance planning is a requirement for agencies to discuss the use and planned use of program evaluations to provide feedback on how well an agency’s activities and programs contributed to the achievement of its goals and to assess the reasonableness and appropriateness of those goals. However, our recent report on agencies’ draft plans stated that 16 of the 27 draft plans did not discuss program evaluations. Although all the September plans included discussions of program evaluations, we continued to find weaknesses in those discussions. However, this is not surprising because agencies that had not undertaken program evaluations prior to the preparation of the first cycle of strategic plans would not likely be able to discuss in their September plans how they used program evaluations to help develop the plans. Of greater concern, many agencies, including the Departments of Health and Human Services, Justice, and Labor, also did not discuss how they planned to use evaluations in the future to assess progress or did not offer a schedule for future evaluations as required by the Results Act. In contrast, the National Science Foundation’s September plan contains a noteworthy exception to this trend. The plan discusses how the agency used evaluations to develop key investment strategies, action plans, and its annual performance plan. It also discusses plans for future evaluations and provides a general schedule for their implementation. Over the longer term, the program performance information that agencies are to generate under the Results Act should be a valuable new resource for Congress to use in its program authorization, oversight, budget, and appropriation responsibilities.
As we have noted before, to be most useful in these various contexts, that information needs to be consolidated with budget data and critical financial and program cost data, which agencies are to produce and have audited under the CFO Act. This consolidated program performance, cost, and budget information, in conjunction with the annual performance plans, should provide congressional and other decisionmakers with a more complete picture of the results, operational effectiveness, and costs of agencies’ operations. Agencies, on the whole, made significant progress in improving their plans during August and September 1997. The strategic plans they formally submitted to Congress and OMB in September 1997 appear to provide a workable foundation for the continuing implementation of the Results Act. Nonetheless, the critical planning challenges that we found demonstrate that the effective implementation of performance-based management and accountability, as envisioned by the Results Act, is still, as is to be expected, very much a work in progress. Since performance-based management is not a linear, sequential process but, rather, an iterative one, each subsequent strategic and performance planning cycle can, and likely will, result in revisions to preceding planning documents. Therefore, Congress, OMB, and agencies’ senior managers can use the next stage of performance-based management—performance planning and measurement—to ensure that agencies continue to address the critical planning challenges as well as maintain momentum on the implementation of the Results Act. On January 5, 1998, we provided a draft of this report to the Director of OMB for comment. We provided drafts of the appendixes we prepared on individual agency plans to the relevant agencies for comment, and the comments from those agencies are summarized in the relevant appendixes. On January 13, 1998, a senior OMB official provided us with OMB’s comments on this report. He generally agreed with our observations and said that the report was a useful compilation of our work on agencies’ September strategic plans. The official also said that this report underscores that the implementation of the Results Act will be an ongoing, iterative process in which agencies will learn from their initial experiences in developing strategic plans and can then apply those lessons learned as they continue to develop strategic planning processes. In addition, the official provided technical comments that were incorporated in this report. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Minority Leader of the House; the Ranking Minority Members of your Committees; other appropriate congressional committees; and the Director, Office of Management and Budget. We will also make copies available to others on request. If you have any questions concerning this report, please contact me on (202) 512-8676.

On July 10, 1997, we issued a report on the U.S. Department of Agriculture’s (USDA) May draft strategic plan (Results Act: Observations on USDA’s Draft Strategic Plan, GAO/RCED-97-196R). USDA’s publicly issued strategic plan was submitted to the President and Congress on September 30, 1997. As requested, we have reviewed USDA’s September strategic plan and compared the results of our assessment with our observations on the draft plan, as reported in July.
On October 17, 1997, we briefed your staffs on our assessment of the September strategic plan. The key points from that briefing are summarized herein. USDA’s May 1997 draft strategic plan included a Department-wide strategic overview as well as 30 plans for the mission areas, agencies, and staff offices that make up the Department. We reviewed the overview and the 16 agency plans that are directly related to accomplishing USDA’s mission and implementing its programs. We also reviewed the plans for the offices of the Chief Financial Officer and the Chief Information Officer. We observed that the May draft strategic plan did not fulfill the requirements of the Results Act. USDA’s overall mission and goals were contained in the Department-wide strategic overview; the overview then referred the reader to the agencies’ plans for information on the six required elements. However, only one of the agencies’ plans we reviewed contained all six required elements. The draft strategic plan also fell short in several other areas necessary for achieving the purposes of the Results Act. Among other things, the draft strategic plan lacked an emphasis on externally focused goals and objectives, adequate quantifiable performance measures, and good linkages between the agencies’ goals and the Department’s goals. We also reported that we could not determine the extent to which coordination with other federal agencies, both within and outside the Department, occurred in the formulation of the draft strategic plan. It was also unclear whether agencies’ goals and objectives had been assessed for duplication and complementary functions. USDA’s Department-wide strategic overview acknowledged the role of USDA agencies that carry out similar and/or complementary functions but did not recognize the role of other federal agencies. Many of the agencies’ plans generically recognized the roles of other federal agencies in accomplishing their missions. However, there was little evidence in either the Department-wide strategic overview or the agencies’ plans to suggest that the agencies coordinated with other agencies—internally or externally—when developing their goals and objectives. USDA’s draft plan addressed some, but not all, of the high-risk issues and management problems we had previously identified. Generally, information on how USDA planned to address these high-risk issues and management problems, such as the need to reduce losses in the farm loan program, was included as goals and objectives in the agencies’ plans. However, USDA’s draft plan did not address some management issues, such as the need to reform milk marketing orders, improve the management of agricultural trade programs, and strengthen financial controls under credit reform. In addition, we have identified significant, long-standing Department-wide problems in information technology, accounting, and financial management. However, USDA’s draft strategic plan did not adequately recognize and address these problems. For example, the plan for the Office of the Chief Information Officer lacked time frames and milestones and the resources needed to accomplish the stated goals. We also noted that it lacked an explanation of how the goals were specifically linked to the agencies’ plans. USDA made significant improvements in its September strategic plan. This plan incorporates many changes that make it more responsive to the requirements of the Results Act. 
The strategic plan complies with the six elements required by the Results Act and includes many of the key attributes necessary for a quality plan. It also includes information on some management challenges that we identified in the past. While all 16 agencies’ plans contain the six required elements, the clarity of information presented varies across the plans. For example:

Most of the agencies’ plans have comprehensive and concise mission statements. However, the mission statements for two agencies’ plans—concerning the Agricultural Marketing Service and Rural Development—are stated so broadly that it is difficult to determine what the basic purpose of the agency is or how it differs from that of other agencies. For example, it is unclear how the mission of the Agricultural Marketing Service differs from the missions of the Grain Inspection, Packers and Stockyards Administration and the Foreign Agricultural Service.

Most of the agencies’ plans have results-oriented goals and objectives. However, some plans—those of the Farm Service Agency, Food and Consumer Service, Animal and Plant Health Inspection Service, and the Forest Service—have too many goals and objectives structured around existing programs and activities rather than the ultimate results that these agencies should achieve. For example, the Farm Service Agency’s plan has four goals that we believe could be combined under two that would fulfill the agency’s mission—(1) improving the economic viability of the agriculture sector and (2) protecting the environment.

All of the agencies’ plans provide more detailed strategies and improved information on the resources needed for achieving goals and objectives, compared with the information provided in the May draft plan.

All of the agencies’ plans provide a detailed discussion of the external factors beyond the control of the agency that could affect the achievement of the goals. However, the linkages between external factors and their impact on specific goals could be improved in some plans, such as the plans for the four research agencies.

Unlike the May draft, in which only 1 of the 16 agencies’ plans included information on the relationship between annual performance goals and strategic goals, all of the agencies’ September plans include this information. However, the quality of the descriptions provided in this section of the agencies’ plans varies by agency. For example, some plans, such as those for the Food and Consumer Service, Farm Service Agency, Food Safety and Inspection Service, and Center for Nutrition Policy and Promotion, easily allow the reader to envision how the annual performance goals relate to the strategic goals; other agencies’ plans, such as those for the Economic Research Service and the Risk Management Agency, are less clear.

Most of the agencies’ plans provide greater detail than they did in the May draft on how program evaluations were used to develop the strategic plan and how they will be used in the future. However, two plans—those for the Food Safety and Inspection Service and the Agricultural Research Service—state that program evaluations were not used to develop the strategic plan, although information on program evaluations planned for the future is included; and the plan for the Agricultural Marketing Service states that program evaluations were not used to develop the plan and are not planned for the future.
While these agencies state that they did not use formal program evaluations when developing their plans, the information provided in the plans indicates that the results of relevant studies and assessments were actually used to help develop the plans—which in our opinion meets the requirements of the Results Act. Consequently, we believe that these agencies may be using too narrow a definition for the term “program evaluation.” According to an August 7, 1997, letter sent by the House Majority Leader to the Director, OMB, program evaluations should include all significant evaluations relevant to the development and future assessment of an agency’s plan. The letter suggested that this definition include reviews by the Inspector General, GAO, and others that deal with program implementation and operating policies and practices. Moreover, we found that many of the key attributes necessary for a quality plan, which were missing in the May draft plan, have been included in the September strategic plan. These include clear linkages between the agencies’ goals and their statutory authorities as well as the Department-wide goals; a better focus on external goals rather than internal processes (the result of a separation of strategic goals from management initiatives); and a more complete discussion of relevant performance measures, although some agencies are still developing baseline information and targets. For targets included in the plans, it is sometimes unclear whether they are annual or 5-year targets. Some of the management challenges facing USDA that we raised in the past have been included in the September plan. For example, reform of the milk marketing orders is included in the Agricultural Marketing Service’s plan as an objective. Similarly, USDA revised its strategic plan to address certain accounting and financial management issues that the draft plan did not adequately address. For example, the strategic plan reflects USDA’s efforts to strengthen controls for establishing and reestimating loan subsidy costs, as required under credit reform. Also, the strategic plan recognizes that additional staff and resources may be needed to ensure that USDA can accomplish the goals set out in the plan for the Office of the Chief Financial Officer. In addition to the suggestions that we have made herein to improve the clarity of some agencies’ plans, some more significant aspects of the strategic plan could be further improved. These improvements include (1) explaining interagency coordination for crosscutting issues and (2) addressing previously identified management problems. USDA’s September strategic plan provides more detailed information about other agencies—both internal and external to the Department—that share responsibilities for achieving the stated goals and objectives. The Department-wide strategic overview now includes links to agencies outside of the Department that are important partners to USDA agencies. In addition, the agencies’ plans not only identify the agencies that they coordinate and consult with but, in some cases, also identify the specific roles of these other agencies. However, we still could not determine from the information provided in most of the agencies’ plans whether consultations actually took place with these agencies to resolve crosscutting issues. Moreover, we could not determine whether an assessment of duplicative or complementary programs and activities was performed when the agencies were developing their goals and objectives.
In addition, we found that while many agencies’ plans explain that stakeholders were consulted during the plan’s development, they usually do not clearly identify the stakeholders. Although this information is not required to be included in the strategic plan by the Results Act, we believe that including information in the agencies’ plans that clearly identifies all stakeholders would be helpful. In addition, the September plan still does not include two management issues that we identified in the past. In particular, the Foreign Agricultural Service’s plan still does not address the numerous problems we have identified in agricultural trade programs. Furthermore, there is little evidence to suggest that substantial progress has been made in addressing our concerns about information technology. Although USDA has added time frames for completing the 14 objectives appearing in its Office of Chief Information Officer’s plan, each time frame has a completion date “through FY 2002.” We are concerned about the absence of earlier time frames, or at least interim ones, for resolving major Department-wide information technology problems, such as the Year 2000 issue. Without such time frames, it is not clear what priority USDA is really placing on solving its information technology problems or whether the Department has adequate strategies for doing so. In addition, although the Office of Chief Information Officer’s plan includes a number of goals and objectives to better manage its $1 billion in annual investments for information technology, we remain concerned about the lack of information in the plan on the resources needed to accomplish these goals and objectives and how they link to the agencies’ plans. We provided a draft of our observations on USDA’s strategic plan for the Department’s review and comment. We met with USDA’s Acting Chief Financial Officer and the Director, Planning and Accountability Division, Office of Chief Financial Officer, who told us that they were pleased that we had recognized the significant improvements made to the strategic plan and that the additional comments made by us would help them as they continue to refine and enhance the plan. In addition, USDA made the following observations:

USDA disagreed with our statement that program evaluations were not used to develop the Animal and Plant Health Inspection Service’s plan. While we agree that this plan recognizes the importance of using program evaluations to set performance goals, it does not clearly identify how the results of program evaluations were used to develop the strategic plan. Consequently, we have deleted this statement from our report to reflect the agency’s comment, but we would suggest that the Animal and Plant Health Inspection Service add language to clarify how program evaluations were used to develop its plan.

USDA noted that while there is no duplication of services between the Agricultural Marketing Service and the Grain Inspection, Packers and Stockyards Administration and the Foreign Agricultural Service, the mission statement of the Agricultural Marketing Service would be clarified, in future versions of the plan, to distinguish it from the mission statements of the other two agencies.

In connection with our observation about the Food and Consumer Service’s plan having too many goals that were structured around current programs rather than results, USDA told us that the Food and Consumer Service had considered structuring its plan around a smaller number of generic goals.
However, the agency chose to establish six goals corresponding to its existing programs because it believed that a plan structured in this manner would be more meaningful to all interested parties, including external partners and program participants. While we agree that setting up goals around familiar programs and activities may make the plan easier to understand, this approach may ultimately defeat the purpose of the Results Act—which is to require agencies to focus on outcomes by reevaluating what they do and why they do it. Therefore, we would suggest that the Food and Consumer Service consider restructuring the goals in its plan around broader outcomes rather than current programs.

USDA disagreed with our statement that the Foreign Agricultural Service plan still does not address the numerous problems that we have identified in the past relating to agricultural trade programs. For example, USDA believes that the Foreign Agricultural Service has addressed the concerns outlined in our report entitled U.S. Department of Agriculture: Foreign Agricultural Service Could Benefit From Better Strategic Planning (GAO/GGD-95-225, Sept. 28, 1995) by including information on agency resource allocation, overseas priorities, and trade opportunities under the management initiatives section of the plan. According to USDA, other issues raised by us, such as streamlining the agency’s foreign service, will be addressed in the annual performance plan. Although we agree that some issues that we have raised in the past can be appropriately addressed by including them in the annual performance plan, others cannot. Over the past decade, we have issued a series of reports that raise serious concerns about the fundamental operations of the Foreign Agricultural Service’s export programs, such as the Foreign Market Development Program, Market Access Program, P.L. 480 Program, and Export Credit Guarantee Program. We believe that solutions to these problems will require long-term planning that has not been adequately addressed in the strategic plan.

Finally, USDA stated that it did not believe that the Office of Chief Information Officer’s strategic plan had to be the medium to address specific solutions to the individual agencies’ issues identified in previous audits. Our observations on the Office of Chief Information Officer’s plan, however, did not discuss the need for specific solutions; rather, we noted that the plan lacked sufficient information on time frames, resources, and how the goals and objectives were linked to other USDA agencies’ plans. We believe that such information is essential to clearly identify what priority USDA is placing on solving its information technology problems and to determine whether the Department has adequate strategies for addressing these issues. This is especially important given the Secretary of Agriculture’s May 1997 direction to subcabinet officials that fixing USDA’s long-standing, pervasive information technology management problems must be a top priority.

USDA also disagreed with our statement that there is a perceived lack of attention to the Year 2000 issue. While we recognize that the plan discusses the Year 2000 issue, we are concerned about the stated time frame for completing this objective.
Because the plan states a “through FY 2002” completion time frame for the Year 2000 problem, we believe that it does not present an adequate strategy for resolving one of USDA’s most pressing information technology management problems, one that must be solved within the next 2 years.

Robert A. Robinson, Director, Food and Agriculture Issues; Resources, Community, and Economic Development Division, (202) 512-5138.

On July 14, 1997, we issued a report on the Department of Commerce’s draft strategic plan (The Results Act: Observations on Commerce’s June 1997 Draft Strategic Plan, GAO/GGD-97-152R). Commerce’s formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we reviewed Commerce’s revised strategic plan, compared it with the earlier draft version that we reported on in July, and identified significant changes or improvements that Commerce made in the areas covered by our July report, as well as areas or required plan elements where additional improvements could still be made as the plan evolves. We briefed your staffs on our findings on October 17, 1997. Our findings are summarized herein. Commerce’s draft strategic plan was inadequate and incomplete in several respects. Of the six elements required by the Results Act, four were included in the draft plan—a mission statement, goals and objectives, strategies for achieving goals and objectives, and a discussion of key external factors—but each of these had weaknesses, some more significant than others. For example, the mission statement included the core functions of the Department and mentioned the role of businesses and universities but not the important role also played by other government entities. While there were useful linkages among themes, goals, objectives, and responsible components, the goals and objectives were not as results oriented as they could be. The draft plan identified the Commerce bureau responsible for each goal and objective but did not adequately discuss strategies for achieving those goals and objectives or include required information describing the operational processes, staff skills, and technologies, as well as the human, capital, information, and other resources needed to achieve them. Many but not all key external factors were discussed, but the factors that were identified appeared to be used to justify programs rather than to show how those factors could affect the achievement of goals. Commerce’s draft strategic plan did not explicitly discuss the other two elements required by the Results Act—the relationship between long-term goals and objectives and annual performance goals and the description of program evaluations used to establish general goals and objectives and a schedule for future program evaluations. The draft plan said that relating long-term goals and objectives to annual performance goals will more appropriately be done in the Department’s future annual budget requests. The draft plan made limited references in various sections to a few past studies of Commerce programs, but those references did not describe how the studies were used to establish general goals and objectives, and the draft plan did not provide a schedule for future program evaluations. Concerning other plan components, the draft plan provided much useful information on Commerce’s statutory authorities.
However, the draft plan could have been more useful to Commerce, Congress, and other stakeholders if it had provided a more explicit discussion of crosscutting activities, the major management challenges the Department faces, and the Department’s ability to provide reliable financial and other management and program information to measure achievement of its goals. Commerce’s publicly issued strategic plan incorporated improvements in several areas and now addresses, to some extent, all of the elements required by the Results Act. The improvements that the Department made are steps in the right direction and address some but not all of the weaknesses discussed in our July 1997 report on an earlier draft of the plan. The plan’s discussions of strategic goals have been expanded to briefly indicate Commerce’s strategy for achieving each goal. For example, under the theme of keeping America competitive with cutting-edge science and technology, the National Oceanic and Atmospheric Administration (NOAA) has a goal to “predict and assess decadal to centennial change.” The plan now describes how NOAA will approach this goal by addressing questions dealing with air quality, ozone depletion, greenhouse warming, and climate change. Also, the plan now more explicitly acknowledges the need to link strategic goals and objectives to annual performance goals and includes an illustrative performance measure for each of the objectives under the three strategic themes. For example, the illustrative performance measure for two of the Patent and Trademark Office’s (PTO) objectives is “reduced pendency time.” This illustrative performance measure is one of several that address PTO’s goal of granting exclusive rights for limited times to inventors for their discoveries. Similarly, the plan’s three strategic theme chapters now more strongly emphasize the importance of external factors that could affect achievement of Commerce’s strategic goals and identify more key external factors. Under the economic infrastructure strategic theme, for example, the plan now includes a reference to the International Trade Administration’s (ITA) strategy to identify obstacles to U.S. exports and plans for removing such obstacles and marshaling U.S. government resources to eliminate barriers. Commerce’s revised strategic plan includes new sections on program evaluations, interagency linkages, and major management challenges. The new section on the role of program evaluations discusses current evaluations as well as future evaluation plans, provides examples, highlights the difficulties in specifying the level and focus of future evaluations because of year-to-year competition for funds, and states that future evaluations for many Commerce bureaus will be included in annual performance plans and budgets. The new section on interagency linkages acknowledges the importance of close interagency ties and emphasizes the Department’s commitment to strengthen those ties by reaching out to other federal agencies with complementary responsibilities. In addition, the partnership sections of the three strategic theme chapters now more fully identify and discuss Commerce’s shared mission responsibilities with other federal agencies. Under the economic infrastructure theme, for example, the plan now emphasizes those aspects of Commerce’s mission that are complementary.
It points out that Commerce chairs the Trade Promotion Coordinating Committee (TPCC), a 20-member interagency task force charged by the President and Congress with developing and implementing the National Export Strategy. The new section on management challenges recognizes and discusses three of the key management challenges facing the Department that were highlighted in our July report—weather service modernization, Census 2000, and financial management systems. Finally, the usefulness of the plan has been improved by the addition of an index or matrix, which shows which Commerce bureaus are responsible for which strategic themes and goals; and an appendix, which provides clearer and more comprehensive information on, and consolidates in one place in the plan, the statutory and other authorities for the Department and its bureaus, themes, and goals and objectives. While the overall quality of Commerce’s strategic plan has been improved since we reported in July 1997 on an earlier draft of the plan, further improvements still could be made in each of the elements required by the Results Act. As we indicated in our July report, the mission statement could be made more complete by explicitly recognizing that several other federal agencies as well as state and local governments also play major roles in the areas covered by Commerce’s three strategic themes. In the export controls area under the economic infrastructure theme, for example, the plan acknowledges that Commerce shares mission responsibilities with the Departments of Defense, Energy, and State and the Arms Control and Disarmament Agency, but the mission statement does not recognize this or other shared responsibilities. Similarly, the treatment of crosscutting functions could clarify Commerce’s role in the three strategic theme areas, specify how the Department’s efforts intersect with or complement the efforts of the other participants, and identify which other government entities Commerce coordinated its plan with and the results of that coordination. The Department’s September 30, 1997, letter transmitting the revised plan to Congress said that Commerce consulted with stakeholders, provided them and congressional committees with copies of its draft plans, and responded to stakeholder and congressional comments. According to Commerce’s transmittal letter, there were no unresolved contrary views concerning its plan. The strategies for achieving each strategic goal could be further expanded to specify how Commerce will hold its bureaus and managers accountable for meeting strategic goals and the resources that will be required to meet them. The linkages between long-term strategic goals and objectives and annual performance goals could be improved by (1) making the illustrative performance measures more outcome oriented (for example, the “number of counseling sessions,” used as a measure of ITA’s economic infrastructure objective to “increase trade assistance targeted to small and medium-sized businesses,” gauges activity rather than results) or (2) showing how the performance measures that were added lead to results. The discussion of external factors could identify and discuss more key factors beyond Commerce’s direct control that could affect achievement of its goals, such as congressional concerns about the Census Bureau’s plans for conducting Census 2000, and specify how the external factors that are identified will be addressed or mitigated.
The discussion of program evaluations could indicate more specifically how evaluations were used to establish goals/objectives and performance measures. Finally, the discussions of Commerce’s major management challenges and its capacity to provide reliable data on performance could acknowledge and discuss more of the major management challenges and data capacity problems that we emphasized in our July 1997 report, such as managing modern information technology and the “year 2000 computer problem.” Also, the plan could relate identified management challenges, including performance measurement limitations, to Commerce’s strategic goals and objectives, discuss their implications for achievement of its strategic goals and objectives, and indicate more specifically how and when the Department expects to overcome these challenges. The plan could be made more useful to stakeholders and would better meet the intent of the Act if it identified and discussed these types of problems as well as other material weaknesses or high-risk areas, such as NOAA’s fleet for acquiring marine data, that are disclosed in Commerce’s Federal Managers’ Financial Integrity Act reports or financial statements. Given the diversity of its programs and activities and its bureaus’ independence, Commerce faced an especially formidable challenge in developing its strategic plan. The Department developed a “thematic” strategic plan that covers its major functions and activities; is consistent with relevant statutory and other authorities; and addresses, to some extent, the various elements required by the Results Act. The plan’s readability, usefulness, and overall effectiveness as a planning and oversight tool could be enhanced by streamlining its organization and content to eliminate many of the details that do not relate directly to the Act’s requirements, thus reducing its 178-page length. We provided a copy of a draft of this briefing document to the Department of Commerce for review and comment. On October 17, 1997, the Director for Budget, Management and Information and Deputy Chief Information Officer provided us with written comments. He characterized our review as balanced and fair and said that the Department clearly agrees that it needs to do more planning with other agencies and crosscutting programs and that this is a very high departmental priority. In this regard, he said that the Department has stepped forward as the lead agency to link with the National Academy of Public Administration (NAPA) in forming the Performance Consortium and that a dozen other federal departments and agencies have joined Commerce in this effort to develop common planning activities and elements. The Director also said that the Department disagrees with our suggestion that its plan could be improved by providing additional information in certain areas and eliminating many of the details that do not relate directly to the Act’s requirements. He said that the Department made a specific decision to have a single, integrated strategic plan that covers all its bureaus. The Department believes that its plan demonstrates clearly how the Commerce bureaus fit together and provide critical service to the nation and that it addresses some of the administration’s key priorities and secures the buy-in of its bureaus. As Commerce’s strategic plan evolves, we continue to believe that its readability and specificity could be improved by streamlining its organization, content, and presentation. L. 
Nye Stevens, Director, Federal Management and Workforce Issues; General Government Division, (202) 512-8676.

On August 5, 1997, we issued a report on DOD’s draft strategic plan (The Results Act: Observations on DOD’s Draft Strategic Plan, GAO/NSIAD-97-219R). The Department of Defense’s formally issued Results Act strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we reviewed the strategic plan and compared it with DOD’s draft plan. On October 17, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. Our prior evaluation revealed that DOD’s draft plan included discussions of each of the six critical components required in strategic plans but that some were of higher quality than others. We noted, for example, that DOD’s draft plan contained a succinct mission statement and general goals and objectives that cover its major functions and operations and reflect its broad statutory defense responsibilities, but it did not include schedules for initiating and completing significant actions to achieve its goals. We also noted that, although DOD included some discussion of other elements, such as formidable management problems, these discussions could be more complete. And, we suggested several improvements to the draft plan, including that DOD (1) more completely state strategies for achieving its goals and include schedules of significant actions; (2) link and discuss how external factors could affect its ability to achieve its goals; (3) discuss how program evaluations were used in developing its goals and identify key issues for future evaluations; (4) discuss planned or ongoing actions to resolve persistent management problems, including time frames and required resources; and (5) identify and discuss coordination efforts for programs that crosscut with other agencies’ programs. Finally, we suggested that DOD develop one clear and succinct document to serve as its strategic plan. DOD revised its general goals and objectives to provide a clearer presentation. In line with the revision, DOD also rearranged its description of how the performance goals it is developing will be related to the general goals in an effort to improve the description’s clarity. It also defined some terms and included some additional information in the rearranged description. DOD’s general goals and objectives as reworded still cover its major functions and operations and reflect its broad statutory defense responsibilities. DOD also included a table listing the major management problems that we have identified as high-risk areas and the documents, such as the DOD Logistics Strategic Plan, that address each of the high-risk areas. However, it did not include an explanation of what actions will be taken to address the high-risk areas and when the problems in these areas are expected to be corrected. Additionally, DOD did not adopt our other suggested improvements, nor did it consolidate the strategic plan into one succinct document. We believe that DOD’s strategic plan could be further improved by adopting the suggestions we made in our August 5, 1997, report (summarized herein).
We believe that addressing these areas would provide decisionmakers and stakeholders the information necessary to ensure that DOD has well-thought-out strategies for resolving ongoing problems, achieving its goals and objectives, coordinating crosscutting activities, and becoming more results oriented, as expected by the Results Act. DOD officials reiterated that the Quadrennial Defense Review (QDR), alone, has been the Department’s finalized strategic plan since it was issued in May 1997. They also stated that they included a table listing the underlying plans that address the high-risk management problems noted in our August 5 report but did not attach or include significant detail from the underlying plans because that would have made their submission too voluminous. They said that they did not include details in summary fashion because that would not have provided enough information. They noted that DOD is working to address its management problems and said that those interested in seeing how the problems are being addressed should read the underlying plans. Additionally, the officials noted that although coordination of programs and activities that crosscut other agencies’ programs is not discussed in DOD’s strategic plan, DOD coordinates and cooperates extensively with other federal agencies as part of its ongoing strategic planning process. Finally, DOD officials commented that in congressional consultations, the only change suggested was that DOD reword a couple of its general goals and objectives.

David R. Warren, Director, Defense Management Issues; National Security and International Affairs Division, (202) 512-8412.

On July 18, 1997, we issued a report on the Department of Education’s draft strategic plan (The Results Act: Observations on the Department of Education’s June 1997 Draft Strategic Plan, GAO/HEHS-97-176R). Education’s formally issued strategic plan was submitted to OMB and Congress on October 3, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 18 report. On October 16, 1997, we briefed your staffs on further observations on the strategic plan. The key points from that briefing are summarized herein. The Department’s June 17, 1997, draft plan generally complied with the Results Act. Overall, it was a useful document that included all but one of the six elements required by the Act—it did not discuss how the agency’s long-term goals and objectives will be related to its annual performance goals. The plan’s long-term goals and objectives were succinct and logically linked to its mission statement, and the quality of the goals and objectives reflected the Department’s thoughtful deliberation in its efforts to comply with the Results Act. In addition, the plan addressed in some form all of the Department’s major statutory responsibilities. Although the plan presented a logical and fairly complete description of how the Department intends to achieve its mission, we identified a few areas in the draft plan that could be improved. We observed that the plan could benefit from more information, clarity, and context in some of its components. The plan should have included an explanation of the relationship between its long-term goals and objectives and its annual performance goals as well as a complete description and schedule of program evaluations. It could also have better addressed the Department’s major statutory responsibilities.
The Department has the primary responsibility for implementing federal education policy and programs, but several other federal agencies also provide education-related programs and services. In our past work, we have identified opportunities for consolidating programs in certain areas, such as job training and early childhood education, to eliminate inappropriate duplication. The draft strategic plan did a good job of identifying crosscutting program activities in elementary and secondary programs, but it did not identify or discuss activities for postsecondary programs that require coordination. By discussing the agencies and activities involved with the Department’s higher education programs, the strategic plan could provide Congress with a more complete picture of the scope of the Department’s coordination activities. In its discussion of core strategies for achieving its strategic goals and objectives, the Department identified several management challenges it will face in the coming years, but it provides little detail about these challenges and how it will meet them. This type of information could help the Department and its stakeholders identify major management problems that could impede the Department’s efforts to achieve its goals and objectives. Further, stakeholders could benefit from knowing what the Department has done, is doing, or plans to do to address such problems. The Department’s strategic plan issued October 3, 1997, included several significant improvements that make it more responsive to the requirements of the Results Act than its draft plan. The Department’s plan now addresses all six elements required by the Results Act. The plan addressed the relationship between the agency’s long-term goals and objectives and its annual performance plan—the only element missing from its draft plan—by including a matrix linking long-term goals and objectives in the strategic plan with fiscal year 1997 appropriation information and agency programs. The matrix indicated where programs have a significant number of activities or products supporting an objective. Though the strategic plan does not specifically describe how the Department intends to measure the performance of its programs each year, the matrix and the supplemental information on the Department’s performance indicators (shown in appendix A of the plan) provided a better understanding of the relationship between the Department’s strategic and annual performance plans. The Department states that the strategic plan was based, in part, on objectives and indicators in draft program performance plans prepared for key programs in the winter of 1997. According to the plan, the annual performance plan (which includes budget and performance plans for each of the Department’s programs) will further clarify this linkage. The Department’s October 1997 plan also provided a description of the program evaluations and assessments that were used to develop each of its four strategic goals as well as evaluations that will help to “inform the implementation” of the plan and provide data for the performance indicators supporting the goals. For example, at the end of the narrative describing Goal 2 (build a solid foundation for learning for all children), the plan stated that early evaluations of the Even Start program and crosscutting evaluations of Goals 2000 and the reauthorized elementary and secondary education programs were used to develop this goal. 
In addition to the evaluations highlighted in the introduction of each goal, appendix B of the plan described 57 key program evaluations and other studies, including information on when the evaluation data were or will be collected and, in many instances, how often the data will be collected in the future. The Department’s strategic plan also identified agency efforts that will help to avoid duplication among its evaluation efforts and reduce respondent burden. In addition, the narrative supporting the Department’s mission statement now encompasses the Department’s major statutory responsibilities. Our review of the Department’s draft plan indicated that it had failed to address the agency’s statutory requirements for basic education for adults, vocational rehabilitation, education of individuals with disabilities, and school-to-work opportunities. In its October 1997 plan, the Department addressed this weakness by including as one of its key agency functions “providing grants for literacy, employment, and self-sufficiency.” The plan more clearly addressed the Department’s civil rights function within the goals and objectives sections. During our review of the Department’s draft plan, we observed that the agency’s civil rights function, although reflected in its mission statement, was not addressed in the plan’s long-term goals or objectives. In support of two objectives related to goals 1 and 4, the Department added strategies for addressing its civil rights function. Objective 1.5 is to get families and communities fully involved with schools and school improvement efforts. In support of this objective, the plan states that the Department will create collaborative partnerships among parents, community groups, and other stakeholders that ensure equal educational opportunity, and provide civil rights training and technical assistance to build these linkages. Objective 4.2 is to provide Education’s partners the support and flexibility they need without diminishing accountability. In support of this objective, the plan adds the following strategy: to build civil rights partnerships to achieve shared civil rights objectives and secure timely improvements for students. As required by the Results Act, the Department described in its draft plan several factors outside the agency’s program scope and responsibilities that could negatively affect its ability to achieve its strategic goals. The Department strengthened this discussion in the plan by describing agency actions intended to mitigate seven key external factors that could affect the achievement of its long-term goals. For example, the plan states that school systems will need to undertake long-term investments in professional development and other capacity-building activities if education reforms are to succeed. Yet, pressures outside of the Department’s control may encourage school systems to focus instead on demonstrating short-term gains. To counter these pressures, the plan states that the Department will (1) work with program and technical assistance providers to highlight the importance of sustained professional development aligned with the standards and (2) emphasize the importance of professional development in its performance indicators. Consistent with our suggestions in our July report, the Department’s strategic plan also addressed several other issues. The plan specifically identified coordination activities related to the Department’s postsecondary education programs and activities.
It listed interagency coordination and data matches with, for example, the Social Security Administration, the Immigration and Naturalization Service, and the Selective Service as a strategy for ensuring that postsecondary student aid delivery and program management are efficient (objective 3.3). Core strategies to achieve this objective also included working with the Internal Revenue Service on tax refund offsets and address matches and the Department of the Treasury on administrative offsets to increase defaulted student loan collections. In addition, the Department has taken the important step of revising the date for its Year 2000 conversion performance indicator. The plan established 1998, rather than 1999, as the year all of its relevant computer systems will be Year 2000 compliant, thus allowing more time for system testing and validation. While the Department had previously included the Year 2000 conversion effort in its draft plan, it had established in that draft December 31, 1999, as the deadline for repairing seven mission-critical systems. As we pointed out, the Year 2000 problem is not technically challenging; however, it is massive and complex (a simplified illustration of the underlying defect appears at the end of this discussion). With about 800 days before the Year 2000 deadline, the current plan's performance indicator of assuring that all systems have been evaluated and, where necessary, converted to make them Year 2000 compliant by December 31, 1998, is a major improvement. In recognition of the critical challenge facing federal agencies in dealing with this issue, GAO has added the Year 2000 problem as one of its high-risk areas. The Department has made significant strides in its October 1997 plan in recognizing major management challenges facing the Department. In our review of its draft strategic plan, we discussed the Department's particularly difficult challenge in improving its information systems for the student aid program. We discussed the problems that the lack of an integrated student financial aid system creates. We also discussed the Department's reengineering effort, known as "Easy Access for Students and Institutions (EASI)," which was being developed to redesign the entire student assistance program delivery system. In its draft strategic plan, the Department identified EASI as an important part of its core strategy for integrating its aid systems. However, as we pointed out, the project had a history of false starts. We subsequently recommended in another report that the Department first develop a systems architecture to address system integration deficiencies before proceeding with new major systems development. The Department's plan eliminated EASI from its core strategies and adopted the broader core strategy of (1) developing an "integrated, accurate, and efficient student aid delivery system" and (2) ensuring that systems are mission-driven and consistent with the Department's information technology architecture. In our July 18, 1997, report we also highlighted problems with the Department's management, systems, and processes that affect its ability to ensure financial accountability, particularly among its student financial aid programs. The Department recognized these problems in its strategic plan and listed the following as its most important challenges: (1) student aid systems that are incompletely integrated, (2) financial data from aid programs that are only partially consolidated at the student level, and (3) too many contractors who use different operating systems.
The plan stated that correcting this situation will require the redesign and modernization of the federal student financial aid system using the latest information engineering and computer system technology. To address these and other issues, the Department included under Goal 3 a new, separate objective for the management of its postsecondary student financial aid programs: "Postsecondary student aid delivery and program management is efficient, financially sound, and customer-responsive" (objective 3.3). The Department identified numerous specific strategies and performance indicators that will help it address and track agency efforts to achieve this objective. Postsecondary program management. To improve efforts in this area, the strategic plan states that the Department will develop and utilize a risk management system to target compliance and enforcement activities on poorly performing institutions while reducing burdens on high performing ones. Responding to a recommendation from its fiscal year 1996 Department-wide financial audit, the Department is currently developing this new risk analysis system to better target its limited monitoring resources toward the highest-risk institutions. However, this system will not be fully implemented until fiscal year 1998. Another Department strategy to improve the management of its student financial aid programs involves expanding the use of the case management approach to maximize the effectiveness of institutional oversight. According to the plan, this approach encompasses review of recertification applications, compliance audits, financial statements, risk management system inputs, and program reviews. Financial integrity. The Department stated in its draft strategic plan that poor data from the Federal Family Education Loan Program (FFELP) have prevented it from obtaining an unqualified audit opinion on its annual financial statements for the past 4 years. The Department's plan included several core strategies for addressing this data integrity problem, such as integrating the multiple student aid databases based on student-level records and improving contract performance for major information systems by increased use of performance-based contracting. The Department also added to its plan an indicator to track the accuracy and integrity of data supplied by applicants, institutions, lenders, and guaranty agencies. Data from these sources have been problematic in the past. In addition, the Department's October 1997 strategic plan included a new performance indicator related to the financial integrity of the Department's postsecondary financial aid programs. It states: "There will be no material internal weaknesses identified in the student aid programs' portions of the Department-wide financial statement audit and no student aid program issues that prevent the Department from receiving an unqualified opinion on the financial statements." This indicator is linked to and supports indicator 26, which now definitively states that auditors will issue a clean opinion on the Department-wide financial statements every year. Although, in general terms, the plan better specifies how the Department will address this critical financial management weakness, it still has not completely clarified how it will resolve the data integrity issues for FFELP or accurately estimate the government liability that has prevented the Department from obtaining an unqualified opinion.
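To make concrete the Year 2000 defect referred to above: systems of this era commonly stored years as two digits, so the year 2000 ("00") compares as earlier than 1999 ("99"), and date comparisons silently fail. The following sketch in Python is purely illustrative; the function names, the loan scenario, and the windowing pivot are hypothetical examples of ours, not features of any Department system.

    # Hypothetical illustration of the two-digit-year (Year 2000) defect.
    # Nothing here is drawn from an actual Department of Education system.

    def loan_past_due_buggy(due_yy: int, today_yy: int) -> bool:
        # With two-digit years, a loan due in 1999 (99) checked in
        # January 2000 (00) appears not yet due, because 0 > 99 is false.
        return today_yy > due_yy

    def expand_year(yy: int, pivot: int = 50) -> int:
        # A common remediation ("windowing"): treat two-digit years below
        # an assumed pivot as 20xx and the rest as 19xx. The pivot of 50
        # is an arbitrary choice for this illustration.
        return 2000 + yy if yy < pivot else 1900 + yy

    def loan_past_due_fixed(due_yy: int, today_yy: int) -> bool:
        # Compare fully expanded four-digit years instead.
        return expand_year(today_yy) > expand_year(due_yy)

    print(loan_past_due_buggy(99, 0))   # False -- the defect
    print(loan_past_due_fixed(99, 0))   # True  -- after windowing

Each individual repair of this kind is trivial; the magnitude of the problem, as noted above, comes from locating, correcting, and testing every such comparison across many interdependent systems before an immovable deadline.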
On October 21, 1997, the Department provided written comments on a draft summary of our observations of its October 3, 1997, strategic plan. The Department generally had no objections to our observations but wanted to clarify several issues we raised in the draft. To measure the performance of Department programs, the agency will include program performance plans in its detailed annual plan currently being prepared by Department staff in conjunction with OMB. The individual program plans will be linked directly to “budget activity lines” and will accompany the Department’s fiscal year 1999 budget justification to Congress in February 1998. The Department submitted 17 draft program plans to Congress in March 1997 that, among other things, identified each program’s goals and objectives, key performance indicators, program evaluations and other data sources, the year the performance indicator data will first be available, and key strategies for achieving the objectives. These performance plans were developed by the program offices and have been reviewed extensively internally—some have been shared with stakeholders. Program performance plans covering the Department’s approximately 100 program activities will include essentially the same information as the 17 draft plans and will be reviewed and updated this fall for inclusion in the agency’s annual plan. The Department’s Chief Information Officer has contracted with Lockheed to work with the agency to ensure that it meets its Year 2000 performance indicator target of December 31, 1998. This activity will be monitored at the highest levels within the agency, and progress will be reported at least quarterly through the strategic planning tracking process. The Department is engaged in several activities that should help to resolve the data integrity issues for FFELP and accurately estimate the government’s liability for this program. Specifically, the Department (1) has developed a workplan, approved by the independent accounting firm of Price Waterhouse, to address concerns about the government’s liability estimate in time for the Department’s fiscal year 1997 audit; (2) is comparing data from the National Student Loan Data System (NSLDS) with audited data submitted by selected guaranty agencies; and (3) is working with E-Systems, Inc., and direct loan origination and servicing contractors to ensure the accuracy and timeliness of direct loan data submitted to NSLDS. Carlotta C. Joyner, Director, Education and Employment Issues; Health, Education, and Human Services Division, (202) 512-7014. On July 11, 1997, we issued a report on the Department of Energy’s (DOE) draft strategic plan dated June 16, 1997 (Results Act: Observations on the Department of Energy’s Draft Strategic Plan, GAO/RCED-97-199R). DOE formally submitted its strategic plan to OMB and Congress on September 30, 1997. As requested, we have reviewed this strategic plan and compared it with the observations in our July report. On October 14, 1997, we briefed your staffs on our further observations on DOE’s strategic plan. The key points from that briefing are summarized herein. As we reported in July 1997, the draft plan did not meet all the requirements of the Results Act. It fully addressed two of the six required elements of the Results Act—the mission statement and goals and objectives—partially addressed a third, and acknowledged that three others needed to be completed for the September plan. 
Furthermore, the draft plan did not expressly link its missions, goals, objectives, and strategies with DOE's relevant major statutory responsibilities, although we noted that the missions and activities defined in DOE's draft plan were generally supported by legislation and that the draft plan accurately reflected all of DOE's major legislative requirements. However, we observed that DOE's missions have evolved from those that Congress envisioned when it created the Department in 1977 and that the Results Act provides a forum through which Congress can review the appropriateness of these missions. Our July 1997 report also noted that the draft plan did not identify programs and activities that are crosscutting or similar to those of other federal agencies. In addition, some of the draft plan's measures addressing management challenges appeared limited in scope or were unclear. Finally, we noted several weaknesses in the information system that DOE uses to track performance measures. Following our July report, the Secretary of Energy requested our continued involvement in refining DOE's plan. On September 2, 1997, we provided the Department with our comments on its revised draft strategic plan—dated August 15, 1997 (Results Act: Observations on the Department of Energy's August 15, 1997, Draft Strategic Plan, GAO/RCED-97-248R). In that report, we noted that the revised draft plan was much improved over the earlier draft. Specifically, the revised plan included all six elements required by the Results Act. However, we reported that some of the strategies and many of the measures still did not appear to be results oriented. DOE's September 30, 1997, strategic plan incorporated several improvements that make it more responsive to the requirements of the Results Act than was the June draft plan. As noted above, the June draft fully addressed two of the six required elements, partially addressed a third, and omitted the remaining three. The September plan complies with the Results Act requirements by including the three missing sections and fully developing the partially addressed element by adding a discussion of resource requirements. In describing its resource requirements, the September plan states that the Department assumed budget appropriations consistent with the administration's and Congress' agreed-upon 5-year budget deficit reduction targets through fiscal year 2002. Our July report also observed that the draft plan did not expressly link its missions, goals, objectives, and strategies with DOE's relevant major statutory responsibilities; the September plan now shows the linkage between the Department's business line objectives and its relevant major statutory responsibilities. Furthermore, DOE's strategic plan now acknowledges—in its discussion of key external factors—that the Department participates in some crosscutting government functions and initiatives that are beyond the mission of any one agency. While the plan does not describe how DOE will work in concert with other agencies, it does acknowledge DOE's commitment to work closely with other federal agencies, OMB, and Congress to ensure that its programs provide critical and unique contributions to these crosscutting efforts. DOE did not adopt all of the suggested improvements noted in our July and September reports.
These suggestions were based on several of our past reports and represent areas in which we have had disagreements with DOE in the past. However, we still believe that if the Department made these suggested changes—as outlined in our July and September reports—the plan would better address the goals of the Results Act. One example, identified in our September report, concerns our evaluation of the vulnerability of U.S. oil supplies to disruptions. On the basis of that report, we believe that DOE's measures for its objective to "reduce the vulnerability of the U.S. economy to disruptions in energy supplies" are not very useful indicators of how the Department's programs will affect the economy's vulnerability. DOE's measures are based on six strategies: to (1) support activities capable of ending the decline in domestic oil production, (2) maintain an effective Strategic Petroleum Reserve, (3) diversify the international supply of oil and gas, (4) develop alternative transportation fuels and more efficient vehicles, (5) maximize the productivity of federal oil fields, and (6) take measures to avoid and respond to domestic energy disruptions. However, our report on the vulnerability of oil supplies observed that, in today's world oil market, replacing oil imports with domestically produced oil would only marginally lower the potential costs of disruptions because oil prices are set in the global marketplace and the price for all oil rises during disruptions. While we agree that one of DOE's strategies—diversifying the international supplies—can lead to measures that contribute to reducing the vulnerability of the U.S. economy to disruptions in the energy supply, our vulnerability report offers five other factors that we believe would better focus DOE's efforts in developing strategies and measures for its objective of reducing the vulnerability to energy supply disruptions: (1) excess world oil production capacity, (2) the oil intensity of the U.S. economy, (3) the oil dependency of the U.S. transportation sector, (4) world oil stocks, and (5) the dependence of the U.S. economy on oil imports. Finally, our July report noted several weaknesses in the information system that DOE uses to track performance measures. However, DOE's September 1997 strategic plan makes no reference to these problems. We still believe that DOE will need to modify the information system it anticipates using to track the strategic plan's performance measures and identify management problems. In addition, we noted that the information used to update the tracking system depends on various other information systems that we and DOE's Inspector General have found contain incomplete or inaccurate information. While DOE's strategic plan is organized along four business lines—energy resources, national security, environmental quality, and science and technology—the agency is organized by program, and it is not clear from the plan which program offices are accountable for implementing the different sections of the plan. For example, several of the Department's program offices have science missions, including the Office of Nuclear Energy, Science and Technology and the Office of Energy Research; the Office of Nonproliferation and National Security and the Assistant Secretary for Defense Programs have defense missions.
However, the plan does not describe how the Department's current organizational alignment is suited to the plan's four business lines, nor does it provide a matrix showing which program offices will be held accountable for implementing each section of the plan. On October 10, 1997, we met with DOE officials, including the Acting Director, Office of Strategic Planning, Budget and Program Evaluation, to obtain the Department's comments on our observations about its strategic plan. DOE officials made three points. First, they stated that development of performance measures is difficult—especially in the science area—and that they recognize the need to continually work to improve these measures. Second, in reference to the disagreements that they had with some of the policy positions of our past reports, they noted that these differences will continue; however, they do not believe these are strategic planning differences. We disagree because such differences have an impact on the substance of the plan. For example, if DOE uses incorrect measures, it will not know if it has achieved its goals and objectives. Finally, the officials acknowledged that DOE's strategic plan does not show program accountability but stated that the Department has developed a draft matrix document that provides a crosswalk between its performance measures and the programs. They also pointed out that after the Department-level matrix is completed, each program will need to cascade performance measure accountability to its subunits. Victor S. Rezendes, Director, Energy, Resources, and Science Issues; Resources, Community, and Economic Development Division, (202) 512-3841. On July 11, 1997, we issued a report on the Department of Health and Human Services' (HHS) draft strategic plan (The Results Act: Observations on the Department of Health and Human Services' April 1997 Draft Strategic Plan, GAO/HEHS-97-173R). HHS submitted its revised strategic plan to OMB and Congress on September 30, 1997. As requested, we reviewed the revised plan and briefed your staffs on our observations. The key points from that briefing are summarized herein, together with a brief overview of our comments on the initial HHS plan. We found HHS' draft strategic plan to be missing most of the key elements required by the Results Act and to be more a summary of current programs than a document projecting actions HHS might take in the next several years to achieve the goals of the Act. Although HHS had developed a mission statement that successfully captured the broad array of its activities, the draft plan did not define measurable goals and objectives, describe approaches to achieving these goals and objectives, describe the relationship between long-term goals and objectives and annual performance, identify key external factors beyond HHS' control, or describe how program evaluations were used to establish or revise strategic goals. Furthermore, although the draft strategic plan recognized that many different HHS operating divisions and programs are responsible for meeting each of HHS' goals, it did not discuss strategies for coordinating such efforts, nor did it discuss HHS' need to coordinate its work with other federal agencies. Finally, we observed that HHS faces many major management challenges in carrying out both its program responsibilities and the type of strategic planning and performance measurement the Results Act requires.
Two challenges that we highlighted were HHS' reliance on state, local, and private agencies to carry out many programs for which it is responsible and HHS' maintenance of financial management and program integrity. Although we believed HHS was aware of these challenges, its plan did not address them. We pointed out, however, that by acknowledging these challenges in its plan, HHS could foster a more useful dialogue with Congress about its goals and the strategies for achieving them. HHS' revised strategic plan incorporated many of the elements that were missing from its earlier draft, making it a more useful document and one that is more responsive to the requirements of the Results Act. The current strategic plan includes all six critical elements as required by the Act. The most notable improvement is in the plan's outline of objectives for accomplishing HHS' six strategic goals. The objectives are largely focused on outcomes, such as reducing the use of illicit drugs, and they are defined in measurable terms, such as increasing the percentage of the nation's children and adults who have health insurance coverage. The plan also identifies for each strategic objective the key measures of progress. For example, the two measures to determine the reduction of tobacco use are the rate of tobacco use among the young and the rate of smoking among adults. HHS also added descriptions of its efforts to coordinate both internally among its operating divisions and externally with other departments and agencies. It describes, for example, a range of approaches to improve internal coordination among the various operating divisions, such as special initiatives managed by two or more operating divisions and coordinating councils that integrate planning and policy development across HHS. The discussions of several strategic objectives include a recognition of the need to cooperate with other departments and agencies. For example, the plan indicates that HHS' substance abuse treatment and prevention programs will work with the Health Resources and Services Administration as well as the Departments of Education and Justice to support an initiative to provide information to communities on the incidence of street and gang violence, domestic violence, and substance abuse and violence. The plan is also improved by HHS' discussion of three types of challenges that could significantly affect its ability to achieve its strategic goals—external factors, management issues, and data administration. The plan discusses these issues as general obstacles to achieving the Department's overall goals. Moreover, it describes the Department's current status in improving performance with respect to these specific issues. As its strategic planning process evolves, HHS' plan should continue to reflect its progress toward results-oriented management. In the meantime, however, we observed several opportunities for further improvements in the plan. The greatest opportunities for improvement, in our view, are in HHS' discussion of its strategies for accomplishing its objectives. First, its strategies are not clearly linked to the attendant measures of success, making it difficult to determine how the strategies would contribute to the desired outcomes. For example, to increase the economic independence of families on welfare, the plan specifies three strategies—providing technical assistance, promoting employment, and improving access to child care.
The four measures of success for economic independence, however, are all related to employment, with no apparent relationship to the strategies for child care or technical assistance. Second, the plan does not discuss the effectiveness of the outlined strategies, making no mention of either existing evaluations to indicate what is known about the effectiveness of these strategies or plans for future evaluation to determine their effectiveness. For example, some of the strategies were built around a common HHS approach to support state-administered programs: technical assistance, training, and identifying and disseminating best practices. Yet, we have found in our work on these programs that there have been problems in implementing such strategies: in some cases, HHS' technical assistance was inadequate, the capacity of regional offices to provide assistance and training was limited, and the dissemination of research and best practices was lacking. In addition to drawing on past evaluations, HHS' plan should identify future evaluations to determine how well its strategies are working. Third, the plan does not discuss the resources required to implement the strategies. For example, strategies to enhance the fiscal integrity of the Health Care Financing Administration (HCFA) programs include consolidation of Medicare payment systems to improve HHS' ability to identify aberrant billing and improve payment accuracy. However, there is no mention of the resources necessary to implement such a strategy. Fourth, although the plan identifies key external factors that affect achievement of the strategic objectives, there is little discussion of how HHS intends to mitigate the effects of these factors. For example, a key external factor to achieving a number of objectives is the state of the economy, yet the plan does not indicate how the strategies will adjust to changes in the economy. While the plan reflects a recognition of management and information challenges to achieving HHS' goals, including those mentioned in our July correspondence, it provides little discussion of potential solutions. For example, the plan acknowledges HHS' reliance on state, local, and tribal government organizations, contractors, and private entities and mentions the need to coordinate with them but is less specific on how it would do so. Similarly, HHS' plan recognizes the importance of improving its financial management information. In July, we reported that HHS had not addressed its problems in complying with the Government Management Reform Act of 1994 (GMRA), compliance with which would furnish decisionmakers with reliable, consistent financial data. While the revised plan acknowledges that obtaining an unqualified or clean opinion on its financial statements is a fundamental and critical objective and challenge for HHS, it does not specify the corrective actions and timetables to address these concerns. With respect to information technology, we noted in our July correspondence that the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 set forth requirements that promote more efficient and effective use of information technology to support agency missions and improve program performance. While the plan identified several information technology initiatives that may help HHS achieve some program objectives, the plan does not discuss how HHS intends to identify and coordinate information technology investments in support of overall Department-wide goals and missions. We provided HHS officials with a draft of this appendix.
While they were pleased that we recognized the improvements made to the plan, they agreed that the plan could be further improved. In their view, strategic planning is a continuous process; ongoing assessments and updates will be needed to strengthen the plan and ensure that it continues to provide relevant direction for their program activities. Bernice Steinhardt, Director, Health Services Quality and Public Health Issues; Resources, Community, and Economic Development Division, (202) 512-7119. On August 8, 1997, we issued a report on the Department of Housing and Urban Development's (HUD) draft strategic plan (Results Act: Observations on the Department of Housing and Urban Development's Draft Strategic Plan, GAO/RCED-97-224R). HUD submitted its strategic plan to OMB and Congress on September 30, 1997. As requested, we have reviewed the strategic plan and compared it with the observations in our August report. On October 14, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. HUD's draft strategic plan included five of the six components required by the Results Act. The plan was missing a description of how program evaluations were used in establishing the strategic objectives, including a schedule of future evaluations. Also, HUD's treatment of the other five required components did not yet fully comply with the Results Act or OMB's guidance. The draft included two separate mission statements, which did not define the agency's basic purpose or focus on its core programs. One of the statements, which focused on restoring the public's trust, was not clearly supported by HUD's strategic objectives. While the strategic objectives covered HUD's major program activities, they did not clearly describe how HUD would assess whether it was making progress toward achieving those objectives. Also, the discussions of HUD's strategies to achieve its objectives and of the relationship of annual performance goals to the strategic objectives did not address the resources needed or the type of information needed for its performance goals. The draft strategic plan only partially met the requirements of the Results Act to describe key factors that are external to an agency and beyond its control that could significantly affect the achievement of its objectives. The plan also did not cover the time frames specified by the Results Act. The draft strategic plan generally reflected consideration of HUD's key authorizing statutes. The draft also discussed HUD's consultation process and its many community partnerships but did not reflect whether the Department coordinated with other federal agencies and did not identify programs or activities that were crosscutting or similar to those of other agencies. HUD's draft strategic plan acknowledged that it faced significant management challenges and broadly described how these problems would be addressed. However, we observed that HUD could improve the plan by more fully integrating its management reform plan with the strategic plan and providing specific information about how the plan addressed the Department's financial and management information weaknesses. HUD's capacity to provide reliable information on the achievement of its strategic objectives was uncertain because the draft strategic plan had not yet been developed sufficiently to identify the types and sources of the data needed to evaluate progress.
The plan identified some annual performance goals for which obtaining reliable data could be difficult because of the weaknesses associated with HUD’s current financial and management information systems. HUD’s September 30, 1997, strategic plan covers all six components required by the Results Act and incorporates many improvements that make it more responsive to the requirements of the Act. Specifically, the plan discusses past evaluations and refines HUD’s mission statement. The new mission statement clearly identifies HUD’s role in achieving the nation’s housing mission. However, the language remains very broad in terms of how HUD can empower communities and individuals to succeed. A mission statement related to the management reforms was reworked and is now included in the plan as the Secretary’s personal mission to emphasize the importance the Secretary places on these reforms. The strategic plan also links the strategic objectives and the annual performance goals, expands the discussion of external factors, and covers the appropriate time frame. Additionally, the revised plan addresses HUD’s consultation process and interagency coordination efforts. The strategic plan discusses HUD’s ongoing and planned coordination with the Departments of Health and Human Services and Labor. This coordination will give HUD the opportunity to identify in future plan updates any programs that complement or duplicate those administered by other federal agencies. HUD has also improved the discussion of the management problems it faces and the corrective actions it plans to take. The strategic plan now includes (1) an explanation of the agency’s current efforts to integrate its program and financial management systems and clean up the data in those systems, (2) a discussion of HUD’s plans to address the issues that led to a qualified opinion on the agency’s financial statements for fiscal year 1996, (3) a discussion of the reform efforts that will affect each objective, and (4) an appendix that lists the management reform goals to be completed in fiscal year 1998. The plan also includes a brief discussion of HUD’s efforts to ensure the quality of performance measurement data by requiring program offices to develop quality assurance plans that will be reviewed and approved by the Chief Financial Officer. However, the agency’s ability to accurately measure progress in achieving its strategic objectives is uncertain because doing so depends on completing its goal of integrating program and financial management systems, cleaning up the data in most of HUD’s existing systems, and receiving accurate reporting from local and federal entities. Despite the improvements in the discussion of HUD’s management problems, the plan lacks details on how the agency will address the internal control weaknesses reported by the Office of Inspector General in the agency’s financial statement audit report. Some elements of HUD’s strategic plan could be further improved to better meet the purposes of the Results Act. HUD will have an opportunity to address these issues as the strategic plan evolves further over time. While the plan includes a listing of the program evaluations under each objective, it does not describe how the evaluations were used to develop the strategic objectives and does not include a schedule of future evaluations. Although wording was added to the plan stating that evaluation schedules are determined on an annual basis, the plan does not include a schedule, which is required by the Results Act. 
HUD’s discussion of its strategies does not discuss the staff, capital, and technology resources needed to achieve the Department’s strategic objectives, as called for by the Results Act. This issue is a critical one for HUD because of its downsizing efforts and planned organizational changes. While the discussion of external factors was expanded, the plan does not discuss the impact on the strategic plan or on HUD’s programs if the legislative proposals discussed in the plan are not enacted. Additionally, some of the discussions indicate that the external factors may have such a great impact on the strategic objectives that HUD may not be able to achieve its objectives. For example, under the strategic objective to provide self-sufficiency opportunities for low-income individuals, the plan states that HUD has no direct control over the extent to which funds will be used to address this objective. Furthermore, the plan states that “realistically, relatively few people who have reached their 30s with little education, with families, and little work history, will achieve great success in this economy.” While HUD included additional information to aid in the assessment of the strategic objectives, it is not yet clear whether the achievement of a number of the objectives will be assessable. The evaluation component is not yet complete, the discussions of strategies omit significant information about resources, and the discussions of external factors indicate that HUD sees significant impediments to achieving its objectives. As HUD is developing future strategic plan updates and annual performance plans, additional consideration should be given to what each objective is intended to achieve and how that can best be assessed. We provided HUD with a draft of this section of the report for review and comment. We met with HUD officials from the Office of the Chief Financial Officer and the Office of Policy Development and Research, who generally agreed with our observations. They said that information in the annual performance reports and the next update of the plan, which should be available around the end of fiscal year 1998, together should address our observations. Additionally, they said HUD prefers to keep the strategic objectives broad so that the program offices maintain a long-term focus and continue to think of ways to achieve the objectives. Judy A. England-Joseph, Director, Housing and Community Development Issues; Resources, Community, and Economic Development Division, (202) 512-7631. On July 18, 1997, we issued a report on the Department of the Interior’s draft strategic plan (The Results Act: Observations on the Department of Interior’s Draft Strategic Plan, GAO/RCED-97-207R). Interior formally submitted its strategic plan to OMB and Congress on September 29, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 18 report. On October 14, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. Interior’s draft strategic plan did not meet the requirements of the Results Act. The Department-wide strategic overview contained the Department’s overall mission and goals and referred to the plans of its eight components or subagencies for information on the six elements required by the Act. However, half of the eight subagency plans lacked at least two of the six required elements. 
Furthermore, the overall quality of the plan was not yet sufficient to achieve the purposes of the Act. Among other things, it did not provide clear linkages between the subagencies' goals and objectives and the contributions of these goals and objectives to the Department's major goals, and some of the goals and objectives in the subagencies' plans were not stated in a manner that would allow for a future assessment of whether the goals have been achieved. We pointed out that Interior has a number of crosscutting areas in which a more coordinated strategic planning process would help to provide Department-wide information on programs' results. These include environmental protection and remediation, stewardship assets, Indian programs, land and natural resources management, and recreation programs. Although Interior identified information management resource goals in its strategic plan, it did not clearly delineate how it plans to achieve those goals or measure its success in achieving them. Traditionally, Interior has allowed its subagencies to independently acquire and manage information technology. This culture has resulted in inefficiencies in technology investments and information sharing. We also noted that Interior needs to continue to address certain accounting and financial management internal control weaknesses, including weaknesses in accounting for investments in fixed assets and project cost accounting controls. The September 1997 strategic plan—the Department's strategic overview plan as well as each of the eight subagencies' plans—incorporates several improvements that make it more responsive to the requirements of the Results Act than was the draft plan. As a whole, the plan provides a clearer presentation of how it covers the six required elements of the Act by providing explicit linkages between the requirements of the Act and the relevant parts of the plan. Furthermore, each of the four subagencies that had lacked a number of required elements in the draft plans has added or further developed many of these elements in the issued plan. In particular, each of these four subagencies—the National Park Service (NPS), Fish and Wildlife Service (FWS), Bureau of Indian Affairs (BIA), and Minerals Management Service (MMS)—added material to address the relationship between long-term goals and performance goals. In addition, both BIA and FWS have included discussions of the approaches or strategies they will use to achieve their respective goals and objectives—information that was not present in the draft plans for these subagencies. Also, additional information was added to the overview section of the plan to more fully explain the Department's approach to program evaluations. In addition to more fully addressing several required elements, the September 1997 overview and subagencies' plans now contain explicit linkages between the subagencies' goals and objectives and the contributions of these goals and objectives to the Department's goals and commitments. Also, many of the goals included in the issued plan have been restated in a quantitative manner. These are positive changes and will facilitate a future assessment of whether the goals have been or are being achieved. Consistent with the suggestions in our July report, Interior included a section in the departmental overview discussing its current efforts to address crosscutting issues throughout the Department and its strategy for further coordination.
Interior’s strategic plan also includes a more aggressive goal for addressing internal control weaknesses. Additionally, a section has been added that specifically discusses accountability for personal, real, and museum property (fixed assets). The plan also discusses integrating the personal and real property systems with financial and procurement systems that would appear to represent progress toward attaining project cost accounting. There are a number of aspects of Interior’s plan that still can be improved to better meet the purposes of the Results Act. In particular, some of the subagencies—BIA, the Bureau of Reclamation (BOR), FWS, and NPS—need to more fully develop the program evaluation component of their plans. While each of these subagencies, as well as the departmental overview, has made revisions to its draft plan in this area, the revisions still do not provide a complete understanding of specifically how program evaluations were used in developing the plan or what future evaluations will be done and when for each of the subagencies. Including this kind of information is important because without it, it is difficult for both the subagencies and other users of the plan to have confidence that the goals are the correct ones and that the strategies will be effective. Furthermore, while the subagencies have made progress in restating a number of their goals and objectives in a more measurable way as we suggested in our July report, this area of the plan still can to be improved. Many of the goals and objectives are still process oriented, not results oriented, and/or expressed in a manner that will make meaningful performance measurement difficult. For example, one of the strategic goals in FWS’ plan states that: “By 2002, the current maintenance backlog will be reduced annually.” As stated, it is not clear what level of performance is expected or will be considered acceptable in achieving this goal. We observed similar difficulties in several of the subagencies’ plans. While the September 1997 plan now includes a discussion of ongoing efforts to coordinate a number of crosscutting issues facing the Department and identifies its future approach in this area, the plan still does not explicitly address the crosscutting issues identified in our July report. These included the Department’s environmental protection and remediation, stewardship assets, Indian programs, land and natural resource management, and recreation programs. In our July report, we noted that the plan needed to more fully address information management issues. This need still exists in the September plan. In the September plan, Interior has identified goals and actions needed to implement the provisions of the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 but does not clearly describe how it plans to achieve, or to measure its success in achieving, its goals. Also, Interior needs to explain how it plans to address the Year 2000 problem as well as significant information security weaknesses—two issues that we have identified as high risk across the federal government. Furthermore, the September plan now states that Interior’s critical information systems will be Year 2000 compliant by September 30, 2000—9 months after the January 1, 2000, deadline. On October 10, 1997, we met with Interior officials, including the Deputy Assistant Secretary for Budget and Finance, to obtain the Department’s comments on our observations about its strategic plan. 
Interior believes that the September 1997 plan meets the requirements of the Results Act. However, the Department acknowledges that improvements can be made in several areas. Interior noted that the development of its strategic plan is an iterative process and that future versions of the plan will address areas in which we and others show a need for improvement. Furthermore, in connection with crosscutting issues, Interior commented that it believes that its current efforts and initiatives in this area are sufficient. However, in our view, focusing on results implies that federal programs that contribute to the same or similar results should be closely coordinated to ensure that goals are consistent and that, as appropriate, program efforts are mutually reinforcing. In connection with information management issues, Interior commented that it has detailed plans to address Year 2000 issues and does not believe the level of detail that we suggested is necessary for inclusion in a strategic plan. We continue to believe that clear discussions of Year 2000 and information security issues would strengthen the strategic plan and provide linkages for its operational plans. This disclosure would help Congress, departmental customers, and the general public to better understand the Department's goals, strategies, and measures. Victor S. Rezendes, Director, Energy, Resources, and Science Issues; Resources, Community, and Economic Development Division, (202) 512-3841. On July 11, 1997, we issued a report on the Department of Justice's February draft strategic plan (The Results Act: Observations on the Department of Justice's February 1997 Draft Strategic Plan, GAO/GGD-97-153R). On August 15, 1997, Justice revised its plan, and we testified on September 30 on the plan's compliance with the Act's requirements (Results Act: Comments on Justice's August Draft Strategic Plan, GAO/T-GGD-97-184). Justice's formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the issued strategic plan and compared it with the observations in our July 11 report and September 30 testimony. On October 7, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. In response to comments on its February strategic plan, Justice revised its plan in August. The revised plan addressed many of the issues we raised in our July report. In our July report, we pointed out that of the six elements required by the Act, three—the relationship between long-term goals and the annual performance plans, the key external factors that could affect Justice's ability to meet its goals, and a program evaluation component—were not specifically identified in the draft plan. The remaining three elements—the mission statement, goals and objectives, and strategies to achieve those goals and objectives—were discussed, but each had weaknesses. The most important of these were that the mission statement did not cover a major statutory responsibility, goals and objectives were not consistently as results oriented or measurable as they could have been, and strategies were not fully developed.
In addition, we observed that the February draft plan could be more useful to Justice, Congress, and other stakeholders if it provided a more explicit discussion of (1) crosscutting activities, (2) major management challenges, and (3) Justice's capacity to provide reliable information to manage its programs or determine if it is achieving its strategic goals. Recognizing crosscutting issues and the coordination required to address them is particularly important for Justice because, as the federal government's attorney, it helps the various federal law enforcement agencies enforce the law in federal courts. Explicit consideration of major management challenges, including the capacity to produce reliable information for management decisionmaking, is important because these challenges could affect Justice's ability to develop and meet its goals. In our September testimony, we pointed out that Justice's August draft plan discussed, to some degree, five of the six required elements—a mission statement, goals and objectives, key external factors, a program evaluation component, and strategies to achieve the goals and objectives. The August draft plan did not include a required discussion on the relationship between Justice's long-term goals and objectives and its annual performance plans. In addition, we noted that the August draft plan could have better addressed how Justice plans to (1) coordinate with other federal, state, and local agencies that perform similar law enforcement functions, such as the Defense and State Departments with regard to counter-terrorism; (2) address the many management challenges it faces in carrying out its mission, such as internal control and accounting problems; and (3) increase its capacity to provide performance information for assessing its progress in meeting the goals and objectives over the next 5 years. Justice's issued strategic plan incorporated several improvements that make it more responsive to the requirements of the Results Act than were the February and August draft plans. Its September plan discusses each of the Act's required elements. In particular, Justice added in its August plan a discussion of eight key external factors that could significantly affect achievement of its long-term goals, information that is helpful to Congress in its consideration of Justice's plan. However, information about alternatives that could reduce the potential impact of these external factors was not provided. In addition, Justice's August strategic plan included a discussion of the role program evaluation is to play in Justice's strategic planning efforts. Justice recognized that it has done few formal evaluations of its programs in the past, but the plan acknowledged that sound program evaluation is an essential aspect of achieving the purposes of the Act and stated that Justice plans to examine its evaluation approach to better align evaluations with strategic planning efforts. Further, Justice pointed out that it will continue to improve its efforts to benefit from our evaluations. This element of the plan could be more helpful to decisionmakers if it identified future planned evaluations and their general scope and time frames, as encouraged by OMB strategic plan guidance.
Justice's issued plan also better addresses the management challenges it faces. For example, the plan was expanded to address (1) its process for managing its information technology investments, steps taken to provide security over its information systems, and its strategy to ensure that computer systems accommodate dates beyond the year 2000; and (2) aspects of its internal control processes that identify management weaknesses and vulnerabilities. Justice also added a discussion on "accountability," debt collection, and asset forfeiture. However, the plan would be more helpful if it included a discussion of corrective actions Justice has planned for significant internally and externally identified management weaknesses, as well as how it plans to monitor the implementation of such actions. In addition, the plan does not address how Justice will correct significant problems identified during the Inspector General's fiscal year 1996 financial statement audits, such as inadequate safeguarding and accounting for physical assets and weaknesses in the internal controls over data processing operations. Several elements of Justice's issued strategic plan could be further improved to better meet the purposes of the Results Act. In particular, some of the plan's goals and objectives still were not stated in as results oriented or measurable a form as they could be, and some of the strategies to achieve the goals and objectives did not clearly explain how and to what extent Justice programs would contribute to achieving the goals, how its resources are to be utilized to achieve the goals, or how Justice plans to assess progress in meeting those goals. For example, Justice has a goal to maximize deterrents to unlawful immigration by reducing the incentives of unauthorized employment and entitlements, but the plan does not clearly explain how Justice's programs would contribute to achieving this goal. It is likewise unclear how Justice will be able to determine the effect of its efforts to deter unlawful immigration, compared with the effect of changes in the economic and political conditions in countries from which illegal aliens originate. In addition, Justice's mission statement, which we observed in July as seeming to be incomplete because it omitted one of its largest budget items—the detention and incarceration function—was not changed. Regarding the relationship between its long-term goals and annual performance plans, Justice's September plan states that its annual performance plan will include (1) a Department-level summary that reflects high-level and crosscutting annual goals and indicators and (2) more detailed component and appropriation-specific performance information. Justice added that goals and indicators will be supportive of, and derived from, those set forth in the strategic plan. Recognizing that the linkage between the strategic plan and the annual performance plan is a critical element of the Act, Justice said that it has revised its internal processes to ensure that the strategic plan serves as the foundation for the development of annual budgets and performance plans. In our opinion, Justice's September strategic plan could better meet the purposes of the Act by discussing, as contained in OMB guidance, (1) the type, nature, and scope of the performance goals to be included in its performance plan; (2) the relation between the performance goals and the general goals and objectives; and (3) the relevance and use of performance goals in helping determine the achievement of general goals and objectives. This information is important because the linkage between the goals and objectives and annual performance plan provides a basis for judging whether an agency is making progress toward achieving its long-term goals, not just its annual goals, which would be reflected in the annual performance plan.
We observed that the February and August draft plans did not include a discussion of how Justice's activities would be coordinated with other related crosscutting law enforcement activities. The issued strategic plan includes a goal to coordinate and integrate law enforcement activities wherever possible and to cooperate fully with other federal agencies. However, the plan could better serve the purposes of the Results Act by discussing how Justice plans to implement that goal and to measure and assess inputs, outputs, and outcomes to achieve crosscutting law enforcement goals. On October 14, 1997, we obtained oral comments from Justice officials, including the Director, Management and Planning Staff, on a draft of our analysis and observations of Justice's issued strategic plan. They said that our analysis and observations fairly represent Justice's strategic plan. Norman J. Rabkin, Director, Administration of Justice Issues; General Government Division, (202) 512-8777. On July 11, 1997, we issued a report on the Department of Labor's draft strategic plan (The Results Act: Observations on Department of Labor's June 1997 Draft Strategic Plan, GAO/HEHS-97-172R). Labor formally submitted its plan to OMB and Congress on September 30, 1997. As requested, we have reviewed this strategic plan and compared it with the observations in our earlier report. On October 16, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. To meet the Results Act requirements for a strategic plan, Labor submitted individual plans for 15 of its 24 component offices or subunits, which it supplemented with a "strategic plan overview." One of Labor's offices—the Employment Standards Administration (ESA)—did not submit a plan itself but instead submitted plans for the four subunits under its responsibility. While OMB Circular No. A-11 provides agencies discretion to submit strategic plans that cover only major functions or operations, Labor provided no indication as to why its other offices or subunits did not submit plans. We reported that neither the overview nor the component plans fully met the Act's requirements or OMB guidance. For example, the overview's mission statement was not sufficiently descriptive of Labor's basic purpose, and the overview did not include elements identified by the Act, such as strategies to achieve goals or evaluations used to establish goals. Further, a majority of the component plans did not include all of the elements required by the Act, such as the strategies to achieve the goals or key factors affecting goal attainment. We noted that the overview would be more useful if it included all of the elements identified by the Act and, regarding the mission statement specifically, if it communicated more about Labor's purpose, referring to such basic responsibilities as job skills development, job placement, and worker protection. In addition, the overview lacked results-oriented Department-wide goals, presenting instead three broad programmatic categories that were not developed into goals. We observed that Department-wide goals enunciated by the Secretary in recent congressional testimony could serve as the basis from which to develop Department-wide goals that are results oriented and set out the long-term programmatic policy and management goals of the agency. We found that the goals in the overview and in the component plans were generally consistent with Labor's statutory responsibilities, and the plans generally covered all of Labor's major functions and operations.
Regarding crosscutting issues, we reported that the strategic overview recognized the roles of other organizations in carrying out particular functions and the importance of establishing partnerships with these organizations to carry out such functions. However, we indicated that the overview could be improved if it recognized the importance and number of other participants—namely, the other 14 federal agencies—involved in one major area of responsibility—job training. In so doing, Labor could discuss how its programs fit in with a broader national job training strategy. We also found that the Labor officials responsible for preparing the plan and monitoring its progress had not consulted with congressional staff regarding the overview or the component plans. Finally, we reported that the strategic overview highlighted the need for information and data systems to ensure timely and sound evaluations to assess agency progress in meeting its goals. However, the overview did not describe Labor's strategy for ensuring that this kind of information was collected and used to assess progress and performance. The overview also did not discuss how Labor planned to use information technology to achieve its mission, goals, and objectives, or to improve performance and reduce costs. We reported that the plan could be improved by including a discussion of Labor's information technology investment process, including how Labor planned to address the Year 2000 problem, or how Labor planned to comply with the Clinger-Cohen Act of 1996, which calls for agencies to implement modern technology management to improve performance and meet strategic goals. In Labor's September submission, plans were added for components that had not prepared draft plans previously (including 1 that has overall management responsibility for implementing the Results Act), and the 4 plans originally submitted by ESA's subunits were consolidated into 1 ESA-level plan, for a total of 15 component-level plans. In its overview, Labor provided a rationale for the components included, noting that ". . . strategic plans have only been required of the 15 program and management agencies of the Department. Several staff offices whose functions are in direct support of the Secretary's office are not included." Labor's strategic overview and all but 1 of the 15 component unit plans include all 6 elements. Further, the overview's mission statement now provides a more complete description of Labor's basic purpose: to "foster and promote the welfare of job seekers, wage earners, and retirees of the United States by improving their working conditions, advancing their opportunities for profitable employment, and protecting their retirement investments." Moreover, strategies to achieve goals and external factors that could affect the achievement of goals are now discussed alongside the individual goals, which facilitates the understanding of how particular strategies and external factors are linked to each goal. The overview also appears to address Labor's traditionally decentralized management approach, which has posed numerous management challenges for Labor in the past. For example, the overview now contains five clearly articulated Department-wide goals that are generally results oriented and that are consistent with those recently enunciated by the Secretary. The overview also includes a sixth Department-wide goal of maintaining a departmental strategic management process, which may indicate a renewed emphasis by Labor on developing a more strategic approach to departmental management.
Other indications of this renewed approach to Department-wide leadership are evident in the similar organizational style of each of the component plans and the clear linkage between the strategic overview and the plans. For example, in the overview, the strategic goals of each of the units/offices are highlighted under the appropriate Department-wide goal; and in each of the plans for the offices/units, the office/unit strategic goals are categorized according to the Department-wide goal to which they correspond. Further, the overview now includes a discussion of the relationship between the goals in the annual performance plan and in the strategic plan. The overview and component plans we reviewed continue to describe all of Labor's major functions, and the goals are consistent with relevant statutes. Although Labor has made significant improvements to its strategic plan overview, some sections in the overview may benefit from further elaboration. For example, the overview does not detail how information from evaluations was used to develop the plan, nor does it specify how future evaluations will help assess Labor's success in achieving its stated goals. Instead, the overview discusses the fact that evaluations in the regulatory agencies have lagged behind those in the employment and training area. In that respect, it is even more important that the overview provide schedules or time lines for future evaluations, identify what evaluations will be done, and highlight how future program evaluations will be used to improve performance. Along those lines, we had earlier reported that the experiences of Labor's Occupational Safety and Health Administration (OSHA) as a Results Act pilot could provide insight into how evaluations can be managed. OSHA has been involved in a number of activities geared toward making the management improvements intended by the Results Act. Although it is not a requirement of the strategic planning process, we continue to believe that a discussion in Labor's overview related to the experiences gained from the OSHA pilot project—including lessons learned and whether best practices or other lessons could be applied Department-wide or in units with similar functions—may prove helpful. Labor could also improve the overview by continuing to enhance the discussion of crosscutting issues, such as coordination with others who have similar roles for particular functions. While the overview does make reference to a few other organizations with responsibilities in this area and notes that Labor will work with them, there is no discussion of the specific strategies Labor will use to realize efficiencies through coordination and possible consolidation of job training programs to achieve a more efficient employment training system. The overview could similarly benefit from additional discussion on how Labor's safety and health agencies are working together to share information on efficient enforcement and public education strategies or measurement tools. The overview could also benefit from a more elaborate discussion of the strategies Labor will use to ensure that its information technology allows it to achieve its goals. While the overview continues to cite the vision of expanded use of technology across Labor and its component units, the plan does not adequately discuss the inclusion of a framework—sometimes called a systems architecture—that will serve as a blueprint for developing and maintaining integrated information systems.
Such a framework would help ensure that the data being collected and maintained within Labor are structured and stored in a manner that makes them consistent, accessible, understandable, and useful. The overview also still does not include a clear, integrated, measurable Year 2000 strategy, which may be needed to adequately consider the multitude of system and information interfaces inside and outside of Labor that must be addressed prior to the millennium change. In an October 14, 1997, letter from Labor's Acting Assistant Secretary for Administration and Management, Labor thanked us for acknowledging the substantial progress in its plan. The letter noted that there is more work to be done and that Labor will address the concerns we raised about the use of evaluations in developing plans and evaluating results, crosscutting issues, and internal coordination among safety and health agencies during the next revision. Labor also noted that it will expand its presentation in the strategic overview to provide additional information on its information technology. However, Labor noted that a more detailed discussion of its systems architecture and its Year 2000 compliance strategy is included in Labor's separate Information Technology Strategic Plan and other documents. Additionally, it said its approach for addressing information technology in the overview was to describe the linkage and importance of information technology in supporting program agencies and achieving goals. While this approach is reasonable, and our preliminary review of the Information Technology Strategic Plan indicates that it tries to address many of the issues we outlined previously, the strategic overview could still benefit from clearer cross-referencing and linkage between the two plans. Additionally, the Information Technology Strategic Plan may benefit from clearer linkage between the components' activities and Labor's activities as a whole to enhance information technology. Carlotta C. Joyner, Director, Education and Employment Issues; Health, Education, and Human Services Division, (202) 512-7014. On July 18, 1997, we issued a report on the Department of State's draft strategic plan (The Results Act: Observations on the Department of State's May 1997 Draft Strategic Plan, GAO/NSIAD-97-198R). State issued its formal strategic plan and submitted it to OMB and Congress on September 27, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 18 report. On October 20, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized in the following sections. State's draft strategic plan was useful in setting and clarifying U.S. foreign policy goals, but it did not contain sufficient information to fully achieve the purposes of the Results Act and was incomplete in several important respects. In particular, the draft plan omitted two elements required by the Act: (1) components identifying the relationship between long-term goals/objectives and annual performance goals and (2) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future program evaluations. To fully achieve the purposes of the Act, State's draft plan needed to be more descriptive and consistent with OMB guidance.
For example, the plan contained several sections labeled as strategies for specific goals, but it did not specifically identify the actions and resources needed to meet the plan's goals or include a schedule for taking significant actions. State's strategies often focused on describing the Department's role in various areas instead of describing how State's programs and operations would help achieve the goals. We observed that State's draft plan did not specifically discuss the likelihood that other agencies might have functions similar to or possibly duplicative of State's role that could affect the formulation and implementation of strategies. We also noted that the draft plan did not adequately address long-standing management challenges, such as deficiencies in the Department's financial and administrative systems. These issues were discussed separately in a diplomatic readiness section of the strategic plan. We said that the draft plan would be strengthened if it better described how meeting these management challenges could affect achievement of the plan's strategic goals. Furthermore, we noted that the draft plan would have been enhanced if it had included a discussion of how the proposed consolidation of State, U.S. Information Agency, and Arms Control and Disarmament Agency might affect goals, strategies, and resource requirements. In addition, we suggested that State's plan would be easier to use if it contained a clearly labeled agency mission statement and included a discussion of the Department's key legal authorities. We also observed that State's capacity to provide reliable information about its operations and program performance was questionable because of long-standing deficiencies in the Department's information and financial accounting systems. Successfully resolving a number of these material deficiencies in the Department's financial and information management systems will be critical to implementing the plan. State's September 1997 strategic plan incorporated some improvements that help make it more responsive to the requirements of the Results Act. With respect to the first element required by the Results Act that was missing in the draft plan, State introduced a separate section describing the relationship between the plan's strategic goals and the goals and objectives in the Department's performance plan. It used only one example to describe this relationship, discussing the linkages between operational and performance goals for achieving the strategic goal of eliminating the threat from weapons of mass destruction or destabilizing conventional arms. This example is helpful, but it would be more useful if it clearly described how the resource and performance measurement components will be handled in the Department's annual program planning cycle. With respect to the second missing element, the plan now includes a section dealing with program evaluations. However, instead of including a description of the program evaluations used in developing goals and objectives, as required by the Results Act, the new section is largely a discussion of State's rationale for not fully meeting this requirement. The plan could also be improved by explaining how resources from various sources, and managed by different agencies, are established in the international affairs function of the President's budget—the "150" account. State's strategic plan focused on the Department's mission and role in carrying out 16 strategic foreign policy goals.
A few modifications were made in the strategic goals since our July review (for example, a goal of promoting broad-based economic growth in developing and transitional economies was added, and a goal of improving the well-being of the world's poor was dropped), but the plan still did not consistently explain what results are expected from the Department's major functions or when to expect the results. Some changes in the Department's strategies were also made, but it remained unclear how some of the goals are to be achieved or what level of resources is required. State's plan specifically acknowledged that more needs to be done to identify agencies' capabilities and the resources needed to achieve the goals. The plan's section on program evaluations is essentially an explanation of why the plan does not fully meet the requirements of the Results Act. The plan pointed out that no process existed for systematic evaluation of the foreign affairs goals. As a result, the plan did not identify any evaluations used for establishing or revising the strategic goals or include a schedule for future evaluations. It is State's position that it should not be held strictly accountable for this and other requirements of the Act because of the complexities of foreign policy, the scope of the Department's responsibilities that cover most other agencies, and the complexities of managing overseas missions. We recognize that program evaluations in the foreign affairs area are difficult, but we believe that an effective evaluation process will be critical to determining the extent to which State is successfully achieving, or helping to achieve, its goals and what actions may be necessary to help improve performance. The plan briefly discussed the planned consolidation of the foreign affairs agencies, which would affect the Department's arms control, public diplomacy, global programs, and international security functions. The plan also noted that the incorporation of management goals for the integration of other foreign affairs agencies awaits decisions concerning how the reorganization will proceed. The plan's section on management issues emphasized the importance of the strategies for achieving diplomatic readiness but noted that this represents a first effort to set strategic goals for the Department's major management responsibilities. It still did not address the serious management problems related to cost control, overseas embassy management, and financial management identified in our prior work and discussed in our July report. The plan's discussion of data capacity did not specifically address the serious deficiencies in State's financial accounting and information systems, but it noted in more general terms that it will take several years to develop performance measures and related databases in order to provide sufficient information on achievement of the goals. The Chief Financial Officers Act requires agencies to have accounting and financial management systems that provide for the development of cost information and the systematic measurement of performance. Currently, State does not have a true cost accounting system, and, as a result, reliable cost information by function cannot be provided. In addition to developing the data capacity and information systems essential for measuring progress, State's strategic plan also acknowledged that much more remains to be done to adequately develop the Department's long-term strategic planning process.
State's plan identified several long-term actions as critical to the process, including the development of an agency performance plan, mission performance plans for each overseas embassy that link annual goals with long-term strategic goals, performance measures, and a process for conducting performance evaluations. State's strategic plan cautioned that it will take substantial effort to develop a fully refined set of performance measures and a performance evaluation process. In discussing its efforts to develop an integrated planning process, State noted that a process linking overseas mission performance plans to the Department's strategic and diplomatic readiness goals will first be in place for the Department's fiscal year 2000 budget submission. As State's strategic plan evolves over time, other matters will clearly require attention. For example, State's performance plan will need to be definitive to compensate for the continued lack of specificity in State's strategies concerning how the Department will achieve individual long-term goals and the level of resources needed for State's activities. The overseas mission performance plans will also need to be of high quality to be successfully integrated into State's master plan, in view of (1) the billions of dollars in resources associated with State's overseas operations and (2) the historical weaknesses we have identified in overseas embassy management. These weaknesses have included insufficient staff training, poor inventory controls, and questionable procurement practices. State's limited use of evaluations in setting and refining its long-term goals and the lack of a specific schedule for future evaluations are other areas that deserve attention. We believe that several areas may require evaluation to ensure that the Department's strategic planning process is sound, including the adequacy of State's performance and program planning processes, the extent to which State and other agencies' functions may or may not be duplicative, and the adequacy of State's overseas staffing decisions based on design and implementation of its new staffing model. We obtained oral comments from State officials responsible for the Department's strategic planning efforts. They generally agreed with our description of the progress State made in its strategic plan and the issues that require further attention. However, they noted that in judging the quality of the plan, it is important to keep in mind the complexities of strategic planning in the foreign affairs area. These complexities include the scope of the strategic goals and the national interests involved, the numerous government agencies sharing responsibilities, and the lack of proven performance measures in key functional areas. Department officials also recognized that the strategic planning process has not yet paid sufficient attention to management and cost issues. However, they expect that the diplomatic readiness section of the plan will be revised in the spring of 1998, along with other parts of the plan, based on a series of seminars and stakeholder discussions scheduled to begin in November 1997. Benjamin F. Nelson, Director, International Relations and Trade Issues; National Security and International Affairs Division, (202) 512-4128. On July 30, 1997, we issued a report on the Department of Transportation's (DOT) draft strategic plan (Results Act: Observations on the Department of Transportation's Draft Strategic Plan, GAO/RCED-97-208R).
DOT's formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the September plan and compared it with the observations in our July report. On October 16, 1997, we briefed your staffs on our further observations on the strategic plan. Our key points are summarized herein. DOT's draft strategic plan did not fulfill all of the requirements of the Results Act. The draft plan met the Results Act's requirements for a mission statement, long-term goals, and a description of program evaluations; however, each of these elements had weaknesses that could be addressed. The draft plan did not meet the Act's requirements that it describe strategies for achieving goals, the linkage between long-term goals and annual performance goals, and the key external factors that could significantly affect DOT's ability to achieve its goals. Overall, the draft was so general that it did not clearly identify the Department's priorities. We reported that the quality of the draft plan could have been improved throughout by adhering more closely to OMB's guidance for preparing strategic plans and including more detailed information. In addition, the draft plan did not (1) show evidence of coordination with other agencies that have programs and activities that are crosscutting or similar to DOT's or (2) adequately address major management challenges and high-risk areas that we and others previously identified. We also observed that DOT's ability to produce reliable performance information was uncertain because the draft plan was unclear about what information would be needed to measure performance. Finally, the draft plan reflected the Department's key statutory authorities. DOT's September plan improves the elements that were contained in the draft; for example, the plan now identifies the programs and activities that contribute significantly to each goal. In addition, the discussion of program evaluations has been revised to include a table that lists future evaluations and, for each, describes the scope, methodology, key issues to be addressed, schedule, and relationship to the plan's long-term goals. Moreover, the September plan meets two of three additional requirements of the Results Act that the draft plan did not meet. First, the revised plan now meets the Act's requirements by discussing how the annual performance goals, which are being developed for the fiscal year 1999 budget submission, will link to DOT's mission and long-term goals. The September plan also includes a table for each long-term goal showing examples of possible indicators that may be used to measure annual performance and the availability of data. Second, the discussion of those key external factors that could significantly affect DOT's ability to achieve its goals has been rewritten and expanded to meet the Act's requirements. The September plan identifies new factors—such as legislation to address long-term financing for the Federal Aviation Administration (FAA) and Amtrak—that will affect DOT's ability to achieve its long-term goals. The plan also summarizes how economic, social, political, environmental quality, national defense and security, and technology trends affect each long-term goal; provides a few examples of activities needed to mitigate the effect of these trends; and explains each factor in greater detail in both a separate section and an appendix. The September plan also acknowledges major management challenges and high-risk areas that we and others have identified, including the need to correct computer security weaknesses and to change computer systems to accommodate dates beyond the year 1999, that is, to address the Year 2000 problem. Finally, the September plan continues to reflect DOT's key statutory authorities in an appendix and includes minor clarifications.
DOT's September plan can be further improved in two areas. First, the plan's discussion of strategies for achieving its long-term goals has improved in some areas but still does not meet all requirements of the Results Act. The revised plan describes corporate management strategies for implementing the plan that cut across the Department. These strategies provide useful information, for example, in explaining how long-term goals will be communicated to employees and how personnel will be assigned accountability for achieving the goals. However, the revised plan still does not describe the operational processes, skills, technology, and resources required to meet the long-term goals, as required by the Results Act. The general discussion of corporate management strategies does not meet these requirements, which should be addressed for each goal. Furthermore, the plan could be improved by following OMB's guidance on strategic plans and providing additional detail when achieving a goal is predicated on a significant change in resource or technology levels. For example, we have reported that successful implementation of certain aviation security measures mentioned in the plan is contingent upon deciding who will finance the security improvements and developing the needed technology. In addition, the plan could be improved by following OMB's guidance on including time frames for initiating or completing significant actions. The September plan contains time frames for some significant actions, such as addressing the Year 2000 problem and obtaining reliable financial statements by fiscal year 2000; but it does not include time frames for other significant actions, such as completing air traffic control modernization and improvements to Amtrak's Northeast Corridor. Second, although the September plan acknowledges the major management challenges and high-risk areas that we and others have identified, it does not demonstrate a firm commitment to resolve these problems through specific strategies. For example, the plan mentions our concerns about the need to improve the oversight of highway and transit projects, which are continuing to incur cost increases, experience delays, and have difficulties acquiring needed funding commitments. The plan states that these concerns are addressed under corporate management strategies. The strategies, however, provide insufficient details to address the problems with these projects. As another example, the revised plan mentions Amtrak in an appendix but provides too little information to adequately address our concerns about the corporation's very precarious financial condition, which threatens its survival. The plan could be improved by addressing Amtrak's role in a national transportation framework and providing objectives concerning the future of Amtrak and strategies for meeting these objectives. The plan acknowledges the significance of financial management to the achievement of its long-term goals and is generally responsive to specific comments that we made about the draft plan. The revised plan includes a new section that discusses (1) general financial management and (2) the need for and plans to improve financial management of and accountability for the Department's financial resources. However, while the revised plan acknowledges that unreliable accounting (including cost accounting) information exists at the program level, it does not provide specific strategies or timetables for resolving key problems. DOT has added specificity to the plan that greatly improves its overall quality.
However, the plan still takes an "umbrella" approach—it is expansive enough to encompass all of DOT's programs, but it does not describe the contributions from specific modes to implement the plan. The plan refers to the development of a "National Transportation Strategy" with subordinate strategies for air, surface, and maritime elements. This strategy might provide the missing link between the Department-wide goals and the programs throughout DOT. In commenting on a draft of our observations, DOT stated that the September plan describes how the Department will achieve its strategic goals in two ways. First, in discussing each strategic goal, the plan includes a section entitled "How We Will Achieve the Strategic Goal" that describes the processes that DOT will employ to achieve the goal. Second, the Department stated that the plan meets this requirement in a section that describes six overarching management strategies—the "ONE DOT management philosophy," human resources, customer service, resource and technology, information technology, and resource and business process management. We disagree that the sections of the plan mentioned by DOT fulfill the Act's requirements. For the most part, the sections that discuss how DOT will achieve the goals are too general to do so. For example, the plan states that to achieve the mobility goal of ensuring an accessible, efficient transportation system, DOT will improve technical assistance. The plan does not explain the type of technical assistance, who will receive the assistance, or how the assistance will improve mobility. As another example, the plan states that to achieve its economic growth and trade goal, DOT will assess the performance of the transportation system as a whole. The plan does not explain how such an assessment will help the Department achieve this goal. Furthermore, as we mentioned, the plan's corporate management strategies are also too general to meet the Act's requirement, although they do provide useful information in certain areas, such as explaining how the goals will be communicated to employees. These strategies provide a philosophy for the Department to operate under, but not specific steps to achieve the goals. For example, the human resources management strategy states that the Department will "achieve its strategic goals with a workforce that is knowledgeable, flexible, efficient, and resilient." The actions to accomplish this include redesigning human resources programs to "allow DOT to recruit, develop, and deploy a diverse workforce with those 21st Century competencies needed to achieve the DOT's strategic goals." The strategy does not explain what competencies are needed. Finally, DOT commented that achieving its strategic goals is not predicated on a significant change in resource or technological levels. We disagree. As we stated in our observations, successful implementation of certain aviation security measures mentioned under the national security goal is contingent upon deciding who will finance the security improvements and developing the needed technology. Phyllis F. Scheinberg, Associate Director, Transportation Issues; Resources, Community, and Economic Development Division, (202) 512-2834. On July 31, 1997, we issued a report on the Department of the Treasury's draft strategic plan (The Results Act: Observations on the Department of the Treasury's July 1997 Draft Strategic Plan, GAO/GGD-97-162R). Treasury's formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the September 30 strategic plan and compared it with the observations in our July 31 report.
On October 15, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from our July report and October briefing are summarized herein. Treasury's July draft strategic plan was incomplete and did not meet all of the requirements of the Results Act. Of the six elements required by the Act, the Treasury draft plan included four. Of these four elements, two—the mission statement and key factors external to the agency that could significantly affect achievement of the strategic goals and objectives—generally met the Act's requirements, but, as we stated, these could have been strengthened. The information contained in the plan on the two other elements—goals and objectives and the strategies to achieve them—was often too general and vague to be used effectively by Treasury management, Congress, and other stakeholders. We said these two elements could be improved if they were more specific, results oriented, and linked to the plans of Treasury's bureaus and major program offices. Two elements—the relationship between long-term goals and objectives and annual performance goals, and a description of how program evaluations were used to establish or revise strategic goals—were missing from the draft plan we reviewed. The draft strategic plan did not adequately address crosscutting issues and made no mention of whether Treasury had coordinated with other federal departments and agencies that shared related functions. In addition, although a major part of the mission statement was focused on management, the draft plan did not adequately address some of the critical management problems facing Treasury that could affect its ability to achieve its strategic goals and objectives. Finally, we said that Treasury's capacity to provide reliable information on strategic and program performance was questionable. Treasury revised the four elements that were in its draft plan so that they better meet the requirements of the Results Act. In addition, the plan better addresses Treasury's critical management problems and includes more information on how the Department plans to coordinate with other agencies on crosscutting issues. Treasury's plan is now presented as an overview with more detailed information provided in the plans of its 17 bureaus and major program offices. Taken together, the 18 plans comprise Treasury's strategic plan. Treasury's goals and objectives are now linked with those of its bureaus and major program offices. Consequently, the plan provides a clearer discussion of which bureaus and program offices have responsibility for carrying out the goals and objectives. Treasury's plan also states that details on resources needed to implement strategies are to be included in bureau and program office strategic plans as well as the Department's budget submission. Because the plan also links the goals and objectives in the overview plan to the annual performance goals and measures in the strategic plans of the bureaus and program offices, it provides information on one of the elements required by the Results Act that was missing from the draft plan. Treasury has also made several other improvements to its plan. A section was added describing how program evaluations were used to develop the plan. This section also cites examples of planned evaluations that Treasury is to use as input for future plans. The plan includes more information aimed at addressing the critical management problems the Department faces.
For example, an objective has been added to the plan to address the Year 2000 computer problems. Also, throughout its plan, Treasury addresses crosscutting issues by pointing out where coordination is required with other agencies. Nonetheless, Treasury's plan could be further improved by making more complete the linkage between its goals and objectives and those of its bureaus and program offices. Treasury's plan could also be improved if performance goals were provided for each objective and if some of these goals were more results oriented. Treasury could also improve its plan by more explicitly addressing its critical management problems. Finally, Treasury's plan could better address issues relating to its capacity to provide the types of reliable data needed to measure performance and assess progress in meeting its goals and objectives. Treasury's plan could be improved if the linkage between Treasury's goals and objectives and those of its bureaus and program offices were more complete. Specifically, we found several gaps in the linkage between Treasury's plan and the plans of its bureaus and components. For example, Treasury has an objective to "ensure strong financial management of Treasury accounts." However, only six bureaus or program offices have corresponding objectives. Notably, neither the Financial Management Service, which is responsible for managing the government's finances, nor IRS, the government's primary revenue collector, has a corresponding objective. Treasury's plan could also be improved if more complete and detailed information on strategies for achieving goals and objectives were included in the plans of its bureaus and program offices. Treasury's plan contains general information on some strategies that are needed to achieve its goals and objectives. It also states that more detailed information regarding resource needs is to be included in its budget submission and the plans of its bureaus and program offices. However, we found several instances where a Treasury objective was linked to a bureau objective, but the bureau plan contained no corresponding strategy. For example, Treasury has three law enforcement objectives—to reduce counterfeiting, money laundering, and drug smuggling—where IRS has a role. However, IRS' plan contains no specific strategy related to these three objectives. Also, where strategies were found in the bureau plans, they could be improved if more detailed information, such as technological and resource needs, were included. For example, IRS lists several strategies, including expanding nationwide access to taxpayer information on-line and updating taxpayer information daily, to achieve its objective to "improve customer service." However, IRS' plan does not discuss the resources needed to carry out these strategies. Performance goals also are not provided for every objective. For example, Treasury's plan has an objective to "improve capacity to recruit, develop, and retain high-caliber employees." The plan lists six bureaus and program offices that have related objectives, but only one, the U.S. Mint, has a related performance goal. Likewise, Treasury could enhance its plan by making its performance measures more results oriented. For example, Customs' strategic plan includes a strategy to prevent drug smuggling whose performance measures (the number of arrests, seizures, and convictions, for example) are output oriented. The plan could be improved if more results-oriented measures, focusing on lowered drug smuggling rates, were developed in support of Customs' strategy to prevent drug smuggling.
Treasury officials stated that they will attempt to develop results-oriented measures whenever possible, but that performance data may be difficult to collect in some cases, and output measures may be the best data available, at least for the near term. Furthermore, they felt that a balance of output-oriented and results-oriented measures may be desirable since the purpose of performance measures is to determine an agency's effect on results. Nonetheless, Treasury's plan could be further improved if results-oriented measures were developed to complement output measures wherever possible. As we observed in our review of Treasury's draft strategic plan, the current plan could also be improved if it explicitly addressed all critical management problems. Although the plan states that it addresses all our high-risk areas and other critical management issues, its discussion of Treasury's critical management problems is not always explicit. For example, IRS' accounts receivable—a high-risk area—is not addressed specifically in Treasury's or IRS' plan. Both plans contain goals related to increasing compliance with the tax laws and improving customer service, which indirectly could address IRS' accounts receivable. However, as we previously reported, Treasury's strategic plan could be more useful to Congress and other stakeholders if it more clearly presented how Treasury will address its critical management problems and how this will facilitate the Department's achievement of its strategic goals and objectives. Finally, regarding Treasury's capacity to provide reliable performance data, the plan indicates that performance measures are contained within the plans of the bureaus and program offices. However, the Treasury plan does not address the difficulties of developing measures and collecting reliable data for some important areas of performance. For example, IRS and the Bureau of Alcohol, Tobacco and Firearms both use taxpayer burden as a performance indicator, but neither agency has adequate measures or data for tracking taxpayer burden. We recognize that developing measures of some areas of Treasury's performance, such as taxpayer burden, will be very challenging, but the Treasury plan does not discuss how the Department plans to deal with these challenges. On October 30, 1997, we obtained oral comments from Treasury officials, including the Director of the Office of Strategic Planning, on a draft of our analysis of Treasury's strategic plan. The officials generally agreed with our observations but suggested several changes to clarify areas where Treasury has improved its plan. They said that it was important to emphasize that their July 1997 draft strategic plan was a "working document" issued as required for consultation purposes with Congress and other stakeholders. As a result of the consultation process, they said that the plan was revised to address the concerns of Congress and other stakeholders, including GAO and OMB. Also, while the officials agreed that the current plan could be further improved in several areas, they said that the plan meets the Results Act's requirements in that it contains all six required elements. The officials also reiterated that the Department should be recognized for its Results Act implementation efforts. In particular, the officials told us that Treasury has reformatted its budget to serve as the performance plan required by the Results Act for the past 2 fiscal years. Also, last year, the Department issued its performance report for fiscal year 1996 as part of its budget submission—ahead of the Act's requirements.
They said that Treasury intends to better align its performance plan with the goals and objectives in its strategic plan and to submit the plan as part of its fiscal year 1999 budget request, scheduled to be released in February 1998. The officials added that Treasury will continue to attempt to develop performance measures that are results oriented and for which reliable data exist. Jim White, Associate Director, Tax Policy and Administration Issues; General Government Division, (202) 512-9110. On July 11, 1997, we issued a report with our observations on the Department of Veterans Affairs' (VA) draft strategic plan, dated June 9, 1997 (The Results Act: Observations on VA's June 1997 Draft Strategic Plan, GAO/HEHS-97-174R). Following this review, VA issued two additional drafts, dated August 1 and August 15, 1997. The August 15 draft was sent by VA to OMB for review and interagency coordination. On September 18, 1997, we testified before the Subcommittee on Oversight and Investigations, House Committee on Veterans' Affairs, on the improvements in VA's August 15 draft and the challenges remaining for VA in implementing the Results Act. VA submitted its formally issued plan to Congress and OMB on September 25, 1997. On October 14, 1997, we briefed your staffs on the observations we made in our September 18 testimony and further observations based on our review of the formally issued strategic plan. We found that VA's June 1997 draft strategic plan represented an inconsistent and incomplete application of the six key components of a strategic plan as required under the Results Act. Also, the draft plan was somewhat confusing and difficult to follow, mainly because it had several different levels of goals, objectives, and strategies. In addition, the draft plan had not clearly identified needs for VA to coordinate and share information with other federal agencies. In terms of the key strategic planning elements, VA's draft plan (1) focused more on the process of providing benefits and services than on results of VA programs for veterans and their families; (2) lacked objectives and strategies for achieving some of VA's major strategic goals—in particular, for veterans' benefits programs; (3) provided only limited discussions of external factors beyond the control of VA that could affect achievement of strategic goals; and (4) was not based on formal program evaluations. VA officials acknowledged that these elements still need to be developed. The June 1997 draft included plans to establish a schedule of evaluations for VA's major programs. These evaluations, in turn, would lead to development of results-oriented strategic goals. Also, the draft included plans to identify coordination efforts with other federal agencies and to develop communication mechanisms with them. VA made significant progress in making the strategic plan clearer, more complete, and more results oriented. Instead of presenting four overall goals, three of which were process oriented, VA has reorganized its draft strategic plan into two sections. The first section, entitled "Honor, Care, and Compensate Veterans in Recognition of Their Sacrifices for America," is intended to incorporate VA's results-oriented strategic goals. The second section, entitled "Management Strategies," incorporates the three other general goals, related to customer service, workforce development, and taxpayer return on investment. VA believes that the process-oriented portions of the plan are important as a guide to VA's management.
We agree, as long as they are integrated with the plan's primary focus on results. In addition, VA filled significant gaps in the discussions of strategic goals. The formally issued plan includes strategic goals covering all of its major programs and includes objectives, strategies, and performance goals supporting the strategic goals. VA's strategic plan still needs improvement in four major areas: (1) development of results-oriented goals, (2) descriptions of how the goals are to be achieved, (3) discussion of external factors, and (4) discussion of coordination efforts with other agencies. Until VA makes improvements in these areas, its strategic plan will be incomplete and will not fully comply with the strategic planning requirements of the Results Act. Perhaps the most significant challenge for VA is to develop results-oriented goals for its major programs, particularly for benefit programs. For some major VA programs, the strategic plan's goals are placeholders for results-oriented goals that have not yet been developed. For example, the general goals for four of five major benefit program areas—compensation and pensions, education, vocational rehabilitation, and housing credit assistance—are stated in terms of ensuring that VA is meeting the needs of veterans and their families. The objectives supporting VA's general goal for its compensation and pension area are to (1) evaluate compensation and pension programs to determine their effectiveness in meeting the needs of veterans and their beneficiaries and (2) modify these programs, as appropriate. We recognize that defining program results is difficult for programs where congressional statements of the program purposes and expected results are vague or nonexistent. This is an area where VA and Congress can make progress in further clarifying program purposes and expected results. Once VA has developed strategic goals focused on results, it can develop objectives and strategies for achieving the goals. Another remaining challenge for VA is to better integrate discussions of external factors that could affect its strategic planning. While VA added discussions of the implications of demographic changes among veterans, they are not linked to specific goals in the plan. For example, VA noted the impact of increased veteran death rates on demands for burials in VA and state veterans' cemeteries. However, this is not linked to VA's performance goals to complete specific numbers of cemetery construction and land acquisition projects by fiscal year 2002. Discussions of external factors were often limited to whether Congress would appropriate sufficient funds or make substantive legislative changes. Assessments of factors outside VA's control, such as economic, social, and demographic changes, are also important in setting VA's goals and in assessing VA's progress in meeting them. The other remaining challenge for VA is to identify areas where it needs to coordinate and share information with other federal agencies, as well as develop coordination plans. VA's strategic plan identifies this need and includes a goal to (1) identify overlaps and links with other agencies, (2) enhance communication links with other agencies, and (3) keep state directors of veterans' affairs and other state officials apprised of VA benefits and opportunities for collaboration and coordination. VA had substantial consultations with Congress, and we participated in these consultations at the request of the House and Senate Committees on Veterans' Affairs.
In addition, VA held consultation sessions with representatives of veterans service organizations. VA has attributed improvements in its formally issued strategic plan to these consultations. VA officials have stressed that they consider strategic planning a continuing, long-term process. Based on comments by VA officials and the changes VA has already made to its strategic plan, we expect further improvements over the next few years. In transmitting the formally issued strategic plan to Congress and OMB, VA also provided detailed responses to comments on its draft plans from the House Committee on Veterans’ Affairs, GAO, OMB, veterans service organizations, and VA employee organizations. These comments addressed the observations in both our July 11 letter and September 18 testimony. In general, VA agreed with our observations and indicated areas where it has revised its plan since the June 1997 draft. Also, we sent a draft of this appendix to VA officials, who had no additional substantive comments. Cynthia M. Fagnoni, Associate Director, Veterans’ Affairs and Military Health Care Issues; Health, Education, and Human Services Division, (202) 512-7202. On July 30, 1997, we issued a report on the Environmental Protection Agency’s (EPA) draft strategic plan (Results Act: Observations on EPA’s Draft Strategic Plan, GAO/RCED-97-209R). EPA made revisions to the draft plan and formally submitted it to OMB and Congress on September 30, 1997. As requested, we have reviewed the September 1997 plan and compared the changes with the observations we made in our July 30 report. On October 15, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. EPA’s draft strategic plan contained four of the six elements required by the Results Act: (1) a mission statement, (2) general goals and objectives, (3) approaches or strategies to achieve the goals and objectives, and (4) an identification of key external factors. For these four elements, we noted that the draft plan did not contain all of the details suggested by OMB Circular A-11 and/or that other improvements could be made to increase the plan’s usefulness. The two elements not included in the draft plan were (1) the relationship between the general goals and objectives and the annual performance goals and (2) the program evaluations used in developing the plan and a schedule for future evaluations. Although the draft plan contained a section on program evaluation, the discussion focused on the role of evaluation in assessing future results and provided general criteria for deciding which evaluations to perform in the future. The draft strategic plan did not discuss interagency coordination for crosscutting programs, activities, or functions that are similar to those of other federal agencies. It is important that the plan do so because EPA and other agencies carry out a number of mission-related activities that are crosscutting or similar. Our July 30, 1997, report noted that EPA had begun taking steps to coordinate its plan with other agencies, such as the Department of Energy and the National Aeronautics and Space Administration, to address crosscutting programs and activities. The draft plan included actions to address major management challenges that we had previously identified. However, it provided limited details on how these long-standing problems are to be resolved. 
In its issued plan, however, EPA strengthened the treatment of management problems by setting out several additional actions to resolve them. In the September 30, 1997, version of its strategic plan, EPA added the two elements required by the Results Act that were missing from the draft plan: (1) the relationship of the general goals in the strategic plan to the performance goals to be included in the annual performance plan and (2) the program evaluations used in developing its general goals and objectives. The issued plan also incorporates improvements in other elements required by the Results Act. For example, the section identifying key external factors was expanded to include other factors, such as producer and consumer behavior, that could directly affect the achievement of the plan's goals and objectives. The mission statement was also revised to more closely coincide with the language of the agency's statutes. EPA improved the clarity of its strategic plan in several ways. It added information that explains how the agency's responsibilities for human health and the environment intersect with or support the work of other federal departments or agencies, such as the Departments of the Interior and Health and Human Services. It also added information that better describes the important role of the states as having primary responsibility for implementing many day-to-day environmental program activities, such as issuing permits and monitoring environmental conditions. In addition, EPA added statements to clarify the relationship among certain components of its plan, that is, the goals and objectives, guiding principles, and planned cross-agency program activities. Furthermore, an addendum listing the agency's potential authorities was revised to identify the actual authorities by goal and objective. The information that EPA added on interagency coordination of the plan included the major steps it took to coordinate with other agencies. The plan also identifies a total of 25 federal agencies whose activities relate to EPA's efforts under one or more of its goals. According to the plan, the actions taken to coordinate with other agencies on the plan will help to establish long-term efforts to address any inconsistencies, conflicts, or redundancies among federal programs, as identified in any future strategic and annual performance plans. The Environmental Performance Partnership System was developed by EPA and the states in 1995 as a more collaborative approach to implementing environmental programs. The plan now sets out the objectives of the partnership system and identifies how they will be accomplished. In addition, the plan now makes conducting peer reviews and providing guidance on the science underlying the agency's decisions an objective under the "sound science" goal. As noted in our July 1997 report, the use of peer review is an important means of ensuring the credibility of the scientific and technical documents that the agency uses in its work. Furthermore, EPA added a performance measure to the "effective management" goal dealing with the need to achieve success in implementing the Chief Financial Officers Act and the Government Management Reform Act. This performance measure will help ensure that EPA addresses financial management issues that resulted in the agency's receiving a qualified opinion on its fiscal year 1996 financial statements. Several revisions that we suggested in our previous report have not been made.
Some of these relate to improvements in aspects of the six elements required by the Results Act, while others deal with further improvement in the treatment of management and data problems and the effectiveness of the plan in conveying the agency's priorities. Although the plan provides a general methodology for selecting future program evaluations and describes how they are to be used, it does not identify the general scope and time frames of the evaluations, as encouraged by OMB's guidance. In addition, as in the draft plan, (1) some of the goals and objectives, such as those for effective management, are not stated in quantifiable or measurable terms; (2) staffing skills and resources are generally not discussed in describing how the plan's goals and objectives are to be achieved; and (3) because strategies are generally organized by goal rather than objective, it is not always clear how specific strategies relate to specific objectives. Moreover, future revisions or updates of the plan could further benefit from a more detailed discussion of how other federal agencies and the states are to contribute to individual goals and objectives. The plan could also more fully address the agency's long-standing management and data problems, demonstrating that the agency recognizes the significance of these problems and is committed to resolving them. As the strategic plan evolves over time, EPA could improve its effectiveness in conveying the agency's priorities. The large number of goals and objectives, coupled with the guiding principles and planned cross-agency program actions, continues to make it difficult to discern EPA's priorities. To better convey its priorities, EPA could directly relate the cross-agency programs to specific goals and objectives or further consolidate its goals or objectives. We provided a draft of our observations on EPA's strategic plan for its review and comment. EPA officials, including the Director of the Office of Planning, Analysis, and Accountability, told us that the strategic plan was a product of a broader reform of the agency's planning, budgeting, analysis, and accountability functions and that the consolidation and harmonization of these functions will, over time, bring about many of the improvements that we have suggested. EPA noted that as it refines its approaches to analysis and accountability, the agency will be better able to outline the prospective uses of program evaluation and consequent refinements to its goals, objectives, and performance measures. In addition, EPA said that the agency has taken the "unprecedented" step of submitting its first annual performance plan under the Results Act and its fiscal year 1999 budget request to OMB as a single document. According to EPA, this action has had the effect of transforming budgetary decisions into the structure of strategic goals and objectives, which establishes the kind of direct budget and performance linkages that we have suggested. While we recognize that EPA's annual performance plan will provide detailed information on resources, strategies, goals, and objectives, we believe that the strategic plan would be more complete and useful to congressional and other stakeholders if it provided an overview of resource needs and linked the agency's major strategies to individual goals and objectives. Peter F. Guerrero, Director, Environmental Protection Issues; Resources, Community, and Economic Development Division, (202) 512-6111.
On July 22, 1997, we issued a report on the Federal Emergency Management Agency’s (FEMA) draft strategic plan (Results Act: Observations on the Federal Emergency Management Agency’s Draft Strategic Plan, GAO/RCED-97-204R). FEMA’s strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the September 30 strategic plan and compared it with the observations in our July 22 report. On October 16, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein.

As we reported in July, FEMA’s draft plan indicated that the agency had made good progress toward fulfilling the requirements of the Results Act but needed improvements to fully meet those requirements. For instance, we observed that the draft plan lacked two of the six elements required by the Results Act: (1) the relationship between strategic goals and annual performance goals and (2) the role of program evaluations. The required elements contained in the plan could have better conformed to the Act’s requirements and OMB’s guidance. For example, the plan did not explicitly address the major legislation or executive orders that serve as a basis for FEMA’s mission statement, goals, and strategies or sufficiently deal with financial and information management issues that we and others have previously identified. We also noted that clarifying the linkage between FEMA’s strategic objectives and the strategies intended to achieve them would make the plan more useful to FEMA and to Congress. Furthermore, the draft plan did not address the roles of external stakeholders and how FEMA coordinated with them in developing the plan.

FEMA’s September 30 plan incorporates many improvements that make it more responsive to the requirements of the Results Act. First, it contains explicit language describing the role of program evaluations, a key element missing from the earlier version. For instance, the plan includes for each of FEMA’s three strategic goals a discussion of the type(s) of program evaluation that have been completed and are planned to help assess the accomplishment of those goals. Second, the plan now identifies the major legislation and executive orders that serve as a basis for FEMA’s mission statement, goals, and strategies; this addition will assist Congress and the agency in ensuring that FEMA’s stated goals respond to the entire spectrum of its key statutory authorities.

Following suggestions in our July report, FEMA’s September 30 plan elaborates on certain aspects of two issues that we felt were not fully discussed in the draft plan: (1) management issues that we and others have previously identified and (2) the agency’s capacity to provide reliable information assessing the achievement of its goals and objectives. For example, the plan now reflects consideration of containing disaster program costs and remedying financial management problems. It also now more fully explains how FEMA intends to address the Year 2000 problem, which is an issue that we have identified as high risk across the government. Unlike the earlier version, the September 30 plan discusses external stakeholders involved in the development of the plan. Identifying external stakeholders is important given the many and varied stakeholders that have critical roles in determining the extent to which FEMA’s goals are met. For example, the U.S. Army Corps of Engineers provides assistance for constructing flood control facilities and clearing debris from disaster-ravaged areas.
FEMA could further enhance its strategic plan by clearly identifying federal agencies, or programs within those agencies, with related missions or potentially crosscutting program activities, and how coordination with them shaped FEMA’s plan.

Some of the elements of FEMA’s strategic plan could be further improved to more fully meet the purposes of the Results Act. For example, while the revised plan incorporates language on the relationship between annual goals and the strategic goals and objectives, the plan would benefit from elaboration on this issue. OMB’s guidance suggests that strategic plans include a discussion of the type, the nature, and the scope of the performance goals to be included in the annual performance plans. While FEMA’s September 30 plan states that the agency’s annual performance plans will illustrate how annual performance goals will support the strategic goals and objectives, it lacks an explicit discussion of the type, the nature, and the scope of the performance goals to be included in the plans and their linkages to the strategic goals and objectives. OMB’s guidance also suggests that plans link key external factors to particular goals and objectives and describe how achieving the goals could be influenced by the factors. While the plan contains a section on external factors, it does not link the factors to specific goals or objectives or articulate strategies for mitigating the factors’ effects. Also, in our July report, we suggested that FEMA’s plan could be strengthened if the strategies were more integrally linked to FEMA’s strategic objectives. The September 30 plan does more clearly link strategies with overall goals, although not with specific objectives. Because of this structure, the plan is not as useful as it could be in assigning accountability for achieving specific objectives.

The Results Act requires that strategic plans contain goals and objectives that are expressed in a manner allowing a future assessment of whether they are being achieved. While the goals in FEMA’s September 30 plan are not substantially different from those in the earlier version, the proposed assessment approaches are. The revised approaches raise questions as to their feasibility. For example, the first goal—“protect lives and prevent the loss of property from all hazards”—includes an approach that relies on incomplete modeling and data collection efforts and an implied but unquantified relationship between an increase in readiness and a decrease in risk. Because of the potential assessment difficulties, it is less clear that FEMA’s goals and objectives are expressed in a manner that allows a future assessment of whether they are being achieved.

The plan’s usefulness could be enhanced if it were easier to read and follow. More explanatory language and/or a visual “road map” might help show how the major elements of the plan relate to one another. For example, a few sentences explaining that the operational objectives link to the strategic goals rather than the strategic objectives would be helpful. Finally, providing clarifying and simplified language would enhance the plan’s usefulness to audiences external to FEMA.

FEMA officials reviewed a draft of our observations and provided comments, including on the plan’s provisions for assessing the achievement of goals and objectives. We incorporated their suggested changes where appropriate.

Judy A. England-Joseph, Director, Housing and Community Development Issues; Resources, Community, and Economic Development Division, (202) 512-7631.

On July 7, 1997, we issued a report on the General Services Administration’s draft strategic plan (The Results Act: Observations on GSA’s April 1997 Draft Strategic Plan, GAO/GGD-97-147R).
GSA has since revised its strategic plan and formally submitted it to OMB and Congress on September 30, 1997. As requested, we have reviewed the September 30 strategic plan and compared it with the observations in our July 7 report. On October 16, 1997, we briefed your offices on our further observations on the strategic plan. The key points from that briefing are summarized herein.

We reported in July that the April 28 draft plan included the six components required by the Results Act and that the general goals and objectives in the plan reflected GSA’s major statutory responsibilities. However, our analysis showed that the plan could have better met the purposes of the Act and related OMB guidance. Two of the required components—how goals and objectives were to be achieved and program evaluations—needed more descriptive information on how goals and objectives were to be achieved, how program evaluations were used in setting goals, and what the schedule would be for future evaluations, in order to better achieve the purposes of the Act. The four other required components—mission statement, general goals and objectives, key external factors, and relating performance goals to general goals and objectives—were more responsive to the Act but needed greater clarity and context. We also noted that the general goals and objectives and the mission statement in the draft plan did not emphasize economy and efficiency, as a reflection of taxpayers’ interests. Also, the general goals and objectives were expressed in terms that may be difficult to measure or quantify, and there could have been better linkages between the various components of the plan.

We also reported that the draft plan could have been made more useful to GSA, Congress, and other stakeholders by providing a fuller description of statutory authorities and an explicit discussion of crosscutting functions, major management problems, and the adequacy of data and systems. Although the plan reflected the major pieces of legislation that establish GSA’s mission and explained how GSA’s mission is linked to key statutes, we reported that GSA could provide other useful information, such as listing laws that broaden its responsibilities as a central management agency and that are reflected in the goals and objectives. Similarly, the draft plan did not discuss the potential for crosscutting issues to arise or how these issues might affect successful accomplishment of goals and objectives. It also made no mention of whether GSA coordinated the plan with its stakeholders. The plan was also silent on the formidable management problems we have identified over the years—issues that are important because they could have a serious impact on whether GSA can achieve its strategic goals. Finally, the plan made no mention of how data limitations would affect GSA’s ability to measure performance and ultimately manage its programs. We reported that consideration of these areas would give GSA a better framework for developing and achieving its goals and help stakeholders better understand GSA’s operating constraints and environment.

The September 30 plan reflects a number of the improvements that we suggested in our July 1997 report. The clarity of the September 30 plan is improved, and it provides more context, descriptive information, and linkages within and among the six components that are required by the Act.
Compared to the April 28 draft, the September 30 plan generally should provide stakeholders with a better understanding of GSA’s overall mission and strategic outlook. Our analysis of the September 30 plan also showed that, in line with our suggestion, GSA placed more emphasis on economy and efficiency in the comprehensive mission statement and general goals and objectives components. The September 30 plan also generally described the operational processes, staff skills, and technology required, as well as the human, information, and other resources needed, to meet the goals and objectives. The plan now contains a listing of program evaluations that GSA used to prepare the plan and a more comprehensive discussion of the major pieces of legislation that serve as a basis for its mission, reflecting additional suggestions we made in our July 1997 report. However, additional improvements, which are described in the following section, would strengthen the strategic plan as it evolves over time.

As we discussed in our July 7, 1997, report on the draft plan, the September 30 plan continues to have general goals and objectives that are expressed in terms that may be difficult to measure or quantify. This could make it difficult to determine whether they are actually being achieved. For example, the goal to “compete effectively for the federal market” has such objectives as “provide quality products and services at competitive prices and achieve significant savings” and “open GSA to marketplace competition where appropriate to reduce costs to the government and improve customer service.” However, this goal, its related objectives, and the related narrative do not state specifically how progress will be measured, such as the amount of savings GSA intends to achieve or the timetable for opening the GSA marketplace for competition. OMB Circular A-11 specifies that general goals and objectives should be stated in a manner that allows a future assessment to be made of whether the goals are being met. The OMB guidance states that general goals that are quantitative facilitate this determination, but it also recognizes that the goals need not be quantitative and that related performance goals can be used as a basis for future assessments. However, we observed that many of the performance goals that GSA included in the plan also were not expressed in measurable terms, which could make gauging progress difficult in future assessments.

The strategies component—how the goals and objectives will be achieved—described the operational processes, human resources and skills, and information and technology needed to meet the general goals and objectives. This component is an improvement over the prior version we reviewed, and applicable performance goals are listed with each of these factors. Although GSA chose to discuss generally the factors that will affect its ability to achieve its performance goals, we believe that a more detailed discussion of how each goal will actually be accomplished would be more useful to decisionmakers. To illustrate with a specific example, the plan could discuss the approaches that GSA will use to meet the performance goals related to its general goal of promoting responsible asset management using operational processes, human resources and skills, information and technology, and capital/other resources.
The September 30 plan does discuss, in the general goals and objectives component, an operational/human resource change involving the appointment of a new Chief Measurement Officer in the Public Buildings Service. More discussion of this type of change in the strategies component would help stakeholders better understand GSA’s specific strategies to ensure that it is achieving its goals and objectives. We also noted that the strategies component does not discuss priorities among the goals and objectives. Such a discussion would be helpful to decisionmakers in determining where to focus priorities in the event of a sudden change in funding or staffing. Finally, GSA deferred to the President’s budget its discussion about capital and other resources. We believe it is reasonable to include in this component at least some general discussion of how capital and other resources will be used to meet each general goal.

Although the external factors component in the September 30 plan is much clearer and provides more context than the draft plan we reviewed, the factors are not clearly linked to the general goals and objectives. OMB Circular A-11 states that the plan should include this link, as well as describe how achieving the goals could be affected by the factors. This improvement would allow decisionmakers to better understand how the factors could affect the achievement of each general goal and objective.

The program evaluations component in the September 30 plan provides a listing of the various program evaluations that GSA indicates were used in developing the plan. However, it still does not include a schedule of future evaluations. Instead, the plan states that the schedule for future program evaluations is under development and that GSA intends to use the remainder of the consultation process to obtain input from Congress and stakeholders concerning the issues that should be studied on a priority basis. However, OMB Circular A-11 indicates that the schedule should have been completed and included in the September 30 plan, together with an outline of the general methodology to be used and a discussion of the particular issues to be addressed.

The September 30 plan provides some information that helps decisionmakers understand the crosscutting issues that affect GSA as a central management agency. However, explicit discussion of these issues is limited, and the September 30 plan makes no reference to the extent to which GSA coordinated with stakeholders. The September 30 plan references major management problems in the program evaluations component, but it does not explicitly discuss these problems or identify which problems could have an adverse impact on meeting the general goals and objectives. Our work has shown over the years that these types of problems have significantly hampered GSA’s and its stakeholder agencies’ abilities to accomplish their missions. For example, the plan could address how GSA will attempt to ensure that its information systems meet computer security requirements or how GSA plans to address the Year 2000 problem in its computer hardware and software systems. The plan does reference data reliability in the general goals and objectives component. However, the discussion of data reliability, which is so critical for measuring progress and results, is limited and not as useful as it could be in attempting to assess the impact that data problems could have on meeting the general goals and objectives.
We continue to believe that greater emphasis on how GSA plans to resolve management problems and on the importance of data reliability could improve the plan.

On October 9, 1997, we obtained oral comments from GSA’s Director of Performance Management on a draft of our analysis of GSA’s September 30 plan. She said that GSA generally agreed with our observations about the September 30 plan and said that many of the observations will be addressed in future versions of the plan and in the various performance plans that GSA has drafted. However, she added that GSA is concerned that many of our observations could lengthen the plan, thereby making it less usable or readable to GSA’s broad constituency, including Congress, OMB, GSA employees, and GSA’s various business partners. We believe that GSA can address these concerns in a succinct fashion that is both usable and reader-friendly while, at the same time, better achieving the purposes of the Act and related OMB guidance.

Bernard L. Ungar, Director, Government Business Operations Issues; General Government Division, (202) 512-4232.

On July 11, 1997, we issued a report on the Office of Personnel Management’s (OPM) draft strategic plan (The Results Act: Observations on OPM’s May 1997 Draft Strategic Plan, GAO/GGD-97-150R). OPM’s formally issued strategic plan was submitted to OMB and Congress on September 29, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 11 report. On October 17, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein.

Of the six components required by the Results Act, two—how the goals and objectives will be achieved and relating performance goals to general goals/objectives—were not specifically identified in the draft plan. The remaining four components—mission statement, goals and objectives, external factors, and program evaluations—were discussed in the draft plan. However, each of these components had weaknesses, some of more significance than others. Specifically, the mission statement was too broad, lacking explicit reference to certain key responsibilities; the goals and objectives suggested some results to be achieved but provided little basis for judging how OPM would know whether those goals were being achieved or what OPM’s contribution toward achieving those results might be; some external factors were identified, but others were not included in the plan, which also did not meet the Act’s requirement to link each factor to a particular goal or to identify how it might affect OPM’s success in meeting its goals; and, finally, the program evaluation component discussed customer satisfaction with OPM services but did not indicate how evaluations were used in developing strategic goals or provide a schedule for future evaluations as the Act requires and OMB guidance reiterated.

In addition, while the draft plan did identify a number of OPM’s crosscutting program activities, it did not discuss coordination or indicate that OPM, in developing the plan, coordinated with the entities involved in these crosscutting activities. Including a fuller discussion of OPM’s interrelationship with other agencies in the plan would be consistent with the purposes of the Act. Such a discussion likely would also provide more information for Congress and other stakeholders to use in judging whether OPM’s crosscutting responsibilities should be modified in any way.
The draft plan also did not discuss major management challenges facing OPM, such as (1) ensuring that the federal government remains competitive in obtaining future workers; (2) determining whether federal employee compensation (e.g., pay and benefits) is appropriate; and (3) ensuring that decisions for information technology projects are based on assessments of mission benefits, risks, and costs. Discussion of these challenges as well as of management problem areas where OPM has taken successful corrective actions would be informative and useful to both OPM and Congress.

OPM’s publicly issued strategic plan incorporated several improvements that make it more responsive to the requirements of the Results Act than was the draft plan. In particular, OPM revised its mission statement to recognize its key responsibilities. OPM’s revised mission statement is more results oriented and outlines OPM’s functions and activities as the government’s central personnel agency. OPM has added specific sections to its plan describing OPM and what it does, OPM’s history, and its statutory responsibilities. These sections augment the mission statement by linking it to relevant statutory authorities.

Two of the components required by the Results Act that were missing when we reviewed the draft plan in July—how the goals and objectives will be achieved and relating annual performance goals to general goals/objectives—have been added to the issued strategic plan. OPM has a section under each goal entitled “Strategies for Achieving Objectives” that lists general action items OPM has identified for achieving its goals and objectives. Also, under each strategic goal, OPM has proposed measures to assess progress toward its overall goals. In several cases, OPM has established measurable targets that can be used to gauge the agency’s progress. OPM’s issued plan includes a section on the relationship between its strategic goals and objectives and its forthcoming annual performance plans. This section generally states that strategic goals will be linked to specific performance goals and performance improvements in the annual performance plan and that the plan’s program evaluation section further links the strategic plan to the annual performance plans. The program evaluation section presents OPM’s evaluation agenda and schedule and also describes the evaluations used to provide baseline data for some of OPM’s performance measures.

We observed in July that the draft plan did not assess the potential for overlap and duplication or, conversely, cooperation and coordination with agencies and others on crosscutting issues. The issued plan includes a strategy to identify and solve common problems and avoid duplication of effort by working cooperatively with consortia, agencies, and interagency groups such as the Interagency Advisory Group of Federal Human Resources Directors. OPM’s issued plan includes specific sections on its information technology and financial management systems strategies. For example, the financial management strategies section addresses goals in improving OPM’s financial information. This is a positive step since data reliability is extremely important for obtaining reliable performance measures to evaluate management performance and measure progress and results. Although the plan does not have specific sections that address other challenges, such as attracting and retaining well-qualified employees and determining appropriate compensation, as we had suggested, we note that OPM has included under its strategic goals certain objectives and strategies regarding staffing and examining functions and federal compensation.
Although OPM made several improvements that we suggested in our previous report, some elements of its issued plan could be further improved. OPM’s five strategic goals, which we previously characterized as process oriented as opposed to results oriented, have not been revised from OPM’s May 1997 draft. However, for each of the five goals, OPM has provided a number of corresponding results-oriented objectives. Several of these objectives are time specific and allow future assessments to be made of whether they were or are being achieved. We acknowledge that not all objectives may lend themselves to quantification; however, this element of the plan could be more helpful to decisionmakers if more of the objectives specified time frames, quantifiable targets, or base points against which progress could be measured. For example, neither OPM’s objective for fostering movement by senior executives nor its associated measures of success provides a sense of how much more movement may be desirable or how OPM or others will know when movements of executives have reached a more appropriate level.

As previously mentioned, OPM has added specific strategies for achieving its objectives. However, these strategies generally do not include a description of the processes and the human, capital, and information resources required to achieve the goals and objectives as called for by the Results Act. OPM officials point to the fifth goal and its accompanying strategies as providing this information. However, although one strategy states in part that OPM will “acquire the necessary resources,” thus implying that additional resources will be needed, none of the strategies under this goal specify the necessary resources, costs, or information technology OPM will need to achieve its goals.

One of the elements required by the Results Act that was missing when we reviewed the draft plan in July—the relationship between long-term goals and objectives and annual performance plans—is addressed in the issued strategic plan as previously described. By explicitly recognizing that future annual performance goals will be needed to assess progress toward the targets set in the strategic plan and by adding additional measurable targets to its issued plan, OPM has provided a greater assurance that the combined information in the strategic and annual plans will be useful to OPM and stakeholders in tracking OPM’s progress. However, particularly because a significant number of OPM’s goals and objectives are not expressed in a manner readily susceptible to progress assessments, additional discussion of how OPM will assess progress over the 5-year period covered by the plan would have been useful.

OPM’s discussion of external factors also could be further improved. Specifically, OPM could provide more information on how the external factors may affect goals and also identify additional actions by OPM to reduce potential impact. For example, the issued plan notes that the accelerated loss of experienced managers and personnelists throughout the federal government may affect strategic goals II, III, and IV. However, the plan does not indicate how OPM believes this external factor will affect each goal, nor does it indicate the actions that OPM plans to take to reduce the potential impact of the factor on OPM’s efforts to achieve its goals.

On October 17, 1997, the Acting Director of OPM provided written comments on a draft analysis of the issued strategic plan.
OPM expressed appreciation for our recognition that improvements had been made in its plan and said that it would give consideration to our comments as it further revises and updates the strategic plan and also as it develops the annual performance plan.

Michael Brostek, Associate Director, Federal Management and Workforce Issues; General Government Division, (202) 512-8676.

On July 22, 1997, we issued a report on the National Aeronautics and Space Administration’s (NASA) draft strategic plan (Results Act: Observations on NASA’s May 1997 Draft Strategic Plan, GAO/NSIAD-97-205R). NASA formally transmitted its strategic plan to OMB and Congress on September 30, 1997. As requested, we reviewed this plan and compared it with the observations in our July report. On October 17, 1997, we briefed your staffs on our further observations on the plan. The key points from that briefing are summarized herein.

NASA’s draft strategic plan included four of the six elements required by the Results Act. Those four elements were a mission statement, goals and objectives, strategies for achieving the goals and objectives, and a discussion of external factors. Two of those four elements had weaknesses. The other two required elements—relating annual performance goals to general goals and objectives and providing a description of program evaluations used to establish general goals and objectives and a schedule of future program evaluations—were not explained in enough detail in the draft plan. Although many of NASA’s objectives are shared with or involve other agencies, the draft plan did not discuss whether interagency coordination occurred to address duplication or overlap of activities. Also, the draft plan did not address the importance of working with other agencies to achieve its objectives. Although NASA officials said that activities are coordinated at the program level, such efforts were not discussed in the plan. Major management problems that could affect NASA’s ability to achieve its mission were not explicitly discussed in the draft plan. For example, NASA did not discuss its long-standing problems with managing contracts, managing information technology, and developing a fully integrated accounting system, even though the agency has recognized them as problems and has initiated some steps designed to address them. This information could be beneficial to NASA and its stakeholders because major management problems could impede the agency’s efforts to achieve its goals and objectives.

NASA’s September 30 plan incorporates many improvements, including (1) relating annual performance goals to general goals and objectives and (2) describing how program evaluations were used to establish general goals and objectives and a schedule for future program evaluations. In addition, the relationship between questions and missions has been clarified. NASA’s plan now includes a more detailed discussion of the element relating performance goals to general goals and objectives. A new chart (characterized by NASA as the “Strategic Management System Roadmap”) illustrates the relationship between agency-level goals and the goals and objectives of the four Enterprises. Also, in the crosscutting processes section of the plan, NASA provides examples of how agency goals relate to performance goals. In our July report, we observed that the draft plan did not clearly indicate whether near-term, mid-term, or long-term goals would be used for performance measurement, or whether performance would be measured against the strategic outcomes, agencywide goals, or Enterprise goals.
The plan now includes provisions for reviewing performance goals against the near-term objectives of the Enterprises and the four crosscutting processes that support all agency activities. Responding to our concern that the draft plan did not provide a description of program evaluations used to establish general goals and objectives and a schedule of future program evaluations, NASA has included a description of its strategic plan provisions for semiannual reviews by NASA’s Senior Management Council. These reviews are to take place in March and September of each year. The plan also provides a more detailed explanation of NASA’s planning process. According to the plan, NASA’s Strategic Management System will provide the information and results to fulfill the planning and management requirements of the Results Act. Furthermore, a series of documents, such as the Headquarters Functional/Staff Office Implementation Plans, will explain how NASA plans to implement activities to accomplish its goals. The draft plan included “fundamental questions” posed by the NASA Administrator. In our July report, we noted that these questions were not discussed in the context of the stated missions of the agency. The plan addresses this concern by including the questions in the Strategic Management System Roadmap chart and linking them to the mission, Enterprises, and crosscutting processes. However, the plan still does not discuss the implementation of an integrated financial management system, contract management reform, and information technology management in the context of their having been long-standing management challenges.

On October 15, 1997, NASA’s Senior Advisor for Strategic Planning and Management provided us with the agency’s comments on our observations about its strategic plan. The Senior Advisor made three points. First, he said that NASA strongly believes that the agency has addressed all six required elements of the Results Act. Second, he stated that NASA is pleased that we recognize that many improvements have been made. He added that NASA has gone to great lengths to ensure that concerns about the draft strategic plan expressed by Congress, OMB, and GAO were addressed. In particular, he said that numerous examples have been added to illustrate the fact that NASA is committed to leveraging other agency programs and resources. Third, NASA agrees with our view that developing a strategic plan is a dynamic process and that NASA will consider our suggested improvements when the agency moves forward in future updates to the plan.

Allen Li, Associate Director, Defense Acquisitions Issues; National Security and International Affairs Division, (202) 512-4841.

On July 11, 1997, we issued a report on the National Science Foundation’s (NSF) draft strategic plan (Results Act: Observations on the National Science Foundation’s Draft Strategic Plan, GAO/RCED-97-203R). NSF’s formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 11 report. On October 15, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein.

Of the six elements required by the Act, one—external factors that could affect the achievement of the plan’s goals—was not specifically identified in the draft plan.
Of the remaining five elements, three—goals and objectives, strategies for achieving goals, and how program evaluation was used—were discussed but were not complete. Specifically, some of the goals were not expressed in a measurable form, the strategies to achieve NSF’s goals lacked precision, and the description of program evaluations was not fully developed. In addition, we observed that the draft plan could be more useful to NSF, Congress, and other stakeholders if it provided a more explicit discussion of crosscutting activities, statutory authorities, and NSF’s capacity to provide reliable information to manage its programs or determine if it is achieving its strategic goals. Recognizing crosscutting issues and the coordination required to address them is particularly important for NSF because in the science and technology area, for which the federal government spent $60 billion in fiscal year 1996, the potential for unnecessary overlap is particularly high. While NSF’s draft plan reflected its key statutory authority, other statutes broaden the scope of its responsibilities and are embedded in NSF’s goals and objectives. Explicit consideration of NSF’s capacity to produce reliable information for management decisionmaking is important because it could affect NSF’s ability to develop and meet its goals.

NSF’s issued plan incorporates improvements in several areas, adding or expanding discussions of (1) external factors, (2) outcome goals, (3) current and future program evaluation efforts, (4) interagency crosscutting activities, and (5) how NSF plans to use information technology.

External factors are now addressed in appendix 1 of NSF’s plan. In it, NSF describes the challenges that science and engineering faculty and students face in the current research environment and identifies how the achievement of four of its five goals could be affected by external factors. Consistent with OMB Circular No. A-11, Part 2, NSF briefly describes external factors, their link with a particular goal, and how the achievement of the goal could be affected by the factor. For example, for goal 1—“discoveries at and across the frontier of science and engineering”—NSF raises concern about the quality of research facilities and their influence on the pace of discovery. In particular, NSF relies on the academic research facilities available at colleges and universities to provide a base from which grantees can build their research programs. To the extent that moves toward cost efficiency in academic institutions affect this base, allowing it to deteriorate or failing to maintain it at the state of the art, NSF’s costs for the support of research will increase, which could slow the pace of discovery or change the types of discoveries open to researchers. NSF states that it would need to balance the number of researchers whose work could be supported with the added cost of conducting the research.

NSF’s outcome goals are addressed more fully in appendix 2 and in the body of the plan. In our earlier report on the draft plan, however, we had several reservations about NSF’s goals, some of which remain. In particular, we noted in our earlier report that while NSF’s draft plan provided some general dates for achieving its goals, it did not provide underlying assumptions, projections, or a schedule for initiating or completing significant actions. It also lacked a process for communicating goals and objectives throughout the agency and for assigning accountability to managers and staff for the achievement of goals.
While the goals are still not expressed in a measurable fashion, the plan now describes examples of performance goals for NSF management and programs and includes both investment strategies and action plans for achieving each goal. According to OMB Circular A-11, when the goals are defined in a way that precludes a direct, future determination of achievement, the performance goals and indicators in the performance plan should be used to provide the basis for assessment. According to NSF, the action plans are provided to operationalize each strategy; to provide guidance to program officers; and, through quantitative indicators, to link the goals to the development of annual budgets and performance plans.

Current and future program evaluation efforts are now addressed in appendix 3. NSF discusses how the agency used specific formal and informal evaluations to develop key investment strategies, action plans for the strategic plan, and aspects of performance plans. Also included are details on future evaluations, a rough schedule for their implementation, and how the findings could be useful (1) in assessing NSF’s progress toward outcome goals and (2) for strategic planning discussions. For example, a Results Act pilot project on the physical sciences facilities gave NSF experience with setting performance goals and performance baselines for NSF’s oversight of the construction and operation of large facilities. This effort facilitated NSF’s development of appropriate performance goals for facilities management that are applicable across NSF for its performance plans. In addition, in connection with future evaluations and beginning in fiscal year 1998, NSF is planning to develop a formal process of assessment that includes periodic external assessment of progress toward outcome goals.

Interagency crosscutting activities are also now addressed; among the crosscutting programs are the U.S. Global Change Research Program and the High Performance Computing and Communications Program, both of which NSF participates in.

Information technology in support of NSF’s mission is now discussed in appendix 5, and strategies for addressing information technology needs are identified in the section on “Critical Factors For Success.” In connection with its attention to Year 2000 issues, NSF states that it sent a notification to all grantees describing this potential problem and making clear that grantees bear the responsibility of addressing any difficulties it might create for the conduct of the research and education awards they hold. NSF refers to the fiscal year 1996 Annual Financial Report in its strategic plan; in that report, the Chief Financial Officer noted that NSF continues to meet or exceed virtually every federal goal for financial management performance. In addition, NSF has noted its commitment to manage its systems in support of the Results Act and the Chief Financial Officers Act as a key strategy in its plan.

We observed in July that the draft plan could be enhanced by further discussion of statutory authorities and additional detail on NSF’s use of information. While our earlier report indicated that NSF’s statutory responsibilities were generally reflected in NSF’s draft strategic plan, we also stated that NSF is subject to other statutes related to its core functions. We suggested that providing a description of its responsibilities under its various statutory authorities could be useful, as a supplement to its plan, since the plan includes goals and objectives based on them.
We also noted that it might be helpful to link the stated outcome goal to the relevant statutory objective. In this regard, NSF’s mission statement briefly touches on additional charges to the agency beyond the initial authorizing legislation, but the plan does not attempt to link NSF’s goals and strategies to the relevant statutory objectives. As previously stated, information technology in support of NSF’s mission is now discussed in appendix 5, and strategies for addressing information technology needs are identified in the section on Critical Factors For Success. However, with respect to the high-risk issue of information security, the plan is still silent. Also, the revised plan does not discuss how NSF intends to improve its accounting for property, plant, and equipment in the possession of contractors and grantees in order to attain an unqualified audit opinion—which would seem to be a key goal for the financial management area.

NSF’s performance goals for the results of its investments will appear as descriptive standards developed under the Results Act option to set performance goals in alternative formats. Since the timing of outcomes from NSF’s activities is unpredictable and annual change in the outputs does not provide an accurate indicator of progress toward outcome goals, performance goals for results are not specific to a fiscal year. NSF plans to use data and information on the products of NSF’s investments combined with the expert judgment of external panels to assess NSF’s performance over time and to provide a management tool for initiating changes in direction, when needed. As we stated in our earlier report, quantitative and qualitative indicators are widely used as proxies to assess research and development results because of the difficulties in identifying the impacts of research. Yet, while implying a degree of precision, these indicators were not originally intended to measure long-term research and development results. It remains to be seen whether NSF’s use of descriptive standards to evaluate results will become a valuable source of information for tracking progress and measuring outcomes.

On October 10, 1997, we spoke with NSF’s Assistant to the Director for Science Policy and Planning to obtain the agency’s comments on our observations about its strategic plan. NSF generally supported our observations and agreed that some stakeholders may find useful the addition of an appendix explicitly identifying the links between the goals and strategies and the relevant statutory objectives. In addition, NSF pointed out that attention to information security is addressed in another strategic plan in accord with the Information Technology Management Reform Act. Finally, with respect to NSF’s accounting for property, plant, and equipment, the agency indicated that NSF is taking necessary preliminary steps while awaiting guidance from the Federal Accounting Standards Advisory Board and OMB and expects to address this topic in a forthcoming performance plan.

Victor S. Rezendes, Director, Energy, Resources, and Science Issues; Resources, Community, and Economic Development Division, (202) 512-3841.

On July 31, 1997, we issued a report on the Nuclear Regulatory Commission’s (NRC) draft strategic plan (The Results Act: Observations on the Nuclear Regulatory Commission’s Draft Strategic Plan, GAO/RCED-97-206R). NRC’s formally issued strategic plan was submitted to OMB and Congress on September 30, 1997.
As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 31 report. On October 15, 1997, we briefed your staffs on our further observations on the September 1997 plan. The key points from the briefing are summarized herein.

The draft strategic plan contained two of the six required components of the Results Act—the mission and the goals and objectives. While NRC’s draft strategic plan met some of the requirements for three other components, it did not describe (1) the resources, such as staff skills and experiences, capital, and information, that will be needed to execute the plan’s strategies; (2) how key external factors could affect the achievement of its goals; and (3) its schedule for future program evaluations. Finally, NRC had not included in its draft plan the relationship between its long-term goals and objectives and its annual performance goals. Although NRC shared its draft and consulted with other agencies, the draft strategic plan did not fully discuss some programs and activities that were crosscutting or similar to those of other federal agencies. For example, NRC and the Department of Energy (DOE) share the responsibility for the federal government’s high-level waste disposal program; DOE builds such facilities, which NRC must license. Consequently, NRC is affected by changes in DOE’s strategies and program funding. The draft plan would have benefited from a more thorough discussion of these issues. We also observed that some of the draft plan’s assumptions about licensees’ funding for decommissioning may not have been realistic, which could mean future problems for those licensees not having sufficient funds to properly close their facilities.

NRC’s September 1997 plan incorporated several improvements that make it more responsive to the requirements of the Results Act than was the draft strategic plan. In response to our concern that resource needs to execute strategies were not discussed, NRC added a statement to the September 1997 plan explaining that it did not anticipate any major, unique resource requirements and that its budget will identify the specific resources needed to implement the plan. NRC noted in its September 1997 plan that performance indicators have been established for human, capital, information, and funding resources in its performance plan. NRC explained that in the event legislation is enacted to have NRC oversee DOE’s facilities, changes to NRC’s strategies and resource needs could be required. NRC also added key external factors, which it called “major factors or assumptions,” affecting the achievement of its goals for the two strategic arenas that had none—“Protecting the Environment” and “Excellence.” NRC also expanded its goals section to provide a clearer link between the long-term (general) goals in its September 1997 strategic plan and those to be included in the annual performance plan. NRC added intermediate performance goals from the annual performance plan to the general goals to show the relationship between the final September 1997 plan and the annual performance plan; it also provided additional measures of results.

In July, we observed that NRC did not fully address crosscutting program activities. The September 1997 plan was extensively revised to include a section in the appendix, entitled “Cross-Cutting Functions,” that identifies major crosscutting functions and interagency programs and discusses NRC’s coordination with other agencies, such as DOE and the Environmental Protection Agency (EPA). NRC also revised its plan to discuss its actions to provide reliable performance information.
Most of the data that NRC plans to use to measure performance will come from existing reports to Congress; and, in fiscal year 1998, it plans to identify any primary data systems that require improvement to provide any other information needed. NRC also addressed our concern that it had not discussed legislative needs that it may have had. NRC added a statement to its September 1997 plan to indicate its conclusion that it had not identified a need for any significant legislative changes to achieve its goals and strategies. NRC noted, for certain substrategies related to reactor and nonreactor decommissioning, that it is seeking legislation that would eliminate the overlap in the standard-setting authority of NRC and EPA in connection with Atomic Energy Act sites and materials by recognizing NRC’s and Agreement States’ standards in these areas.

While NRC described in its September 1997 plan its program evaluation process, NRC still needs to include schedules for future program evaluations as required by the Results Act. Moreover, the September 1997 plan does not describe the general methodology to be used and the scope and issues to be addressed in such evaluations. The NRC plan indicates that no unique resources are anticipated, but it does not explicitly describe the resources and processes required to achieve its goals—in particular, its goal for nuclear waste safety. The Act states that the strategic plan is to contain a description of how the goals and objectives are to be achieved, including a description of the operational processes, skills and technology, and other resources required to meet the goals and objectives. To the extent that the achievement of a goal (i.e., the nuclear waste safety goal) relies on the resources or activities of others, NRC should describe those resources and activities in describing how its goals are to be achieved. Discussions of major management challenges and how NRC will meet them should appear in NRC’s plan, either under its “Excellence” goal or as strategies for achieving programmatic goals. While NRC’s 1997 strategic plan provides a set of strategies that are linked to specific goals, these strategies could be more complete. For example, some of the strategies describe only activities planned or directly under way and will not provide the information needed to assess the achievement of the strategic goals. Also, the precise meaning of some of its goals—in particular, its “Common Defense and Security and International Involvement” goal relating to international involvement—could be further clarified.

We observed in July that NRC’s draft strategic plan did not discuss how NRC intended to plan for and use information technology to support the agency’s missions and improve its program performance. NRC modified its draft strategic plan to explain that annual performance plans, which will delineate objective, quantifiable, and measurable goals to be achieved in a given fiscal year, will be developed to further the general goals in the strategic plan. NRC’s September 1997 plan does not indicate how it intends to address such key information technology issues as the Year 2000 problem and the information security problem, or how it intends to plan for and use information technology to support the agency’s mission. Instead, the strategic plan says that these key issues are included in NRC’s fiscal year 1999 performance plan. NRC recognizes that it may be required to assume the regulation of DOE’s nuclear activities in the future, but it has not yet begun planning for that purpose.
NRC has, however, agreed to pursue a pilot program of simulated regulation of DOE, in which regulatory concepts may be tested. NRC and DOE believe that information from the pilot program should be available before legislation to transfer regulatory responsibility is enacted. We had suggested that NRC link all of its goals and strategies to its major statutory authorities to facilitate a better understanding of the diversity and complexity of its overall mission, goals, and strategies. NRC responded to this suggestion by listing the statutory authorities for its general goals under five of its seven strategic arenas, but it did not include specific statutory references for its “Public Confidence” and “Excellence” arenas.

On October 8, 1997, we met with NRC officials, including NRC’s Chief Financial Officer, to obtain NRC’s comments on our observations about the September 1997 plan. NRC said that the Commission is committed to implementing the Results Act and will continue to make improvements to its first strategic plan, including addressing our observations for future improvements, and take the other actions necessary to make managing for results a reality at NRC.

Victor S. Rezendes, Director, Energy, Resources, and Science Issues; Resources, Community, and Economic Development Division, (202) 512-3841.

On July 11, 1997, we issued a report on the Small Business Administration’s (SBA) draft strategic plan (Results Act: Observations on the Small Business Administration’s Draft Strategic Plan, GAO/RCED-97-205R). SBA’s formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 11 report. On October 16, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein.

SBA’s draft strategic plan, as discussed in our July report, lacked two required elements: (1) a discussion of the relationship between the long-term goals and objectives and the annual performance goals and (2) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future program evaluations. The four required elements contained in the plan could have better conformed to the Results Act’s requirements and OMB’s guidance. For example, (1) the mission statement did not encompass SBA’s significant disaster loan program for individuals, (2) many of the goals and objectives appeared less outcome oriented than process oriented, (3) the strategies consisted entirely of one-line statements and were not detailed enough to enable an assessment of how they would help achieve the plan’s goals and objectives, and (4) the plan did not discuss how identified external factors would be taken into account when assessing progress toward goals. Also, because of the way in which the information was presented, the linkages among specific performance measures, strategies, and objectives were not clear. SBA’s draft strategic plan also did not explicitly address the relationship of SBA’s activities to similar activities in other agencies and provided no evidence that SBA coordinated with other agencies in developing its plan. In addition, the plan could have benefited from an explicit acknowledgment of the extent to which SBA must rely on other federal agencies in carrying out its federal procurement-related responsibilities.
The draft plan also discussed SBA’s management processes but did not describe the specific strategies to achieve the objectives.

SBA’s September 30, 1997, strategic plan includes several improvements that make it more responsive to the requirements of the Results Act than the earlier version. At the same time, SBA’s September plan differs significantly from the earlier draft in that it includes, as appendixes, separate strategic plans for SBA’s Office of Inspector General and Office of Advocacy. As discussed below, SBA has not made clear the relationship between the goals and objectives in the plans included in the appendixes and those in the main text of the plan.

With a discussion of (1) the relationship between the long-term goals and objectives and the annual performance goals and (2) how program evaluations were used to establish or revise strategic goals, SBA’s September plan addresses all six required elements. The plan’s five new strategic goals, as a group, are more clearly linked to SBA’s statutory mission than were the previous plan’s seven goals. In addition, the inclusion of date-specific performance objectives that incorporate performance measures makes the strategic goals more amenable to a future assessment of SBA’s progress. For example, under the goal to “increase opportunities for small business success,” one of SBA’s performance objectives is as follows: “By the year 2000, SBA will help increase the share of federal procurement dollars awarded to small firms to at least 23 percent.” Also, SBA significantly improved its plan by more clearly and explicitly linking the strategies to the specific objectives that they are intended to accomplish. Other improvements include a mission statement that includes the disaster loan program for individuals and more accurately reflects SBA’s statutory authorities, a better recognition that SBA’s success in achieving certain goals and objectives in the plan is dependent on the actions of other agencies, and the addition of a section that discusses how SBA’s programs and activities interact with other federal agencies’ programs and activities. While the latter section states that SBA will coordinate with other agencies in the future, it does not provide evidence that SBA coordinated with the other agencies in the plan’s development. Also, the section that discusses SBA’s goal to improve internal controls implicitly addresses management problems that we and others have identified. However, specific strategies to address the identified management problems are not described.

While SBA’s goals are more clearly linked to SBA’s statutory mission, the relationship of one goal—leading small business participation in the welfare-to-work effort—to SBA’s mission is unclear. While the plan’s performance objective places an emphasis on helping small businesses meet their workforce needs, the subsequent discussion implies a focus on helping welfare recipients find employment; for example, the plan states that “SBA’s goal is to help 200,000 work-ready individuals make the transition from welfare to work . . . .” It is not clear why SBA is focusing on welfare recipients only and not on other categories of potential employees to help meet small businesses’ workforce needs.

SBA’s plan mentions certain program evaluations that SBA plans for future fiscal years, as well as the continuation of its goal of monitoring field and headquarters offices.
However, the plan does not contain schedules of future comprehensive program evaluations for SBA’s major programs, including its 7(a) loan program and 8(a) business development program. (The Inspector General’s plan references future audits and evaluations that the Inspector General plans to conduct to improve SBA management.) In addition, the plan acknowledges that SBA needs a more systematic approach for using program evaluations for measuring progress toward achieving its goals and objectives, but it does not outline how SBA will develop and implement such an approach. Also, the strategy sections in the plan do not describe the human, capital, and information resources that are needed to achieve the goals and objectives. The September plan identifies various external factors, such as the economy and congressional support, that could affect the achievement of the plan’s goals. However, with the exception of “interagency coordination,” the plan does not link these factors to particular goals or consistently describe how the factor(s) could affect achievement of the goals and objectives. Furthermore, the plan does not articulate strategies for mitigating the factors’ effects. Also, while recognizing the need for reliable information to measure progress toward the plan’s goals and objectives, the plan notes that SBA currently does not collect or report many of the measures that it will require to assess performance. The plan would benefit from brief descriptions of how SBA plans to collect the data to measure progress toward its goals and objectives. The September plan includes the Inspector General’s plan as an appendix, without cross-reference to any specific SBA goal or objective. Also, the September plan includes an appendix containing a plan for the Office of Advocacy; this material did not appear in SBA’s earlier plan. Generally, the goals and objectives in the Inspector General and Advocacy plans appear consistent with, and may contribute to the achievement of, the goals and objectives in SBA’s plan, but the relationship is not explicit. SBA’s plan makes little mention of the Inspector General and Advocacy plans and does not indicate at all how, or if, the Inspector General and Advocacy activities are intended to help SBA achieve the agency’s goals and objectives. Similarly, the Inspector General and Advocacy plans do not make reference to the goals and objectives in the SBA plan. These plans could be more useful to decisionmakers if their relationships were clearer. We provided copies of a draft of these observations to SBA for review and comment. We received comments from the SBA Administrator. SBA commented that our analysis of the plan provided useful suggestions that will be used in its next draft of the plan. SBA also provided additional information concerning two of our observations. First, SBA stated that the emphasis of the fourth goal—to lead small business participation in the welfare-to-work initiative—is focused on helping small businesses rather than former welfare recipients. Second, SBA commented that the agency works with the Inspector General and Advocacy offices to carry out SBA’s mission, and the Inspector General and Advocacy plans were included as appendixes to its strategic plan to highlight the offices’ statutory independence. Judy A. England-Joseph, Director, Housing and Community Development Issues; Resources, Community, and Economic Development Division, (202) 512-7632. 
On July 22, 1997, we issued a report on the Social Security Administration’s (SSA) draft strategic plan (The Results Act: Observations on the Social Security Administration’s June 1997 Draft Strategic Plan, GAO/HEHS-97-179R). SSA’s formally issued strategic plan was submitted to OMB and Congress on September 30, 1997. As requested, we reviewed the September 30 plan and compared it with the observations in our July 22 report. On October 14, 1997, we briefed your staffs on our further observations on the strategic plan. The key points from that briefing are summarized herein. SSA’s draft strategic plan contained all six of the elements required by the Results Act and reflected its status as an independent agency. To the agency’s credit, the draft plan was forward-looking and provided a solid foundation for SSA’s consultation with Congress and other stakeholders. Also, the goals in the draft plan were more balanced than those of prior SSA plans because they emphasized sound program management in addition to customer service. However, some of the required elements in the plan could have been strengthened in important ways. For example, for some goals, it was not clear what SSA hoped to achieve and how it planned to measure its achievement, and some goals seemed to overlap. In addition, the plan cited many initiatives that SSA intends to begin or continue without additional agencywide resources and without setting priorities or delineating time frames and schedules. As a result, it was difficult to see how SSA could accomplish all of its planned initiatives. SSA went beyond minimum requirements by providing numerous performance measures. However, we noted that it was sometimes difficult to link the performance measures with specific objectives. The draft plan included a description of the external factors that SSA considered in developing the plan, but this discussion could have been improved had SSA more explicitly linked the effects of certain external factors, such as changes in available technology, with goal attainment and had it more clearly explained how it has used and plans to use program evaluations. In addition, the draft plan did not disclose the challenges SSA has faced in redesigning its disability process and did not fully integrate a return-to-work strategy for its disabled beneficiaries throughout the agency’s operations. SSA’s draft plan accurately conveyed the agency’s strong reliance on improved information technology to provide world-class service and to better manage its programs with its existing resources. However, we observed that the plan would be strengthened by adding information on how SSA will use information technology to achieve the agency’s goals and objectives. Finally, the draft could have discussed in more detail SSA’s plans to cope with two technology-related high-risk areas—the Year 2000 computer problem and the need to adequately protect the sensitive data in its computer systems. SSA incorporated several of the changes we suggested in its formally issued plan, but the extent of the revisions and the attendant improvements vary from element to element. Throughout the plan, SSA added pieces of information on processes and technologies it will use to achieve its goals. However, this information, along with the needed staff skills and timetables, is not discussed uniformly for each goal. In response to the need to better link performance measures with specific objectives, SSA added a matrix that presents the goals, objectives, and related performance indicators. 
In most cases, the goals, objectives, and measures are clearly stated in the body of the plan, and the matrix provides a useful summary of how SSA will assess its performance. In other cases, however, it is difficult to relate the discussion of performance measures in the body of the plan with the indicators in the matrix. SSA appropriately included its program evaluation activities in its first goal; it also added more information about the types of program evaluations used or planned for the future and the timetables for some of the planned evaluations. SSA also incorporated interagency coordination into its discussion of goals and strategies. For example, SSA describes its need to cooperate with law enforcement agencies in its discussion on ways to combat fraud. SSA also acknowledged that some of its management challenges were not adequately addressed in its draft plan. SSA recognized SSI as a high-risk area and noted that the agency intends to develop a separate plan to improve the program. For its disability process redesign, SSA added a short explanation of the complexity of the redesign process and recent attempts to narrow its focus. SSA also expanded its discussion of its return-to-work efforts and included information on the studies it plans to undertake. Finally, SSA has improved its plan by including discussions of the Year 2000 problem, the importance of resolving it, and the need to mitigate any future problems with other agencies with which SSA shares information. The plan’s discussion of external factors could be more useful in the future if SSA linked the discussion of these external factors with goal attainment and consistently included any mitigation strategies. As previously stated, the success of several goals is dependent on technological improvements or changes in agency operational processes, but we found that SSA has encountered difficulty implementing some of these changes. SSA has not acknowledged these difficulties, such as the challenges it faces in developing new software to complement the redesigned disability determination process, and the plan could be improved by providing additional information on how information technology strategies will be used to achieve the agency’s goals and objectives. Relative to changes in technology, SSA has not incorporated any plans to begin the difficult task of assessing its current service delivery structure and how it should change in the future. We provided SSA with a draft of our observations on its strategic plan. In its written reply, SSA said that it appreciated that we recognized the improvements made to the draft plan. SSA also stated that it believes that the strategic plan contains as much detail as is possible and appropriate at this point in its planning cycle. It is refining and refocusing its current key initiatives, as necessary, and developing plans for new initiatives to ensure that the agency reaches its objectives. Jane L. Ross, Director, Income Security Issues; Health, Education, and Human Services Division, (202) 512-7215. On July 11, 1997, we issued a report on the U.S. Agency for International Development’s (USAID) draft strategic plan (The Results Act: Observations on USAID’s November 1996 Draft Strategic Plan, GAO/NSIAD-97-197R). USAID submitted its formally issued strategic plan to OMB and Congress on September 30, 1997. As requested, we have reviewed the publicly issued strategic plan and compared it with the observations in our July 11 report. On October 24, 1997, we briefed your staffs on our further observations on USAID’s strategic plan. We summarize the key points from that briefing herein. 
USAID’s November 1996 draft strategic plan included the six elements required by the Results Act. However, two components of the plan—sections on (1) relating performance goals to general goals and objectives and (2) program evaluations—did not contain sufficient information to fully achieve the purposes of the Results Act and related OMB guidance. More specifically, these sections did not include a discussion of performance goals, relevant evaluation findings USAID used to develop its plan, or USAID’s plan for conducting future evaluations. Many agencies are involved in activities directly related to USAID’s mission, goals, and objectives, and there is potential for crosscutting issues. Nevertheless, the draft strategic plan did not address areas of possible duplication and USAID’s efforts to minimize them or the extent to which USAID relies on other agencies to meet its goals and objectives. We also observed that the draft plan did not address key management challenges that the agency faces. The plan provided a general description of recent management initiatives but did not discuss how effective these initiatives have been in resolving critical management problems USAID has acknowledged in nearly all areas of its operations. In particular, the plan did not describe difficulties USAID has encountered in developing a performance measurement system, in reforming its personnel systems, in implementing the Chief Financial Officers Act of 1990 (P.L. 101-576), and in deploying a new information management system that is intended to correct several material weaknesses in its financial management processes. USAID’s publicly issued strategic plan incorporated some improvements that make it more responsive to the requirements of the Results Act. In particular, USAID has developed performance goals related to the agency’s overall goals and objectives. These goals generally appear to be objective, quantifiable, and measurable. The rationale and data sources for the indicators are described in detail in an appendix to the plan. Although the performance goals presented are generally long-term ones, it appears that USAID will be able to derive required annual performance goals from many of them in the future. We did not evaluate the appropriateness of these indicators or the reliability of the data sources cited. USAID’s plan is clearer and more explicit about its long-term goals and objectives. The seven goals are clearly identified in narrative form, and both the goals and related objectives are presented graphically in an appendix. USAID has also improved this element of its plan by omitting other implicit goals, included in the November 1996 draft plan, that made it unclear what USAID intended to achieve. However, USAID’s goals and objectives are targeted at results over which USAID does not have a reasonable degree of influence. As we previously reported, USAID officials have acknowledged that in only a few cases have USAID’s programs been directly linked to the types of country-level development results described in the plan. With regard to strategies to achieve these goals, USAID’s plan now includes the goal of improving its management efficiency and effectiveness, including the steps that it is taking in that regard, and indicators for measuring progress. Consistent with suggestions in our July report, the plan now also includes an explicit discussion of the program, support, and workforce resources USAID believes are necessary to achieve its performance goals. 
The plan presents resource needs at an aggregate level and does not specify the level of resources needed to achieve each of USAID’s strategic objectives. The plan also acknowledges the contribution that USAID’s development partners make toward achievement of the agency’s goals and objectives. In particular, the plan identifies the commitment of other donor countries and multilateral agencies as the major external factor affecting USAID’s performance. USAID’s plan now includes a discussion of crosscutting functions across the U.S. government. It recognizes that other agencies provide technical assistance to developing and transitional countries and that achievement of USAID’s goals is affected by the actions of these agencies. The plan states that mechanisms are in place to reduce or minimize duplication at the field level, and for each goal it identifies those agencies with which it coordinates on related activities. However, it does not indicate what these coordination mechanisms are and lacks the information to demonstrate that they are adequate. The plan implies that only limited coordination with these other agencies on strategic planning has taken place and indicates that USAID anticipates expanded and ongoing interagency dialogue. The plan more fully addresses key principles of the Foreign Assistance Act of 1961 (P.L. 87-195), as we suggested in our July report. For example, it more extensively discusses the principles of coordination of foreign assistance with other donors and supporting development goals chosen by the recipient country. However, it does not specifically address the principle of encouraging regional cooperation by developing countries. We suggested in our previous report that several elements of the USAID plan could be further improved to better meet the purposes of the Results Act. The plan still does not contain sufficient information on program evaluations. It does not show how program evaluations by USAID or external organizations were used to establish strategic goals and does not outline the scope, methodology, key issues, or schedule for future evaluations. Although the plan refers to other documents and means by which USAID communicates evaluation schedules and findings, a summary of that information would be appropriate in this section to demonstrate the role that program evaluation plays in USAID’s strategic planning and results assessment. The plan also does not acknowledge the difficulties USAID has encountered in its efforts to improve. For example, the plan indicates that USAID hopes to improve the availability of financial and program results information. However, it does not convey the significant problems USAID has had to date generating complete, timely, and reliable financial and performance data—problems that hamper USAID’s ability to identify costs and measure performance. Nor does the plan establish a time frame for achieving substantial and verifiable improvement in this area. Frank acknowledgment of specific management challenges in the area of information technology is also absent from the strategic plan. The plan describes progress USAID has made in implementing a new management system but is silent on the major setbacks it is having with this implementation, even though this system will be critical to the success of financial and program management reforms. Similarly, the plan does not address information security and the Year 2000 problem, which we have identified as high-risk areas governmentwide. 
Instead of dealing with these issues directly, the plan refers to a Strategic Information Resource Management Plan that is said to set the direction for USAID to meet its information needs through 2002. A summary of that plan would be helpful, inasmuch as it acknowledges the hurdles USAID must overcome in achieving its goals. While USAID recognizes its dependence on other donors and its susceptibility to factors beyond its control, as we had suggested, we believe that USAID has not adequately emphasized the importance of these issues. The plan could articulate the relative magnitude of USAID’s assistance within the donor community to more clearly convey the extent of USAID’s dependence on the contributions of other donors to meet the performance goals it has established. In addition, the plan could articulate the extent of USAID’s ability to offset country and international conditions that hamper development to more realistically convey the magnitude of the risk and uncertainty that USAID faces in trying to achieve its goals. Further, USAID’s strategic plan does not specifically discuss its Economic Support Fund programs and its programs in the East European and Baltic states and newly independent states of the former Soviet Union. We noted in our July report and continue to believe that the plan could benefit from greater discussion of these activities, which directly serve U.S. foreign policy interests and represent about 60 percent of USAID’s budget. In addition, the plan is not always clear about whether the goals apply to each recipient country individually or to all collectively. In some, but not all, cases this is clarified within the text of the appendix containing the rationale for the indicators used. USAID substantially reorganized the strategic plan from the November 1996 version. Many key elements of the plan have been consolidated into one section with no indication of where one element ends and another begins. Separate sections or increased use of subheadings would significantly improve the presentation and the ease of using this plan. On October 10, 1997, we briefed USAID officials on our observations about the issued strategic plan. On November 3, 1997, USAID officials provided us with comments on a draft of this appendix. They generally believe that we have fairly recorded the progress made to date, but they provided additional comments and clarification of several points, which we have incorporated as appropriate. They acknowledged that in some cases, for the sake of brevity, the plan did not reflect the level of specificity that is called for by OMB’s guidance and ours, particularly with regard to program evaluations, crosscutting functions, and information resource management issues. They noted that such detail is readily available from other USAID sources and believe that including it in the plan would add little value and would unduly increase the plan’s size. We continue to believe that the clarity and credibility of USAID’s strategic plan could be improved with the inclusion of the type of detail we have outlined. USAID officials also contended that the plan acknowledges USAID’s management challenges by outlining management improvement strategies that would resolve the types of problems we raised. However, we believe that an explicit description of management challenges would provide the reader a better sense of the nature and gravity of the problems USAID must overcome and the implications for USAID’s performance if it is not successful in overcoming these problems. Finally, USAID officials disagreed with our observation that USAID has limited influence over country-level results. 
They stated that USAID has been able to influence the use of the resources of other donors, which affects the development goals USAID seeks to achieve. Benjamin F. Nelson, Director, International Relations and Trade Issues; National Security and International Affairs Division, (202) 512-4128.
Pursuant to a congressional request, GAO reviewed federal agencies' strategic plans submitted in response to the Government Performance and Results Act of 1993, focusing on: (1) summarizing its observations on agencies' September plans; and (2) providing additional information on how the next phase of the Results Act's implementation--performance planning and measurement--can be used to address the critical planning issues GAO observed in reviewing the September strategic plans. GAO noted that: (1) on the whole, agencies' September plans appear to provide a workable foundation for Congress to use in helping to fulfill its appropriations, budget, authorization, and oversight responsibilities and for agencies to use in setting a general direction for their efforts; (2) agencies' strategic planning efforts are still very much a work in progress; (3) GAO's reviews of September plans indicate that continued progress is needed in how agencies address three difficult planning challenges--setting a strategic direction, coordinating crosscutting programs, and ensuring the capacity to gather and use performance and cost data; (4) GAO found that agencies can build upon their initial efforts to set a strategic direction for their programs and activities; (5) the next stage in the Results Act's implementation--performance planning and measurement--can assist agencies in addressing the challenge of setting a strategic direction; (6) as an agency develops its performance plan, it likely will identify opportunities to revise and clarify those strategic goals in order to provide a better grounding for the direction of the agency; (7) also, as agencies develop the objective, measurable annual performance goals as envisioned by the Act, those goals can serve as a bridge that links long-term strategic goals to agencies' daily operations; (8) the Results Act's requirements for annual performance plans and performance measurement can also provide a structured framework for Congress, the Office of Management and Budget, and agencies to address agencies' crosscutting programs--the second critical planning challenge; (9) GAO found that although agencies have begun to recognize the importance of coordinating crosscutting programs, they must undertake the substantive coordination that is needed for the effective management of those programs; (10) the third critical planning challenge is the need for agencies to have the capacity to gather and use sound program performance and cost data to successfully measure progress toward their intended results; (11) under the Results Act, agencies are also to discuss in their annual performance plans how they will verify and validate the performance information that they plan to use to show whether goals are being met; and (12) verified and validated performance information, in conjunction with augmented program evaluation efforts, will help ensure that agencies are able to report progress in meeting goals and identify specific strategies for improving performance.
HUD’s purchase card program is part of the governmentwide commercial credit card program established to simplify federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services from vendors. According to Federal Acquisition Regulation (FAR) Part 13.201(b), government purchase cards should be used for micropurchases, which are purchases up to $2,500. The Department of the Treasury also requires agencies to establish approved uses, limitations on the types of purchases, and spending limits. GSA administers the master contract, and HUD’s purchase card policy was derived from the GSA governmentwide credit card program and tailored by HUD to meet its specific needs. During the period of our review—October 2000 through September 2001—HUD was operating under a policy dated October 1995. HUD is currently updating its purchase card policy. HUD’s purchase card policy states that purchase cards are intended to procure general-purpose office supplies and other support needs. The policy requires each approving official to develop a preapproval process to ensure that all purchase card transactions are authorized and in accordance with departmental and other federal regulations. The approving official signifies that a cardholder’s purchases are appropriate by reviewing and signing monthly statements. As required by the Department of the Treasury, HUD established approved uses and limitations on the types of purchases and dollar amounts in its purchase card policy. This policy also includes a detailed list of items that cardholders are prohibited from buying with their government purchase cards. For example, purchase or rental of nonexpendable property (generally defined as property of a durable nature with a life expectancy of at least 1 year), meals, drinks, entertainment or lodging, and construction costs exceeding $2,000 are generally prohibited. Fiscal year 2001 single purchase limits for individual cardholders, which are required to be established by the approving officials and approved by the departmental directors, ranged from $100 to $80,000, and their monthly limits ranged from $100 to $300,000. HUD was in the process of reevaluating and, where applicable, lowering these limits. Bank One currently services the purchase card program at HUD. Internal control is a major part of managing an organization and is key to ensuring proper use of government resources. As mandated by 31 U.S.C. 3512, commonly known as the Federal Managers’ Financial Integrity Act of 1982, the Comptroller General issues standards for internal control in the federal government. These standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. According to these standards, internal control comprises the plans, methods, and procedures used to meet missions, goals, and objectives. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives and help ensure that actions are taken to address risks. Control activities are an integral part of an entity’s planning, implementation, review, and accountability for stewardship of government resources and achieving effective results. They include a wide range of diverse activities. 
Examples of control activities include controls over information processing, physical control over vulnerable assets, segregation of duties, proper execution of transactions and events, and access restrictions to and accountability for resources and records. To determine whether HUD’s existing controls over the purchase card program provided assurance that improper purchases would be detected or prevented in the normal course of business, we interviewed HUD staff and performed walk-throughs of the process. We reviewed HUD’s policies and procedures and prior GAO reports as well as reports by HUD’s Office of Inspector General (OIG) and independent auditors on this topic. To test the effectiveness of internal controls, we selected a stratified random sample of 222 purchase card transactions made during fiscal year 2001 totaling over $1.8 million from a population of purchase card transactions totaling $10.6 million. To identify potentially improper purchases, we requested and obtained fiscal year 2001 transaction data from Bank One and used data mining techniques and other computer analyses to identify unusual transactions and payment patterns in HUD’s fiscal year 2001 purchase card transaction data that may be indicative of improper purchases. To determine whether fiscal year 2001 purchases were adequately supported and made for a valid government use, we requested and analyzed supporting documentation for those transactions that we identified as potentially improper or questionable. While we identified some improper, potentially improper, and questionable purchases, our work was not designed to determine the full extent of improper purchases. We requested comments from the Secretary of Housing and Urban Development. We conducted our work from November 2001 through November 2002 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. HUD staff did not comply with key elements of its purchase card policies that would have helped minimize the risk of improper purchases, including (1) obtaining preapproval for purchases, (2) retaining adequate supporting documentation, (3) conducting supervisory review of all purchases, and (4) periodically reviewing purchase card transactions to ensure compliance with key aspects of the department’s policy. This created an environment where improper purchases could be made with little risk of detection and likely contributed to the $2.3 million in improper, potentially improper, and questionable purchases we identified through our data mining efforts. GAO’s Standards for Internal Control in the Federal Government states that transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. To address these internal control standards, HUD’s purchase card policy contains fundamental controls designed to minimize the agency’s exposure to improper purchases. HUD’s policy requires each approving official to establish a preapproval process for each cardholder to ensure that all purchases are appropriate and for official government use. 
Further, HUD’s policy states that the approving official is required to review, certify, and monitor all cardholder purchases to ensure that they have the necessary approvals before purchases are made. Additionally, HUD’s purchase card policy requires that approving officials review each purchase along with the applicable supporting documentation in order to certify that the purchases were appropriate and a valid use of government funds. Based on our review of HUD’s purchase card process, we found that most approving officials had not established a preapproval process to ensure the appropriateness of purchases before they are made. Only the Information Technology Office routinely obtained authorization prior to purchasing items with the purchase card. The approving official’s review of each purchase card transaction is one of the most important controls to ensure that all purchases are a valid use of government funds. We found that this critical control was seriously compromised because of inadequate supervisory review of supporting documentation by approving officials. To test the effectiveness of this key internal control, we selected and tested a stratified random sample of 222 purchase card transactions made during fiscal year 2001. Of the $1.8 million in purchase card transactions selected in the statistical sample, $1.4 million lacked adequate supporting documentation for the approving official to determine the validity of the purchase. Based on the results of this sample, we estimate that $4,753,253 of the total sampled population of purchases ($10,590,461) made during fiscal year 2001 lacked adequate supporting documentation. (Because the sample was stratified, this estimate reflects the sampling weight of each stratum rather than the raw proportion of unsupported dollars within the sample itself.) Our Standards for Internal Control in the Federal Government states that internal control activities help ensure that management’s directives are carried out. One such activity is the appropriate documentation of transactions. Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. All documentation should be properly managed and maintained. We determined that some of HUD’s records supporting the purchase card program were not properly managed or maintained. For instance, HUD could not provide a complete and accurate list of all approving officials. When we attempted to contact cardholders and their respective approving officials to request supporting documentation using the list the agency provided, at least 28 approving officials provided written notification that cardholders assigned to them according to HUD records were not their responsibility. According to HUD officials, the purchase card program administrator is not routinely informed of changes in approving officials and often does not have the time to update the list regularly. Because HUD does not know who should be approving purchases, there is an increased risk of collusion as well as a general lack of accountability for ensuring the proper use of government funds. Another control activity that was available but not being used by HUD is blocking Merchant Category Codes (MCC). Blocking categories of merchants allows agencies to prohibit certain types of transactions that are clearly not business related, such as purchases from jewelry stores or entertainment establishments. During our review, we found that HUD was not blocking any MCCs. These blocks are available as part of HUD’s purchase card task order, under the GSA SmartPay Master Contract with Bank One. 
Because HUD did not take advantage of this control, there were no restrictions on the types of purchases employees could make during fiscal year 2001—the period of our audit. As a result of our audit work, on March 6, 2002, HUD began using selected MCC blocks. Our Standards for Internal Control in the Federal Government states that internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Internal control monitoring should assess the quality of performance over time and ensure that findings of audits and other reviews are promptly resolved. Program and operational managers should monitor the effectiveness of control activities as part of their regular duties. HUD’s purchase card policy requires the department to perform annual program reviews and report the results, including findings and recommendations, to the purchase card program administrator. However, HUD officials could locate only one such report. This November 2001 report, prepared by a consultant, identified problems that were similar to the findings previously reported by the OIG in February 1999. Both reports documented problems with weak internal controls and insufficient supporting documentation. The consultant’s report also noted that HUD was not performing the periodic program reviews required by its policies and that employees were making improper split purchases. HUD management agreed with the findings in the OIG report and developed and implemented an action plan to address the identified weaknesses. According to HUD OIG staff, its recommendations were implemented and have been closed since September 30, 2000. However, based on our findings, corrective actions taken at that time were not effective. The results of our control testing indicate that HUD’s lack of internal control over the purchase card process allows continued vulnerability to wasteful, fraudulent, or otherwise improper purchases by employees using government purchase cards. Poor internal controls created an environment where improper purchases could be made with little risk of detection. We define improper purchases as those purchases that include errors, such as duplicate charges and miscalculations; charges for services not rendered; multiple charges to the same vendor for a single purchase to circumvent existing single purchase limits—known as split purchases; and purchases resulting from fraud and abuse. We define questionable purchases as those that, while authorized, were for items purchased for a questionable government need as well as transactions for which HUD could not provide adequate supporting documentation to enable us to determine whether the purchases were valid. We identified 88 transactions totaling about $112,000 that were improper split purchases. For example, one cardholder purchased nine personal digital assistants and the related accessories from a single vendor on the same day in two separate transactions just 5 minutes apart. Because the total purchase price of $3,788 exceeded the cardholder’s single purchase limit of $2,500, the purchase was split into two transactions of $2,388 and $1,400. These improper split purchases violate provisions of the Federal Acquisition Regulation and HUD’s own purchase card policy, both of which prohibit splitting purchases into more than one transaction to circumvent single purchase limits. 
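The split-purchase screen reduces to a simple grouping rule: flag any set of two or more same-day charges by one cardholder to one vendor whose combined total exceeds that cardholder's single purchase limit. The sketch below, written in Python with pandas, is illustrative only; it is not the code used in our analysis, and the column names, sample records, and limits table are hypothetical.

```python
import pandas as pd

# Hypothetical transactions; the real input was the fiscal year 2001
# Bank One transaction file, whose layout may differ.
transactions = pd.DataFrame({
    "cardholder": ["A101", "A101", "B202"],
    "vendor":     ["PDA Depot", "PDA Depot", "Office Mart"],
    "date":       pd.to_datetime(["2001-03-15", "2001-03-15", "2001-03-16"]),
    "amount":     [2388.00, 1400.00, 950.00],
})

# Hypothetical single purchase limits, keyed by cardholder.
single_limit = {"A101": 2500.00, "B202": 2500.00}

# Combine same-day charges by the same cardholder to the same vendor.
groups = (transactions
          .groupby(["cardholder", "vendor", "date"], as_index=False)
          .agg(total=("amount", "sum"), charges=("amount", "size")))

# Flag groups of two or more charges whose combined total exceeds the
# cardholder's single purchase limit: the split-purchase pattern.
groups["limit"] = groups["cardholder"].map(single_limit)
splits = groups[(groups["charges"] > 1) & (groups["total"] > groups["limit"])]

print(splits)  # flags A101 / PDA Depot / 2001-03-15, total $3,788
```

A group flagged this way is only a lead, not a finding; as the confirmations discussed below show, the supporting documentation must still be reviewed, since a vendor-initiated split produces the same pattern.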
We received documentation from some cardholders confirming that they split their purchases because the purchases exceeded their single purchase limits, while one cardholder claimed the vendor independently split the purchases. We identified an additional 465 purchases totaling over $913,000 where HUD employees made multiple purchases from a vendor on the same day. Specifically, cardholders made multiple purchases totaling over $2,500 on the same day from the same vendor. Although we were unable to determine definitively whether these purchases were improper, based on the available supporting documentation, these transactions share similar characteristics with the 88 split purchases, and therefore we consider these transactions to be potentially improper. We also found 2,507 transactions, totaling about $1.3 million, with vendors that would not routinely be expected to engage in commerce with HUD. In order to determine whether these questionable purchases were a valid use of government funds, we requested supporting documentation for each purchase. HUD was able to provide us with adequate supporting documentation for 1,324 transactions totaling about $412,000. The department was unable, however, to provide adequate support for the remaining 1,183 transactions (47 percent of total transactions requested) totaling about $869,000 (67 percent of total dollars requested). Additionally, we found 940 transactions, totaling about $554,000, where the purchases were made either on a weekend or a holiday. We requested supporting documentation for each of these transactions. HUD was able to provide us with adequate support for 645 transactions totaling about $189,000. HUD was unable to provide adequate support for the remaining 295 transactions (31 percent of total transactions requested) totaling over $364,000 (66 percent of total dollars requested). In these instances, we were unable to determine what was purchased, for whom, and why. Some examples of the questionable vendor transactions for which we did not receive adequate support included over $27,000 to various retail and department stores such as Best Buy, Circuit City, Dillard’s, JCPenney, Lord & Taylor, Macy’s, and Sears; over $8,900 to several music and audio stores including Sound Craft Systems, J&R’s Music Store, Guitar Source, and Clean Cuts Music; and over $9,700 to various restaurants such as Legal Sea Food, Levis Restaurant, The Cheesecake Factory, and TGI Fridays. Additional examples of questionable or potentially improper purchases we found include $25,400 of “no show” hotel charges for HUD employees who did not attend scheduled training and $21,400 of purchases from vendors who appear to have been out of business prior to the purchase. Because HUD was unable to provide adequate documentation for these purchases, we consider them to be a questionable use of government funds and therefore potentially improper purchases. We also have concerns about HUD’s accountability for computers and related equipment bought with purchase cards because of the large volume of transactions for which it did not have appropriate documentation. For example, our testing revealed that HUD employees used their purchase cards to buy portable assets, such as computer equipment and digital cameras, totaling over $74,500 for which they have provided either no support or inadequate support. 
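The weekend-and-holiday screen is similarly mechanical. The sketch below is again illustrative rather than the code actually used; it assumes hypothetical column names and uses the federal holiday calendar that ships with pandas as a stand-in for the official fiscal year 2001 holiday schedule.

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

# Hypothetical transactions; dates chosen to exercise both flags.
tx = pd.DataFrame({
    "transaction_id": [1, 2, 3],
    "date": pd.to_datetime(["2001-07-04", "2001-07-07", "2001-07-09"]),
    "amount": [310.00, 125.50, 89.99],
})

# Federal holidays falling within fiscal year 2001.
holidays = USFederalHolidayCalendar().holidays(start="2000-10-01",
                                               end="2001-09-30")

# Flag purchases dated on a Saturday or Sunday (dayofweek 5 or 6) or on
# a federal holiday, when government offices are normally closed.
tx["weekend"] = tx["date"].dt.dayofweek >= 5
tx["holiday"] = tx["date"].isin(holidays)
flagged = tx[tx["weekend"] | tx["holiday"]]

print(flagged)  # id 1 (July 4 holiday) and id 2 (a Saturday) are flagged
```

As with the vendor screen, each flagged transaction was then checked against its supporting documentation rather than treated as improper on its face.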
In HUD’s August 28, 2002, purchase card remedial action plan, discussed in more detail in the next section, HUD acknowledged that items bought with purchase cards were not being consistently entered in the department’s asset management system. As a result, portable assets became vulnerable to loss or theft. In our follow-up work, we plan to determine whether these items are included in HUD’s asset management system and are being appropriately safeguarded. OMB’s April 18, 2002, memorandum, M-02-05, requires all agencies to develop remedial action plans to manage the risk associated with purchase card usage. Agencies were required to submit their plans to the Office of Federal Procurement Policy no later than June 1, 2002. HUD’s remedial action plan was submitted to OMB on May 31, 2002. Our review of HUD’s purchase card remedial action plan found that it did not address all the weaknesses we identified. Although HUD’s plan includes steps for resolving and preventing a number of potential problem areas, including the need for (1) adequate monitoring, (2) more frequent internal audits, (3) accountability and penalties for misuse of cards, (4) updating the agency handbook, (5) spending limits in line with purchasing requirements, (6) adequate program records, including proper approving officials, and (7) entering property purchased with purchase cards in the inventory system, the plan falls short in other key areas. For example, the plan did not include requirements for (1) a robust review and approval function for purchase card transactions, focusing on identifying split purchases and other inappropriate transactions, (2) a process to periodically assess the effectiveness of the review and approval process, and (3) specific documentation and records to support the purchase card transactions. In addition, the remedial plan lacked specifics as to how and when HUD would implement it. On August 16, 2002, OMB returned HUD’s remedial action plan and asked that a timeline be incorporated. HUD submitted a new plan to OMB on August 28, 2002. While the revised remedial action plan includes a broad timeline for completion of each objective, we found that it still does not adequately address key control weaknesses we identified, in part because it lacks specific steps necessary to fully address identified problem areas. In addition, the revised remedial action plan does not require the program administration staff to begin designing a monitoring plan to assess HUD’s compliance with key aspects of its purchase card policy until the second quarter of fiscal year 2003 and does not give an estimated date for when this key internal control will be implemented. Additionally, the revised plan does not specifically identify who is responsible for developing or implementing any of the proposed improvements. The problems we identified with HUD’s purchase card program leave the agency vulnerable to wasteful, fraudulent, or otherwise improper purchases. The remedial action plan prepared by HUD is an important first step toward addressing the control weaknesses we identified. At the same time, much still remains to be done to effectively control the inherent risk in HUD’s purchase card program. HUD management will have to effectively follow through on its implementation plan and expand the plan to improve its review and approval process, requirements for documentation and record retention, monitoring process, and remedial action plan, or HUD will continue to be susceptible to misuse of government funds. 
To strengthen its internal control over the purchase card program and reduce HUD’s vulnerability to improper purchases, we recommend that the Secretary direct the Assistant Secretary for the Office of Administration to take the following actions: (1) implement the preapproval requirement in the existing purchase card policy; (2) develop and implement a robust review and approval function for purchase card transactions, focusing on identifying split purchases and other inappropriate transactions, and on performing a detailed review of relevant supporting documentation for each purchase; (3) update the list of approving officials and their designated cardholders quarterly to ensure accuracy and completeness; (4) establish specific requirements for documentation and records to support all purchase card purchases; (5) develop and implement a formal monitoring process to periodically assess the effectiveness of the enhanced review and approval process; (6) revise the remedial action plan for purchase cards to include the specific steps necessary to fully implement the above five recommendations; and (7) follow up on the purchases we identified for which cardholders did not provide adequate supporting documentation to determine the validity and the propriety of the purchases. In written comments on a draft of this report, which are reprinted in appendix I, HUD agreed that further improvements are needed to strengthen the department’s purchase card controls. Although HUD did not specifically agree or disagree with our individual recommendations, the actions being taken or planned by the agency address five of our seven recommendations. For example, in response to our recommendation to implement the preapproval requirement in the existing purchase card policy, HUD stated that it has always had an effective preapproval process in its field offices through the Automated Client Response System (ACRS). While we agree that this system is available for use, during our review of supporting documentation, we found no evidence that cardholders were using this system. To improve its preapproval process at its headquarters, HUD stated that it has implemented the mandatory use of HUD Form 10.4, Requisition for Supplies, Equipment, Forms, Publications, and Procurement Services. In addition, to enhance its review and approval function, HUD said it had provided mandatory training in January 2003 to approving officials on the procedures for reviewing and approving cardholder statements. HUD also said that it was working with Bank One to provide training to cardholders and approving officials on the use of the automated purchase card system and the monitoring tools available through Bank One. HUD also stated that as of January 2003, a review of the approving officials will be performed and the Agency Program Coordinator will make the necessary changes quarterly to ensure the list is accurate and complete. To ensure proper supporting documentation is maintained for all purchases, HUD also noted that it provided training to cardholders and approving officials starting in January 2003. Additionally, HUD stated that in October 2002, a staff person was assigned to begin performing planned internal reviews and random spot reviews of purchase card transactions, with reports to be issued on an interim basis as the reviews are completed, to ensure that proper management and internal controls are maintained over the authorization of purchases and use of the purchase card. These actions will be helpful in strengthening the purchase card controls at HUD. 
HUD did not state what, if any, action it planned to take regarding the two remaining recommendations. Regarding our recommendation to revise its remedial action plan, HUD stated that the plan adequately met the requirements set forth by OMB. While the plan may address the elements required by OMB, we do not believe it lays out an adequate approach for resolving identified control weaknesses. As discussed in the report, the plan lacks the specific steps necessary to fully implement the proposed changes to strengthen internal controls. Concerning follow-up on inadequately supported purchases we identified, HUD stated that it had provided documentation when asked and would provide more if necessary. On July 8, 2002, we provided HUD with a compact disk containing all transactions for which we received either no support or inadequate support during our fieldwork and allowed an additional 3 weeks for the agency to provide the supporting documentation. We have not received any additional supporting documentation since then. It is our view that HUD has a fiduciary duty to follow up on the inadequately supported purchases, which total about $2.1 million and represent 57 percent of the total purchase transactions we tested, to ensure their propriety. HUD offered several additional technical comments, which have been incorporated into this report as appropriate. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. You must also send a written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations more than 60 days after the date of this report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform, the Director of the Office of Management and Budget, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-8341 or by E-mail at [email protected]. In addition to the contact named above, Sharon Byrd, Lisa Crye, Sharon Loftin, and Julie Matta made key contributions to this report.
Due to the Department of Housing and Urban Development's (HUD) increasing use of purchase cards and the inherent risk associated with their use, Congress asked GAO to audit the purchase card program, concentrating on assessing internal controls and determining whether purchases being made are a valid use of government funds. Significant internal control weaknesses in HUD's approximately $10.6 million purchase card program resulted in improper, potentially improper, and questionable purchases in fiscal year 2001. Because of these internal control weaknesses, there was inadequate documentation supporting many purchases GAO reviewed, and as a result, GAO was unable to determine whether these purchases were a valid use of government funds. GAO also found that HUD's remedial action plan for its purchase card program does not adequately address all the control weaknesses GAO identified. These weaknesses created an environment in which improper purchases could be made with little risk of detection and likely contributed to the $2.3 million in improper, potentially improper, and questionable purchases GAO identified. GAO found improper and potentially improper purchases totaling about $1 million where HUD employees either split or appeared to have split purchases into multiple transactions to circumvent cardholder limits. GAO also found that HUD lacked adequate supporting documentation for about $1.3 million in questionable purchases, including those from vendors not expected to engage in commerce with HUD, purchases made on holidays and weekends, and $74,500 in portable assets such as computer equipment and digital cameras. In these instances, it was not possible to determine what was purchased, for whom, and why. The problems GAO identified with HUD's purchase card program leave the agency vulnerable to wasteful, fraudulent, or otherwise improper purchases. Unless HUD makes specific improvements to its review and approval process, requirements for documentation and record retention, monitoring process, and remedial action plan, the department remains susceptible to fraud, waste, and abuse.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after the transmittal of the President’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. SBA is responsible for aiding, counseling, assisting, and protecting the interests of the nation’s small businesses and for helping businesses and families recover from natural disasters. SBA is also a financial institution with significant commitments and exposure. As of September 30, 2000, SBA’s total portfolio was about $52 billion, including $45 billion in direct and guaranteed small business loans and other guarantees and $7 billion in disaster loans. Since its inception, SBA has, among other things, made 1.1 million small business loans and has approved 1.4 million disaster loans to individual homeowners, renters, and businesses of all sizes. SBA also administers the 8(a) business development program, which is designed to help small disadvantaged businesses become successful through counseling, training, and assistance in obtaining federal contracts. SBA also provides entrepreneurial assistance through partnerships with private entities that offer small businesses counseling and technical assistance. This section discusses our analysis of SBA’s performance in achieving its selected key outcomes and the strategies the agency has in place, particularly strategic human capital management and information technology, for achieving these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which the agency provided assurance that the performance information it is reporting is credible. SBA’s performance in achieving the goal of helping small businesses succeed is mixed. SBA reported that it met about half of its quantifiable measures relating to this goal. When goals were not met, SBA did not identify strategies for achieving the unmet goals in the future. At the time SBA issued its fiscal year 2000 report, data were not yet available for the percentage increase of federal prime contracts to small businesses, small disadvantaged businesses, women-owned businesses, and HUBZone small businesses. SBA provided us with data that showed that it met only the small disadvantaged business goal. 
We had difficulty assessing SBA’s progress for this outcome because SBA continues to use output measures without showing how strategies and measures relate to helping businesses succeed. In support of this goal, SBA lists four strategies but does not explain how these strategies support the overall goal of helping businesses succeed. In addition, SBA included several measures of the number of loans but often did not relate an increase in the number of loans to small business success. For example, one goal is increasing the number of loans to veteran-owned businesses to 7,395; the narrative does not explain why this number is important. In the area of access to business development, SBA measures the number of clients being counseled or trained but does not include outcome information on the impact of the counseling or training. SBA’s IG suggested that SBA redefine current output measures because inconsistencies in the methods used to count clients mean that one client may be counted more than once. SBA recognizes that it still relies on output measures and says that, in future reports, it will make an effort to explain how the accomplishment of these output measures supports the established outcome. In addition, two of the performance indicators, “expand research, analyses, and publication of information” and “improve small business impact analyses of regulatory alternatives,” were stated as qualitative indicators, and the actual performance was stated as “achieved” at 100 percent. However, SBA did not provide criteria or a performance indicator that would explain how it assessed accomplishment of these goals. SBA’s performance report includes the following strategies for the accomplishment of this outcome: (1) improving access to capital and credit, (2) increasing access to procurement opportunities, (3) acting as a voice for America’s small businesses, and (4) providing access to entrepreneurial development assistance. These strategies were shown as goals in SBA’s fiscal year 1999 report. Because of the lack of explanation in the plan and report regarding how these strategies relate to helping businesses succeed, we were unable to assess whether they are clear and reasonable. SBA’s performance plan does not generally categorize its human capital or information technology strategies by outcome, so we could not determine specifically how these strategies affect the outcome of helping businesses succeed. However, in its plan, SBA states that automation and asset sales will allow staff to shift their attention from processing transactions to using information to analyze programs, activities, and performance. SBA states that it plans to continue to develop and deliver training in marketing and outreach, commercial credit analysis, lender oversight, and lender relations. Additional strategies for human capital and information technology are discussed in the plan and reports as part of SBA’s overall internal goal of “improving SBA management.” In its performance plan, SBA provides a description of its use of interagency coordination as a strategy for this outcome. For example, SBA states that it meets regularly with the Commerce-directed Trade Promotion Coordinating Committee to discuss challenges, propose program initiatives, work on developing new products, and avoid duplication of effort.
SBA reported that it succeeded in meeting its fiscal year 2000 goal of providing timely service to disaster victims, yet we have concerns about the quality of SBA’s measures. For example, while SBA’s fiscal year 2000 performance report shows that it met its 3-day field presence measure, SBA’s IG determined that this measure has not been applied consistently by disaster area offices: two disaster area offices defined field presence as the date they arrived at the disaster scene, while one area office defined the term as the date it was available to assist disaster victims. SBA did not address this discrepancy in its performance report. Also, for fiscal years 1999 and 2000, SBA’s performance report shows that it met its underwriting compliance rate goal. However, SBA’s IG reported that it did not consider the underwriting compliance rate to be an objective indicator, and SBA did not discuss this issue in its fiscal year 2000 performance report. Furthermore, as shown in table 1, SBA adjusted its target goal annually for its measure of processing disaster loans within 21 days, depending on the extent to which the goal was accomplished in the previous year. In its performance report, SBA explained that its performance deteriorated because of the need to respond to widespread multiple disasters in fiscal years 1998 and 1999, including Hurricane Floyd, which affected 10 states along the East Coast, and major widespread flooding in Texas. However, the report does not explain SBA’s justification for changing its target goal rates in order to align them with its actual annual performance. SBA’s fiscal year 2000 report acknowledges that SBA considers the disaster assistance goal difficult to achieve because of the unpredictability of disaster activity. Our 2001 Performance and Accountability Series report on SBA pointed out that one step that would assist SBA in stabilizing the indicator for this goal would be to modernize its loan processing in order to consistently meet its timeliness goals. Presently, few of the processes followed by SBA loan officers are automated in an integrated manner, and this lack of automation contributes to processing time. SBA officials said that they are taking various actions to revise and clarify the measures for the disaster assistance goal. For example, the 2002 performance plan states that SBA has developed a draft definition of “effective field presence” to be applied by its Area Directors. Also, SBA plans to incorporate a “customer satisfaction indicator” as a measure for this goal, which will be designed to assess issues related to the quality and timeliness of services provided. In its 5-year strategic plan, SBA refers to two completed evaluations to assist in formulating its strategies for establishing this indicator. One survey evaluated customer satisfaction with the services provided to recipients of disaster loans approved after Hurricane Georges; another measured customer satisfaction with the disaster loan making process. According to the strategic plan, the surveys indicated a high customer satisfaction rate. However, we identified the following limitations with the results from SBA’s disaster loan making survey: (1) the survey population may not have been representative of all recipients, (2) those who were denied loans were not surveyed, (3) the survey response rate was low, and (4) the selection of response categories may have skewed the responses toward higher ratings.
Neither SBA’s plan nor its report discusses strategies for accomplishing this goal. However, SBA’s Strategic Plan for fiscal years 2001 through 2006 mentions strategies that include (1) developing a flexible infrastructure of resources that can be applied to a disaster area, (2) using the Internet to facilitate the disaster home loan application process, and (3) outsourcing disaster home loan servicing and carrying out asset sales. Since these strategies were not discussed in the plan and report, we could not determine how they relate directly to goal accomplishment. SBA’s performance plan does not generally categorize its human capital or information technology strategies by outcome, so we could not determine specifically how these strategies affect the outcome of providing assistance to families and businesses recovering from disasters. Additional strategies for human capital and information technology are a part of SBA’s overall internal goal of “improving SBA management.” In its performance plan, SBA provides a description of its use of interagency coordination activities as a strategy for this outcome. For example, SBA states that systematic coordination among federal, state, and local agencies is necessary before and during a disaster to promote efficient, consistent action. SBA states that this coordination is described in the federal response plan and is overseen by the Federal Emergency Management Agency. SBA’s reported success in achieving the portion of its outcome calling for more eligible small disadvantaged businesses to participate in its programs was mixed. SBA reported that it met its output measure that at least 60 percent of small disadvantaged firms (including 8(a) firms) receive federal contracts and its measure that at least 3.4 percent of 8(a) firms receive mentoring. However, SBA reported that it did not meet its measure of certifying a total of 12,000 small disadvantaged business firms as being eligible to receive price credits when bidding on prime contracts or to perform as subcontractors in certain industries. SBA said that it is reevaluating its goal for the number of small disadvantaged business firms it will certify because the number of firms seeking certification was much smaller than projected; however, SBA does not explain the strategies it is using to reevaluate this goal. SBA’s original projection of the number of firms it would certify was based on the number of firms that had previously self-certified as small disadvantaged businesses. Our work has shown that a variety of factors, including uncertainty about the program, the administrative and financial burden of applying, and questions regarding the benefits of the program, may have contributed to the number of small disadvantaged business certifications being lower than SBA anticipated. It is not possible to determine SBA’s progress in accomplishing the portion of its outcome calling for eligible small disadvantaged businesses to become more successful because SBA’s current success measure is not aligned with the mission of the 8(a) business development program, SBA’s key program in this area. SBA’s measure for the 8(a) business development program does not capture program success in terms of the number of competitive firms that exit the program without being unreasonably reliant on 8(a) contracts and that can compete in the mainstream economy, as required by the Small Business Act, as amended.
SBA’s performance report states that it will measure achievement by the percentage of firms that are economically viable 3 years after graduation and notes that SBA began capturing the data for this measure in fiscal year 2000. SBA’s basis for reporting that it has just begun to collect these data is unclear because SBA reported on actual performance in previous years. SBA includes this outcome in its plan as a part of its outcome of helping businesses succeed. The strategies include (1) developing methods to improve access to contracting opportunities for 8(a) firms; (2) working with other agencies to reform and improve the program; and (3) developing legislative, regulatory, and procedural documentation for the reform recommendations. In the human capital area, SBA noted, among other things, that it plans to provide sufficient financial and analytical training to Business Opportunity Specialists to help them more accurately evaluate a company’s business profile and competitive potential. SBA’s performance plan does not generally categorize its information technology strategies by outcome, so we could not determine specifically how this outcome is affected by this strategy. Additional strategies for information technology are a part of SBA’s overall internal goal of “improving SBA management.” In its performance plan, SBA provides a description of its use of interagency coordination activities as strategies for this outcome. For example, SBA states that it participates in monthly meetings with other federal agencies to discuss strategies to increase small business participation in federal contracts. For the selected key outcomes, this section describes major improvements or remaining weaknesses in SBA’s (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. This section also discusses the degree to which SBA’s fiscal year 2000 report addresses concerns and recommendations by the Congress, us, SBA’s IG, and others. SBA made some improvements in its performance reporting, but in certain areas, additional effort is warranted. The fiscal year 2000 performance report includes more clearly labeled headings and provides more guidance so that the reader can quickly identify specific information. For example, the 2000 performance report includes a section that summarizes SBA’s programs and a section that exclusively discusses SBA’s goals, resources, and outcomes. Another strength of the 2000 report is that it concisely presents SBA’s status in responding to management challenges identified by SBA’s IG and documents ongoing and closed GAO and IG reviews, as well as the number of recommendations associated with each. However, several weaknesses we previously noted remain in SBA’s report. For example, the 1999 performance report did not discuss data limitations that could affect the quality of the data SBA used to assess performance. Also, although the 1999 report generally discussed the reasons certain goals were not met, it did not include time frames or schedules for achieving the unmet goals. In comparison, the 2000 performance report did discuss data limitations but did not include time frames or schedules for achieving the unmet goals. The 2000 performance report does include a narrative explanation of why goal accomplishment fell short, but, as with the 1999 report, it does not provide strategies for meeting unmet goals.
In addition, SBA does not sufficiently link its strategies to indicators and measures and does not consistently provide summarized explanations of the data that are presented. Another limitation of the fiscal year 2000 report is that it did not provide a brief summary in the section that addresses the number of indicators that were met; SBA did present this information in its fiscal year 1999 report. Although this information is included elsewhere in the report, we believe that such a narrative leading into the “Performance Indicators” section of the 2000 report would have been helpful in understanding SBA’s approach to presenting these indicators. For example, in fiscal year 1999, SBA had a total of 59 indicators, while in 2000 it had only 16. Because the data are presented without SBA’s explanation for this substantial reduction in indicators, we had no insight into SBA’s rationale. We believe that the lack of discussion of this action makes it difficult to trace a clear link between the identified strategies and the corresponding indicators and measures, a link that decisionmakers need in order to determine whether progress has been made in achieving outcomes. We noted that SBA’s presentation of information in the plan was an improvement over the fiscal year 2001 plan. The layout of data was better designed, and SBA included its organizational chart, as well as more graphics to illustrate its points. Also, the fiscal year 2002 plan addresses SBA’s mission, strategic goals and objectives, core values, and budgetary requirements. Another improvement from the fiscal year 2001 plan is that SBA’s fiscal year 2002 plan generally discusses SBA’s crosscutting activities with other agencies and discusses the human capital resources needed to achieve SBA’s planned performance. However, the plan lacks a clear link showing how strategies relate to outcomes and how they link to indicators and measures. Specifically, it is difficult to ascertain how SBA’s measures will indicate successful performance beyond meeting output targets. We have identified two governmentwide, high-risk areas: strategic human capital management and information security. Regarding strategic human capital management, we found that SBA’s performance plan had goals, but not measures, related to this area, while SBA’s performance report explained its progress in resolving strategic human capital management challenges. For example, in July 2000, we said that SBA had begun to take the steps necessary to better manage its human capital activities but needed to do more. SBA reported that, among other things, it has (1) issued a comprehensive workforce transformation plan, (2) issued a contract to conduct a workload and staffing analysis of SBA headquarters, and (3) provided leadership training to executives and senior managers. With respect to information security, we found that SBA’s performance plan had goals, but not measures, related to information security, and SBA’s performance report explained its progress in resolving its information security challenges. For example, SBA stated that it has, among other things, (1) committed more than $1.2 million in personnel and contract support to enhance computer security, (2) increased the number of authorized personnel for information technology security, and (3) issued an updated computer security policy document. As shown in table 2, we identified four major management challenges facing SBA.
We found that SBA’s performance report discussed progress in resolving many of the challenges we identified, but it did not discuss SBA’s progress in resolving the challenge of streamlining and automating disaster loan assistance to improve timeliness. Of the four major management challenges we identified, SBA’s performance plan had (1) a goal and measures related to one of the challenges; (2) a goal, but no measures, directly related to one of the challenges; (3) a goal and measures indirectly applicable to one of the challenges; and (4) no goals or measures related to the last challenge. GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. In order for such a shift to occur, the information in GPRA plans and reports needs to be presented in a logical format that allows the reader to easily discern how the agency plans to accomplish its goals and objectives and how the measures will indicate successful performance beyond meeting output targets. We had significant difficulty assessing SBA’s progress in achieving its outcomes because of weaknesses in the report and plan. Although improved over last year in terms of presentation, SBA’s fiscal year 2000 performance report and fiscal year 2002 performance plan do not follow GPRA guidance in several areas. SBA did not provide criteria or a performance indicator to explain the accomplishment of its qualitative measures. We believe, as we stated in our fiscal year 1999 report, that SBA is still relying heavily on outputs without sufficiently linking them to achievement of the outcome. SBA’s performance report also lacks information about time frames or schedules and strategies for achieving unmet goals. In our view, SBA’s fiscal year 2000 performance report and 2002 performance plan do not present information in a logical manner linking strategies to outcomes, indicators, and measures. To make SBA’s plan and report more useful for decisionmakers and more consistent with GPRA, OMB Circular A-11, and related guidance, we recommend that the Administrator of SBA ensure that the fiscal year 2001 performance report and fiscal year 2003 performance plan (1) clearly link strategies to outcomes, indicators, and measures; (2) present criteria or a performance indicator to explain the accomplishment of the goal when using qualitative measures; and (3) provide information about strategies, time frames, and schedules for achieving unmet targets. Our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of SBA’s operations and programs, our identification of best practices concerning performance planning and reporting, and our observations on SBA’s other GPRA-related efforts. We also discussed our review with agency officials in the Office of the Administrator and with SBA’s IG. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and generally reflect the outcomes for all of SBA’s programs or activities.
We identified the major management challenges confronting SBA, including the governmentwide, high-risk areas of strategic human capital management and information security, in our January 2001 performance and accountability series and high-risk update; SBA’s IG identified them in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of SBA’s performance data. We conducted our review from April through June 2001 in accordance with generally accepted government auditing standards. SBA provided written comments on a draft of this report. In its response, SBA said that it intended to conduct a major review of its current plan once it has an Administrator confirmed and senior political leadership in place. In revising the plan, SBA said that it would take into account our comments and would fully comply with GPRA, as well as promote President Bush's agenda. SBA disagreed with our conclusion that its fiscal year 2000 performance report and 2002 performance plan do not present information in a logical manner linking strategies to outcomes, indicators, and measures; however, SBA did not comment specifically on the report's recommendations. SBA said it believes the 2002 budget and performance plan offers a clear logical construct that helps the reader to understand how SBA activities can contribute to the success of a firm, as defined by job creation, revenue generation, and viability in the marketplace. SBA also said that it used logical diagrams extensively to convey how activities produce outputs, which in turn contribute to outcomes. We continue to believe that it is difficult for a reader to follow SBA's report and plan. While SBA employed diagrams and tables that should have helped the reader, inconsistencies in SBA's use of terms such as strategies and outcomes make following SBA's logic a laborious process. SBA also provided technical clarifications, which were incorporated as appropriate. SBA’s comments are in appendix II. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the report's date. At that time, we will send copies to appropriate congressional committees, the Acting SBA Administrator, and the Director, Office of Management and Budget. Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-8678. Key contributors to this report were Susan Campbell, Cheri Truett, and Tina Morgan. Table 3 identifies the major management challenges confronting the Small Business Administration (SBA), including the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the management challenges that we and/or SBA’s Inspector General (IG) have identified. The second column discusses the progress, as described in its fiscal year 2000 performance report, that SBA has made in resolving its challenges. The third column discusses the extent to which SBA’s fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and the IG identified.
The SBA IG told us that, in the fiscal year 2000 Performance and Accountability Report, SBA did not update its description of the actions it had taken in response to the challenges to reflect actions taken since the IG’s December 2000 management challenges report. We found that SBA’s performance report discussed the agency’s plans and progress in responding to most of its challenges, but it did not discuss the agency’s progress in resolving the challenge of streamlining and automating disaster loan processing to improve timeliness. The plan discusses SBA’s efforts to improve timeliness, but not as a result of automated loan processing. Of SBA’s 13 major management challenges, its performance plan had (1) goals, but no measures, related to 11 of the challenges and (2) no goals or measures related to two of the challenges. However, SBA discussed strategies for 12 of the 13 challenges.
This report reviews the Small Business Administration's (SBA) fiscal year 2000 performance report and fiscal year 2002 performance plan, required by the Government Performance and Results Act of 1993, to assess SBA's progress in achieving selected key outcomes that are important to its mission. SBA's reported progress in achieving its outcomes is mixed. However, GAO had difficulty assessing SBA's progress due to weaknesses in its performance measures and data. GAO also could not assess some outcomes because SBA neither explained how its strategies relate to the outcomes nor discussed strategies for achieving them. GAO identified some improvements from SBA's prior year report and plan, but several weaknesses persist in SBA's fiscal year 2000 performance report and performance plan. The performance report includes a section that summarizes SBA's programs and a matrix that identifies ongoing and closed audit reviews and the number of recommendations associated with each. However, SBA omitted time frames or schedules for achieving unmet goals, lacked strategies for meeting unmet goals, and failed to adequately link strategies to indicators and measures.
The secretary of the Department of the Interior created OAS in 1973 to resolve several aviation program problems: numerous accidents, improper budgeting and financial management, and poor utilization of aircraft. A 1973 task force, comprising representatives from across the Interior bureaus, attributed these problems to the decentralized aviation program, in which each bureau was responsible for all aviation functions. The secretary charged OAS with responsibility for (1) coordinating and directing all fleet and contract aircraft; (2) establishing and maintaining standards for safety, procurement, and utilization; (3) budgeting for and financially controlling fleet and contract aircraft; and (4) providing technical aviation services to the bureaus. As the program evolved, OAS assumed responsibility for policy oversight and aviation services, while the bureaus became responsible for implementing safety requirements, deciding whether to use fleet or contract aircraft, and scheduling and using their aircraft. OAS works with the Aviation Management Board of Directors to involve the bureaus in formulating policy and managing aviation activities. In addition, since 1996, the bureaus’ aviation managers have also participated with OAS in setting fleet rates and planning for aircraft replacement and projected aviation program requirements. Eight Interior bureaus use OAS’s services to varying degrees to carry out their respective missions, as shown in figure 1. The Bureau of Land Management, which accounted for over one-third of the OAS program in flight hours for fiscal year 2000, uses aircraft to carry out its fire-fighting and resource management missions. The Fish and Wildlife Service and the National Park Service depend heavily on OAS to manage fleet aircraft to achieve their respective missions. OAS is headquartered in Boise, Idaho, with significant operations in Anchorage, Alaska, and additional offices in Atlanta, Georgia, and Phoenix, Arizona. OAS operated with approximately 94 FTE in fiscal year 2000, 63 located in the lower 48 states and 31 in the Anchorage office. In fiscal year 2000, OAS managed 95 government-owned aircraft, 42 based in the lower 48 states and 53 based in Alaska. OAS contracts for maintenance of fleet aircraft in the lower 48 states. In Alaska, OAS contracts for maintenance of fleet aircraft with private vendors but maintains an in-house core maintenance staff. To fulfill its responsibilities, OAS set up functional divisions, including financial and information management, acquisition, and technical services. However, OAS accounts for and reports costs across four lines of business: fleet, contract, rental, and other. Of the $117 million spent on aviation services in fiscal year 2000, OAS received an appropriation of only $800,000 (or approximately seven FTE) to provide oversight of the department-wide aviation policies and procedures. Most of OAS’s costs are financed through a working capital fund, established in the Office of the Secretary to finance a continuing cycle of operations; these costs must be repaid to the fund by the bureaus and others using the services, based on rates determined by OAS. Since 1975, Interior’s aviation accident rate has been cut in half, from 18.8 accidents per 100,000 flight hours in fiscal year 1975 to 8.7 accidents per 100,000 flight hours in fiscal year 2001. A number of OAS efforts have contributed to this reduction.
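The accident rate used throughout these figures is accidents per 100,000 flight hours. As a minimal sketch of the arithmetic (the accident and flight-hour counts below are illustrative inputs, not Interior's actual totals):

```python
# Accident rate as used in the report: accidents per 100,000 flight hours.
# The counts below are made-up inputs to show the arithmetic, not Interior data.
def accident_rate(accidents: int, flight_hours: float) -> float:
    return accidents / flight_hours * 100_000

# e.g., 9 accidents over 103,000 flight hours -> about 8.7 per 100,000 hours,
# matching the fiscal year 2001 figure cited above.
print(round(accident_rate(9, 103_000), 1))  # 8.7
```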
Prior to the establishment of OAS’s aviation safety efforts, safety standards varied from bureau to bureau and between regions within bureaus; in some cases, standards did not exist at all. According to the 1973 task force, virtually no control over aviation operations existed within the department, which resulted in a high accident rate and higher operational costs. OAS officials attribute the department’s reduced accident rate, in part, to the implementation of a standard aviation operating policy. OAS sets pilot qualification and proficiency standards as well as standards for aircraft maintenance and equipment inspections. These standards exceed the Federal Aviation Administration’s (FAA) requirements. In addition, OAS periodically evaluates the bureaus’ implementation of the aviation program, with a special emphasis on safe operations. The OAS Aviation Safety Management Office, reporting to the OAS director, is responsible for policy development, implementation, and review of (1) the department’s aviation safety management and aircraft accident/incident prevention programs; (2) accident and incident investigation; (3) the department’s reporting system for aircraft accidents, incidents, and hazards; and (4) the OAS aviation and occupational safety and health programs. Since April 1995, OAS has been required to report accidents involving fatalities, serious injuries, or substantial damage to the National Transportation Safety Board and to assist the board with accident investigations when appropriate. The OAS Division of Technical Services oversees many day-to-day safety concerns, such as pilot training, aircraft engineering and maintenance, and technical policy development. The bureau directors are ultimately responsible for adherence to standards and the implementation of an effective accident prevention program. Since safety oversight was centralized under OAS, Interior has seen a dramatic decline in the rate of accidents, as shown in figure 2. OAS accepts applicable FAA regulations as baseline criteria for its aviation operations and then applies additional standards in order to reduce accidents that occur during hazardous flying conditions and the specialized operations required by the bureaus’ unique missions. These standards are published in the department’s manual and in OAS’s operational procedures memoranda; additional policy directives issued by the bureaus may be more restrictive but may never be less restrictive than OAS’s standards. These documents specify more stringent pilot qualifications than those required by federal aviation regulations. For example, FAA requires pilots who fly passengers on commuter aircraft to have a commercial pilot certificate, which requires a minimum of 250 flight hours. However, OAS requires its contract pilots to have 1,500 flight hours to be eligible to fly missions for Interior. OAS also requires most of its fleet pilots to have a minimum of 500 hours of pilot-in-command time to operate government-controlled aircraft, although there is no similar requirement in the federal aviation regulations. OAS has also developed additional aircraft maintenance standards for all Interior-owned aircraft and all contract aircraft that operate for Interior. For example, OAS requires a flight test following an aircraft overhaul, a major repair, or a replacement of an engine or propeller.
In addition to requirements for flight tests and 100-hour inspections, OAS developed standards for the inspection and maintenance of special use and mission-related equipment that is not covered by FAA regulations. Although OAS strives to meet or exceed all FAA regulatory standards and manufacturers’ requirements, OAS has granted exceptions to manufacturers’ weight requirements for certain aircraft—eight Cessna 206 Amphibians and one De Havilland DHC-2T Beaver. OAS granted these exceptions to the Fish and Wildlife Service to allow the aircraft to exceed the manufacturers’ weight limitations when the service conducts surveys of migratory birds. The exceptions were required to compensate for special equipment needed to conduct these surveys and to carry extra fuel during long flights over remote areas. OAS granted the exceptions with several stipulations designed to enhance the safety of these operations. Furthermore, to verify that the aircraft are operating under safe conditions, OAS had an engineering analysis conducted on the eight Cessna aircraft and has an engineering analysis in progress on the De Havilland Beaver. OAS also awarded a development contract on June 5, 2001, to provide a replacement aircraft that will meet all migratory bird mission requirements, thereby eliminating the need for all overweight exceptions to policy. From fiscal year 1997 through fiscal year 2000, OAS did not recover about $4 million from Interior’s bureaus. We found two primary reasons why OAS set rates too low to fully recover its costs: (1) actual flight hours were lower than the projected hours, which were based on historical usage, and (2) not all costs were included in the estimates. As a result, OAS had to subsidize the costs of the aircraft used by Interior bureaus in part with funds collected in prior years and held in reserve accounts, such as the reserve fund for replacing aircraft. OAS’s failure to recover all its costs from the bureaus was not attributable to any faults in OAS’s accounting system but to deficiencies in the fleet rate model and the rate-setting process. We found the accounting system capable of producing financial information that is reasonably complete, reliable, and useful to OAS management for the purposes of setting rates. OAS recovers its costs from users by charging for its services. Costs for fleet aircraft are recovered through fleet rates, and costs for contract aircraft are recovered through agreements covering the cost of the contract plus OAS’s costs for servicing these agreements. OAS provides four lines of services—fleet, contract, rental, and miscellaneous (other)—to Interior’s bureaus and other agencies, such as those within the Departments of Defense and of Agriculture. For fiscal years 1997 through 2000, OAS failed to recover about $4 million from the Interior sector of its business while realizing a slight overcharge of approximately $400,000 from agencies outside Interior. Table 1 shows, by business line, where these unrecovered costs occurred. As table 1 shows, the majority of unrecovered costs were in the fleet business line. The fleet business recovered less of its costs because OAS and the bureaus’ aviation managers had not correctly determined and set the appropriate rates. To determine the rates it needs to charge its users to recover the costs of its services, OAS captures the historical costs associated with each aircraft.
OAS then projects future costs based on its analysis of the historical costs, adjusted for inflation, and determines a means by which to allocate projected costs to the appropriate users. Based on this allocation, OAS calculates the hourly and monthly fleet rates using a fleet rate model. OAS then meets with the bureaus’ aviation managers to get their input on the rates and makes subsequent adjustments to its projections of future costs if necessary. Finally, the aviation managers and OAS agree to the fleet rates, and OAS and each bureau sign an interagency agreement that sets the rate. To allow the bureaus lead time to budget for future costs, rates are set 2 years in advance and adjusted, if necessary. OAS and the aviation managers do not have a process to monitor rates periodically to determine whether the rates fully recover costs. Using this process for setting the fleet aircraft rates, OAS has not recovered all costs because it relies on 5-year historical averages of flight hours in its rate calculations and has no provision for projecting future flight hours. If OAS had solicited the bureaus for projected flight hours, which may change from year to year because of changes in mission requirements, it would have had a more accurate projection of usage and therefore could have set the rates more precisely. The use of 5-year historical averages has resulted in an overestimation of the number of flight hours when compared with declining actual usage in recent years. According to an OAS official, the bureaus accept this higher projection of flight hours based on 5-year historical usage because it results in lower rates. For example, if an aircraft has (1) an estimated cost of $100,000 based on historical costs and (2) an estimated usage of 200 flight hours based on the historical averages, the resulting rate would be $500 per flight hour. However, if the actual usage were reduced to 100 flight hours, the actual cost recovery for that aircraft would be only $50,000, or one-half of the projected recovery. As a result, the rate set would not fully recover the costs. While flight hours can be expected to vary to some degree from projected usage, more accurate projections, and the rates based on them, would recover costs more fully. Additionally, OAS did not include in its calculations all the costs that needed to be considered in setting rates. From 1991 through 2000, OAS omitted from its rate calculation approximately $1.9 million in costs for aircraft maintenance in the Alaskan operations. Fleet rates were therefore significantly lower than needed to recover the costs, and OAS did not have a process in place to recognize the error and the resulting underrecovery of costs in a timely fashion. OAS has since taken actions to recoup the costs of the Alaska fleet maintenance operations and now includes these costs in its rate calculations. OAS has also not included in its projections all the costs of employees’ postretirement health benefits and of the Civil Service Retirement System pension plan for current OAS employees engaged in work directly related to aviation services; therefore, it is not recovering these costs from its users. OAS has taken steps to control increases in program costs but could potentially save several million dollars more annually if it implemented a more cost-effective approach to using aircraft.
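Before turning to those cost-control efforts, a minimal sketch of the rate arithmetic described above may be useful: a rate built as projected cost divided by projected hours, with hours taken from a 5-year historical average, under-recovers whenever actual hours come in below the projection. The dollar figures mirror the report's own example; the declining yearly hour series and the function names are illustrative assumptions.

```python
# Sketch of the under-recovery mechanics described above. The $100,000 cost
# and 200-hour projection mirror the report's example; the yearly hour
# series is an illustrative assumption showing why a 5-year average
# overstates usage when hours decline year after year.
def hourly_rate(projected_cost: float, projected_hours: float) -> float:
    return projected_cost / projected_hours

def recovered(rate: float, actual_hours: float) -> float:
    return rate * actual_hours

recent_hours = [220, 210, 200, 190, 180]            # hours decline every year
projected_hours = sum(recent_hours) / len(recent_hours)  # 5-year average = 200
# Note the average (200) already exceeds the latest year's usage (180).

rate = hourly_rate(100_000, projected_hours)         # $500 per flight hour
print(recovered(rate, 100))                          # 50000.0 -> only half the
                                                     # $100,000 cost is recovered
```

The sketch makes the report's point concrete: with usage falling each year, any backward-looking average systematically overstates future hours, so the resulting rate is systematically too low.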
In an effort to control costs, OAS has reduced staff and implemented strategies to operate more efficiently. As a further effort, OAS conducted cost comparisons and determined that it was more cost effective to maintain aircraft under government ownership than to contract for aircraft. Despite these efforts, OAS has not managed the use and scheduling of aircraft, a major factor in the aviation program’s cost. We analyzed the savings attributable to improvements in fleet and contract utilization and found that a moderate increase in average annual flight hours per aircraft could translate into savings of several million dollars annually. However, until OAS sets results-oriented performance goals and measures as part of a strategic aviation planning process and monitors its performance on an ongoing basis, it cannot track its progress in achieving additional program savings. OAS has taken several actions to control the cost of operations and has kept fleet rate cost increases consistent with the producer price index for transportation since 1995. In particular: OAS decreased staffing levels from 124 staff in fiscal year 1992 to 94 staff in fiscal year 2000, a 24-percent decrease. Because most OAS costs are personnel-related, this reduction significantly decreased OAS’s costs. The OAS Acquisition Management Division implemented new contracting procedures to streamline the contracting process and established interdepartmental agreements with the Department of Agriculture’s Forest Service to facilitate aircraft-sharing arrangements. OAS is developing Web-based training for bureau aviation personnel, which reduced training costs by more than $100,000 during the first 6 months of program implementation. To examine the cost effectiveness of government ownership, OAS compared the costs of fleet aircraft with the costs of contracted aircraft. OAS found that, given the existing fleet aircraft, equipment, locations, and missions, retaining the fleet under government ownership was, on average, $243 per flight hour less expensive than contracting for aircraft. In making these comparisons, OAS contracted for two comprehensive studies—one in 1996 and one in 2001—that were to follow the standard requirements laid out in Office of Management and Budget Circular A-76, “Performance of Commercial Activities,” for ensuring that the cost comparisons between government and contracted operations were conducted appropriately. The 1996 study concluded that all but 2 of the 84 aircraft examined were, on average, significantly more cost effective under government ownership. The 2001 study found all but 1 of the 89 aircraft reviewed to be cost effective under government ownership. In 1995, OAS also contracted for a comparison of aviation maintenance costs and solicited bids from private vendors to maintain the fleet in Alaska. As part of the A-76 process, OAS also prepared a bid proposing a streamlined government operation that would lower its maintenance costs by reducing the number of maintenance personnel. While several vendors expressed interest, none ultimately bid on the contract to assume maintenance operations for the Alaskan service. Some bidders took exception to the minimum wage provisions issued by the Department of Labor that were included in the solicitation. OAS requested a clarification regarding wage determination rates but did not receive a reply; therefore, the wage provisions remained in the solicitation as issued.
OAS won the bid to continue in-house maintenance and implemented the streamlined organization, reducing the number of maintenance personnel from 13 to 9. Although OAS was organized in 1973 to help improve the utilization of government-controlled aircraft, the use of fleet aircraft declined from about 350 hours per aircraft in fiscal year 1973 to 246 hours per aircraft in fiscal year 2000. The task force and several recent reports recommended more centralization of scheduling; however, OAS has not been able to fully implement these recommendations because the bureaus determine the aviation resources needed to accomplish their missions. In 1995, the inspector general of the Department of the Interior estimated that Interior incurred $2.3 million in unnecessary costs during 1992 and 1993 because the bureaus did not schedule flights when fleet aircraft were available and did not coordinate these aircraft either within each bureau or among the bureaus. The report suggested that OAS could serve as a focal point for scheduling and use of the government-owned fleet or could designate a bureau as the schedule coordinator within specified regional areas. In 1996, the General Services Administration also reviewed the Interior aviation program and identified the potential for significant savings related to utilization. At the time, the Interior average of 252 hours per aircraft per year was significantly less than the federal average of 350 hours, according to the report. The report estimated that increasing the average hours per aircraft to the federal average of 350 hours per year would result in annual savings of $715,000 in fixed costs and more than $4 million from the disposal of multiple fleet aircraft; the General Services Administration did not estimate any savings for variable costs. We also analyzed the potential for program savings resulting from improved aircraft utilization. Our analysis is meant to illustrate the potential for savings—not to identify what utilization improvements should be made by OAS and the bureaus. We considered two strategies to increase the fleet’s average number of flight hours per year: reduce the size of the fleet or increase the total hours flown. Reducing the number of fleet aircraft could reduce fixed program costs, while increasing the total number of hours flown by fleet aircraft could reduce variable program costs. If fewer fleet aircraft could fly the required missions, then the utilization of the fleet could be increased and the fixed costs associated with some fleet aircraft could be eliminated. As shown in table 2, a 30-percent reduction in the size of the fleet increases average flight hours per aircraft from 221 to 316 hours per year, based on actual fiscal year 2000 fleet flight hours. We also looked at the potential to realize variable cost savings. These savings could be achieved by using fleet aircraft instead of contract aircraft when fleet costs are less than contract costs. For example, according to the OAS 2001 cost comparison study, certain contract aircraft are 100 to 235 percent more expensive to operate. For these aircraft, OAS’s estimated average net variable cost savings between fleet and contract aircraft was $778 per flight hour. As shown in table 3, if it were possible to convert 4,425 flight hours to fleet operations, then the average utilization per fleet aircraft would increase by 20 percent, and the potential variable cost savings would be about $3.4 million annually.
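A minimal check of the two utilization figures above, using the report's own numbers. The 95-aircraft fleet size is backed out of the fiscal year 2000 totals cited earlier rather than stated in the tables, so the 20-percent figure comes out slightly higher here (about 21 percent); the tables may have used a marginally different fleet count.

```python
# Arithmetic behind the two savings illustrations above, using the report's
# figures. The 95-aircraft fleet size is inferred from the fiscal year 2000
# totals cited earlier, not stated in tables 2 and 3.
avg_hours = 221                        # actual FY2000 average flight hours per aircraft
print(round(avg_hours / (1 - 0.30)))   # 316 -> average after a 30% fleet reduction

fleet_hours = avg_hours * 95           # total fleet hours flown
converted = 4_425                      # contract hours shifted to the fleet
print(round(converted / fleet_hours * 100))  # ~21% utilization increase
                                             # (the report rounds to 20 percent)
print(converted * 778)                 # 3442650 -> about $3.4 million in
                                       # annual variable cost savings
```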
However, in order to determine the actual savings potential, OAS and the bureaus would need to conduct a detailed review of opportunities on an aircraft-by-aircraft basis. OAS and the bureaus have not been able to improve aircraft utilization. Citing its history and relationship with the bureaus, OAS did not implement all the utilization recommendations made in the prior studies because it believes that, under departmental regulations, it lacks the authority and responsibility to mandate bureau program and mission requirements and, hence, utilization. While bureau aviation managers point to some examples in which improved utilization has resulted in savings, they have not attempted to make a systemwide improvement in utilization. Bureau aviation managers noted that improvements in utilization are difficult to implement because of other factors: weather, high-priority or time-critical missions, workload peaks, mission-required equipment, and the aircraft’s physical location. OAS does not set results-oriented performance goals and measures as part of a strategic aviation planning process and does not monitor its performance on an ongoing basis. As a result, it cannot effectively track its performance or measure its results on a consistent basis. OAS has tracked its performance on a sporadic basis in response to requests for information, legislative requirements, or, most recently, as part of the rate-setting process, but it has not linked performance measurement to results-oriented goals. For example, OAS tracked the cost and performance of the Alaskan operations as part of the reorganization but discontinued monitoring the operations’ performance after 2 years. Rate setting is a critical component of OAS’s program operations because OAS must recover its costs and maintain adequate funding for operations, future aircraft replacement, and accident reserves. Shortfalls in cost recovery, such as those resulting from inaccurately set rates, would have been less likely to recur year after year if OAS and the bureaus had evaluated whether their reliance on historical averages correctly predicted future costs and usage. Consideration of both historical and projected data would help OAS bring the best available information to bear in estimating usage and setting rates. Periodic comparison of the rates set with the actual costs incurred would have helped ensure that all costs were recovered. OAS acting alone cannot improve the utilization of aircraft. Traditionally, the bureaus have not coordinated their efforts to use their aviation resources in a more cost-effective manner. As a result, fleet aircraft are not being fully utilized; better utilization could lead to significant savings. Absent a strategic aviation plan for the department, it is difficult to analyze future requirements by mission and flight hours. OAS and the bureaus could begin the process of achieving fuller utilization by establishing a strategic aviation plan that, among other things, sets results-oriented performance goals and measures for the department and then, following that plan, analyzing future requirements for the department. Such an analysis could help them identify new opportunities to reduce costs, maintain the quality of services, and maximize the value of the aviation program for the department.
To ensure that all program costs are fully recovered and to improve the rate-setting process, we recommend that the secretary of the Department of the Interior direct OAS to obtain forecasts of future usage from the bureaus and use these forecasts, as well as other relevant information, to set rates, and direct OAS and the bureaus, upon completion of the rate-setting process and calculation of associated payments, to determine whether the rates recovered all costs and, if not, whether adjustments in the process used to calculate the rates are necessary. We also recommend that the secretary of the Department of the Interior instruct the directors of the Office of Aircraft Services and of each bureau to improve the scheduling and use of aircraft and establish performance measures to monitor and assess progress. We provided the Department of the Interior with a draft of this report for review and comment. Interior agreed with the information presented in the draft and stated that our findings and recommendations are reasonable. It stated that the department’s aviation program is complex and multi-faceted due to the widely diverse missions of the bureaus. Further, it stated that our report recognizes that successful aviation management within the department depends on a partnership between OAS and the bureaus to seek more efficient and cost-effective ways to manage the program. The comments of the Department of the Interior and our responses to those comments are included in appendix I. We performed our review at OAS’s headquarters in Boise, Idaho, and at OAS, Fish and Wildlife Service, and National Park Service offices located in Anchorage, Alaska. We discussed the OAS aviation program with aviation managers and others from Interior’s Bureau of Land Management, Fish and Wildlife Service, and National Park Service. For additional perspective, we interviewed private-sector maintenance vendors in Alaska and representatives of the state of Alaska aviation program. We reviewed OAS’s and the bureaus’ aviation program documents and prior audit reports, including laws, regulations, program plans, financial data, fleet rate meeting minutes, and other documents. Although we did not conduct audit procedures designed to completely evaluate or give an opinion on the OAS accounting system and corresponding internal controls, we did review work conducted by the Office of the Inspector General and also performed limited testing of data reliability. We examined OAS’s cost comparisons as part of the A-76 process; we did not, however, evaluate the bureaus’ future mission needs or the flight hour forecasts on which the study was based. To illustrate the potential improvements in aircraft utilization, we relied on OAS’s most recent comparison of contract and fleet costs and applied the estimated costs to actual OAS fiscal year 2000 aircraft and flight hours. We conducted our work from July 2001 through April 2002 in accordance with generally accepted government auditing standards. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to other interested parties and make copies available to others who request them. If you or your staff have any questions about this report, please call me or Peg Reese at (202) 512-3841. Key contributors to this report are listed in appendix II. The following are GAO comments on the Department of the Interior’s letter dated March 27, 2002.
1. Interior agreed with our recommendation that historical and projected data should be used to set rates but stated that our report implies that fleet aircraft flight hour projections might be intentionally overestimated in an effort to reduce planned hourly rates. We disagree. Our report describes the process for making projections and attributes comments about projections to OAS, but it draws no conclusions about intent on the part of OAS or the bureaus. During our review, we noted that when total flight hours decline year after year, projections based on historical averages will inherently overestimate future flight hour requirements. 2. Interior agrees with our findings and recommendation that periodic monitoring of fleet costs and subsequent adjustment of rates would result in more complete recovery of costs. Interior points out that, once rates are established for budgeting purposes, increasing rates after budget allocation would reduce flying hours, which in turn could adversely affect cost recovery. We agree. Our report, however, recommends that actual costs be compared with estimated costs and that adjustments be made as needed. We acknowledge Interior’s agreement to work with the bureaus to periodically compare the rates set with the actual costs incurred, examine usage, and establish a methodology that will assist in more fully recovering fleet costs. 3. We support Interior’s proposed actions to recover personnel costs and its actions to improve the use and scheduling of aircraft. 4. Interior agrees that there may be opportunities to improve the efficiency of its use of fleet aircraft. Interior stated that it will be reviewing its scheduling policies to identify such opportunities. We support this initiative. 5. Interior emphasizes that the department’s aviation program is complex and multi-faceted due to the diverse missions of the bureaus and the high priority of safety and mission accomplishment. We agree with this assessment. Aviation program responsibility is shared by OAS and the bureaus, and we support OAS and bureau partnerships to seek more efficient and cost-effective ways to manage the aviation program. In addition to those named above, Mark Connelly, Robert E. Kigerl, Lisa Knight, Dawn Shorey, and Carol Herrnstadt Shulman made key contributions to this report.
The Department of the Interior has cut its aviation accident rate in half since 1975--from 18.8 accidents to 8.7 per 100,000 flight hours. The department's lower accident rate can be attributed to the implementation of a standard aviation operating policy and to aviation safety standards that exceed the Federal Aviation Administration's requirements. The Office of Aircraft Services (OAS) has not fully recovered aviation program costs. From fiscal years 1997 through 2000, OAS charged bureaus about $4 million less than actual costs, representing an undercharge of about two percent. OAS set rates based on flight hour projections that turned out to be higher than actual usage, and OAS did not include all the cost elements that needed to be considered. Periodic monitoring of the rates and actual costs would help ensure that all costs are recovered. OAS has yet to develop a more cost-effective approach for using aircraft. To cut costs, OAS has reduced its staffing levels by 24 percent since 1992.
The retention allowance authority was established by section 208 of the Federal Employees Pay Comparability Act of 1990 (FEPCA). The act required OPM to issue governmentwide regulations on retention allowances, which it did on March 28, 1991. The act and OPM’s implementing regulations require agencies to document that (1) each allowance paid is based on a determination that unusually high or unique qualifications of the employee or a special need of the agency for the employee’s services makes it essential to retain the employee and (2) in the absence of such an allowance, the employee would be likely to leave federal employment. The agency must also document the extent to which the employee’s departure would affect the agency’s ability to carry out an activity or perform a function deemed essential to the agency’s mission. The regulations also require agencies to prepare retention allowance plans. The plans must include (1) criteria that must be met or considered in authorizing allowances, including criteria for determining the size of an allowance; (2) a designation of officials with authority to review and approve payment of retention allowances; (3) procedures for paying allowances; and (4) documentation and recordkeeping requirements sufficient to allow reconstruction of the actions taken to award the allowance. Agencies are permitted to pay employees allowances of up to an additional 25 percent of their basic pay. An agency may continue to pay a retention allowance as long as the conditions giving rise to the original determination to pay the allowance still exist, but it must conduct a formal review at least annually to determine whether the retention allowance is still warranted and document this review by means of an authorized official’s written certification. To identify which agencies gave the largest number of retention allowances and the highest amounts awarded, as well as to determine the total value of retention allowances and the number of SES employees awarded allowances, we reviewed OPM retention allowance reports for fiscal years 1991 through 1994, which were derived from OPM’s Central Personnel Data File (CPDF). We selected the five agencies that the data showed had the most allowances from fiscal years 1991 through 1994—DOD, Ex-Im Bank, SEC, DOE, and USDA. To assess whether agencies were preparing retention allowance plans in accordance with OPM regulations, we obtained and reviewed agencies’ retention allowance plans and compared the provisions and other information in these documents with requirements in OPM retention allowance regulations. In addition, we interviewed agency officials about their plans. To perform a limited review of agencies’ retention allowance awards, we interviewed agency officials about their award procedures and reviewed individual retention allowance justification documents for 43 selected awards at the five agencies. We did not evaluate the appropriateness of individual allowance amounts or the proportion of agencies’ employees who received allowances. The 43 awards, although randomly selected from groups of retention allowances that were stratified based on grade levels, are not projectable because we were unable to review sufficient numbers of awards at each agency due to time constraints. To determine the extent of OPM’s oversight efforts, we interviewed OPM program and oversight officials and reviewed documentation they provided, including reports statistically analyzing retention allowances by agency. 
We also informed OPM’s program and oversight officials of our preliminary compliance concerns at Ex-Im Bank. Subsequently, OPM officials decided to conduct an in-depth review of Ex-Im Bank’s use of retention allowances and recruitment bonus programs. We provided a draft of this report for comment to the heads of DOD, DOE, Ex-Im Bank, OPM, SEC, and USDA. Their comments are summarized on pages 12 through 14. Written comments from DOD, Ex-Im Bank, and SEC are reproduced in appendixes I through III, respectively. Our review was conducted in the agencies’ Washington, D.C., headquarters offices from November 1994 to September 1995 in accordance with generally accepted government auditing standards. As of September 30, 1994, 354 employees (excluding HHS employees), or about 0.01 percent of the approximately 2.9 million federal civilian employees, were receiving retention allowances. Of these allowances, 334 (94 percent) had been awarded by the five agencies we reviewed. The number and amount of retention allowances awarded at the five agencies in fiscal years 1991 through 1994 are presented in table 1. As shown in the table, the annualized value of retention allowances for these agencies increased from approximately $21,000 in fiscal year 1991 to about $2.8 million in fiscal year 1994. The average allowance at the five agencies during fiscal years 1991 through 1994 was $7,789 per employee. In fiscal year 1994, the highest allowance of $28,925 was awarded by DOD, and the average amounts awarded per agency varied from $4,989 at Ex-Im Bank to $14,928 at DOE. In addition, five retention allowances were awarded to SES employees in four of the five agencies during fiscal years 1991 through 1994. Table 2 presents the average and highest amounts for retention allowances awarded by each of the five agencies in fiscal years 1991 through 1994. Among the five agencies, Ex-Im Bank awarded allowances to the largest proportion of its employees. Ex-Im Bank awarded allowances to 21.7 percent of its 462 employees during fiscal year 1994, while none of the other agencies awarded allowances to more than 0.3 percent of their employees. Table 3 presents the percentage of employees receiving allowances at each of the five agencies during fiscal year 1994. Ex-Im Bank did not appear to comply with the statutory requirement that it determine that the employee was likely to leave if the employee did not receive an allowance, which could result in unnecessarily spending funds for allowances. None of the seven Ex-Im Bank allowances we reviewed contained information that indicated the employee was considering leaving the agency. Bank officials stated that approximately 90 percent of the 100 allowances awarded were initiated based on management’s recognition of the employees’ special talents and their attractiveness to other employers, rather than on more definitive information, such as whether the employees were considering other job offers. Ex-Im Bank officials said that high level performance is a major criterion for selecting award recipients; that is, allowance recipients are generally selected from those employees who have outstanding performance ratings because this group includes those most necessary to the Bank’s successful accomplishment of its mission. Officials said that they time the awards of new retention allowances and the recertification of existing allowances to coincide with the results of their performance appraisal process. 
Ex-Im Bank officials noted, however, that there is no direct linkage between a performance rating and a retention allowance. In justifying the use of performance ratings in awarding retention allowances, Ex-Im Bank officials said that high performing employees have been found to be particularly attractive to the private sector and, therefore, more likely to have opportunities to leave the agency. In 1992, prior to initiating its retention allowance program, Ex-Im Bank requested special pay rate authorities from OPM to pay certain of its employees more money. Ex-Im Bank officials said that OPM denied their request and encouraged them to consider other remedies to their staffing problems, including retention allowances. OPM officials told us that they had discussed various pay and nonpay flexibilities, including retention allowances, with Ex-Im Bank officials. OPM officials also provided us with copies of the governmentwide guidance that they had provided to Ex-Im Bank. They noted that, while they encourage agencies to use available pay flexibilities, agencies need to follow established regulations—for example, determining whether the employee was likely to leave without the retention allowance and documenting the extent to which the employee’s departure would affect the agency’s ability to carry out its mission. OPM officials said that the fact that an employee had a high performance rating is not sufficient to meet these requirements. We discussed with OPM officials our concern that, in the seven cases we reviewed, Ex-Im Bank did not appear to determine that the employee was likely to leave if the employee did not receive an allowance. After these discussions and in furtherance of its oversight responsibility, OPM initiated an in-depth review of Ex-Im Bank’s use of pay flexibilities, including retention allowances and recruitment bonuses. Because of OPM’s oversight role and its decision to review a larger number of Ex-Im Bank cases to pursue the compliance issue on a systemic basis, we decided to forgo further work on the issue. While the five agencies’ retention allowance plans included most provisions required by OPM regulations, including designating officials with authority to review and approve allowances and providing criteria for selecting allowance recipients, DOD, Ex-Im Bank, and SEC did not include their rationales for determining the amount of the retention allowances in any of their plans. Without the documented rationale, it is impossible for an approving official to readily assess the appropriateness of the proposed award amount and to ensure that the agency is not awarding higher amounts than are necessary to retain the employee. A DOD wage administration specialist told us that a specific DOD-wide rationale was not included in its plan because DOD wanted to give the individual approving officials flexibility in awarding allowances, including the authority to determine the amounts of retention allowances. The official said, however, that a planned revision of the plan will indicate that appointing officials should apply criteria for determining retention allowance amounts consistent with OPM’s regulations. SEC said that, as a small agency, it is able to handle the retention allowance process on a case-by-case basis and thus had not seen a need to formalize criteria for determining the size of an allowance. 
Both the Vice President for Management Services and a personnel specialist at Ex-Im Bank said that the omission of a rationale in their retention allowance plan was an oversight. Both individuals said that the agency wants the plan to comply with all of OPM’s regulations and that the plan would be revised accordingly. OPM regulations do not require written recertification when an employee receives an increase in basic pay. However, the agencies we reviewed generally believed that retention allowances should be recertified when their employees received significant increases in basic pay. For minimal increases, such as government-wide pay raises, DOD, DOE, Ex-Im Bank, and USDA do not specifically require recertification, thereby permitting the allowances to continue at the same percentage rates, recognizing that the allowances increase in amounts proportionate to the increases in employees’ basic pay. Ex-Im Bank said that it also allows for automatic recertification for promotions at lower grade levels. Conversely, SEC believed all allowances should be recertified whenever basic pay increases, regardless of the size of the increase. A USDA official told us that, while most approving officials recertify allowances when employees are promoted, some officials have interpreted OPM’s regulations as allowing the allowances to continue at the same percentage rate when any basic pay increase occurs, including those due to promotions. Similarly, DOD officials said that they believed most approving officials recertify promoted employees’ allowances, but that they could not be sure that some officials do not automatically increase allowances in proportion to promotions or other significant pay increases. DOE and Ex-Im Bank officials said that they believed that promotion to a new position with significantly higher pay results in changes to the conditions that justified the allowance and that the regulations therefore require that a new decision be made regarding the retention allowance. An SEC personnel official told us that he believed a recertification is required for any increase to an employee’s allowance. He added that it would be unlikely for SEC to increase the value of an allowance when the basic pay rates increased, because the initial award established an amount that the employee in effect agreed was sufficient to retain his/her services. Thus, it would be more likely that the allowance would be decreased or terminated when the employee’s basic pay was increased. OPM Compensation Administration Division officials said that OPM regulations do not require that the allowance percentage be changed when an employee receives an increase in his/her basic pay. OPM officials pointed out that the law (5 U.S.C. 5754(b)) requires that a retention allowance be stated as a percentage of the rate of basic pay and that this supports the notion that it may be appropriate to adjust retention allowances automatically based on changes in the rate of basic pay. One of the OPM officials told us that OPM intended to allow agencies flexibility in their approaches to these increases, including not necessarily requiring recertification, but that OPM believed that agencies would likely review employees’ allowances when employees received significant increases in basic pay. 
OPM noted that, as part of their responsibility for administering the program, agencies are expected to reduce or terminate a retention allowance whenever they become aware that the original set of conditions justifying the allowance have changed to the extent that the approved allowance is no longer warranted. Further, OPM believes that agency evaluations of changes in a variety of related factors—for example, the employee’s rate of basic pay, an agency’s continuing need for the services of the employee, the employee’s performance, and staffing and labor market factors—like the original determinations for granting retention allowances, are matters of judgment that cannot easily be reduced to a precise formula. Moreover, changes in a single factor, such as an increase in the rate of basic pay, do not necessarily mean that a full review and a new written certification are necessary. OPM believes that approving officials need to weigh all relevant factors and that they are in the best position to determine whether and when a formal review or changes are necessary. In any event, OPM’s regulations require agencies to review each retention allowance annually and to certify in writing whether the payment is still warranted. In carrying out its oversight responsibility, OPM has relied on agencies to report retention allowance activity to OPM’s CPDF. Most federal agencies report specific personnel-related information on the awarding of retention allowances, including the recipient’s name, pay plan, performance rating, basic pay rate, position, and the value of the allowance. OPM has used this information to produce quarterly reports showing active retention allowance data governmentwide. To monitor the program, OPM has done statistical analyses of the agency-provided information, which included determining whether the allowance exceeded the 25-percent limitation and whether the allowance—when added to the total compensation received by the employee during the calendar year—exceeded the rate payable for level I of the Executive Schedule, the current statutory maximum pay rate. OPM officials said that they had not identified any noncompliance using these analyses. Until March 1994, OPM also conducted periodic longitudinal studies of FEPCA’s incentive pay programs, including retention allowances, to examine both OPM’s and agencies’ implementation of the act. The studies, which began in 1991, resulted in three reports that addressed such issues as statistical comparisons, by sex and race, of retention allowances awarded. OPM officials said that they terminated these studies in fiscal year 1995 because they were not finding any significant problems and because of budget concerns. However, OPM said that it conducted on-site compliance reviews of FEPCA actions at randomly selected installations during this same period. As previously noted, we discussed with OPM our concerns about Ex-Im Bank’s retention allowance award process, and OPM subsequently decided to conduct an in-depth review of Ex-Im Bank’s use of retention allowances. Retention allowances were awarded to a limited number of employees governmentwide. With the exception of the Ex-Im Bank, the proportion of agencies’ employees who received allowances was low. Ex-Im Bank did not appear to comply with a statutory requirement in awarding retention allowances, and Ex-Im Bank’s, DOD’s, and SEC’s retention allowance plans did not satisfy an OPM planning requirement. 
Also, OPM’s regulations did not address whether agencies should review and/or recertify allowances when employees receive significant pay increases during the year. Ex-Im Bank appeared to award allowances without determining that employees would be likely to leave in the absence of allowances, a practice which could result in unnecessarily spending allowance funds. OPM, as the agency responsible for governmentwide oversight of retention allowances, is conducting a review of compensation practices at Ex-Im Bank that should enable it to determine whether Ex-Im Bank needs to more adequately address this issue. Accordingly, we decided to forgo further work on the issue. The retention allowance plans for DOD, Ex-Im Bank, and SEC did not include criteria for determining the amounts of allowances. Without a documented agencywide rationale, lower level managers did not have guidance for establishing the amounts of individual allowances. In addition, since the individual award justifications developed by these managers were not required to include the rationale for the award amount, and thus frequently did not, agency officials and others reviewing the awards lacked sufficient information with which to assess the appropriateness of the amounts awarded. Thus, the agencies could not ensure that the amounts awarded were not in excess of amounts necessary to retain the employee. OPM’s regulations do not require that allowances be reviewed or recertified in writing whenever there are significant increases to employees’ basic pay during the year. As a result, agencies may not be reviewing or recertifying allowances in conjunction with increases to employees’ basic pay in circumstances where such increases might affect the conditions justifying the allowances. In such circumstances, a review might make a significant difference. We recommend that the Chairman of Ex-Im Bank, the Secretary of Defense, and the Chairman of SEC include the required criteria for determining the value of retention allowances in their retention allowance plans. We recommend that the Director of OPM take action to ensure that retention allowance regulations are revised to explicitly address whether, and if so when, an agency should review or recertify the amount of an allowance as a result of basic pay rate increases or other relevant changes in the conditions justifying the allowance. DOD, DOE, Ex-Im Bank, OPM, SEC, and USDA provided comments on a draft of this report; these comments are summarized below. DOD, Ex-Im Bank, and SEC provided written comments, which are included in their entirety in appendixes I through III, respectively. We received oral comments from the Deputy Assistant Secretary for Human Resources, DOE, on September 25, 1995; the Chief of the Compensation Administration Division, OPM, on September 26, 1995; and the Director of Personnel, USDA, on September 26, 1995. DOD, DOE, SEC, and USDA concurred with the findings and conclusions in our report. In addition, DOD and SEC agreed to implement our recommendation to them and suggested some technical changes, which we have incorporated in the report. OPM offered a proposed revision to our recommendation that OPM revise its regulations to clearly define whether, and if so when, reviews or recertifications should be performed. OPM also provided technical comments, which we incorporated where appropriate. 
Ex-Im Bank granted that it may have “cut some procedural corners” but distinguished this from substance by asserting that its actions were consistent with legislative intent and regulatory guidelines as applied to its particular human resources requirements. Ex-Im Bank also expressed concern that we believed their rationales for determining allowance amounts were suspect or in some way unprincipled because the rationales were insufficiently documented. Ex-Im Bank did concur with our recommendation that it incorporate criteria for determining the amount of an allowance in its plan. While we agree that a failure to document retention allowance decisions—including the reasoning behind those decisions—is a procedural deficiency, we believe the Bank’s apparent failure to systematically determine that, in the absence of an allowance, an employee would be likely to leave would, if confirmed, be a deficiency of substance. This is the reason we decided to inform OPM of our concerns regarding this issue. Further, both the act and OPM regulations clearly require that each allowance paid should include a determination that, in the absence of such an allowance, the employee would be likely to leave. We note that the Ex-Im Bank’s First Vice President and Vice Chairman, in commenting on a draft of this report, confirmed that he did not typically base his award decisions on whether there might be an actual or imminent competing offer of employment. However, we neither state nor intend to imply in the report that Ex-Im Bank’s rationales for allowance amounts were suspect or unprincipled. To avoid the misinterpretation that we viewed Ex-Im Bank’s apparent noncompliance as a procedural rather than a substantive deficiency, we eliminated the wording in our draft report that could imply that all five agencies generally complied with federal requirements. We now make it clear that our review showed that Ex-Im Bank did not appear to comply with the “likely to leave” requirement, but we decided to forgo further work when OPM decided to start an in-depth review of Ex-Im Bank’s award decisions. Our draft wording that the agencies generally complied with the requirements was not intended to excuse the Ex-Im Bank’s apparent noncompliance with that specific requirement. OPM would prefer that we merely recommend that it consider revising the regulations. We continue to believe, however, that, given the agencies’ varying interpretations of OPM’s regulations, OPM needs to explicitly address the issue of whether and when retention allowance reviews and recertifications, other than the current annual requirement, should be conducted. We did modify the draft recommendation, as OPM suggested, to include other reasons for reviewing allowances in addition to the basic one of a pay rate increase. As arranged with your office, we plan no further distribution of this document until 14 days after the date of issuance unless you publicly announce its contents earlier. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the House Subcommittee on Civil Service; the Secretaries of Agriculture, Defense, and Energy; the Chairmen of Ex-Im Bank and SEC; and the Director of OPM; and will make copies available to other interested parties. Major contributors to this report are listed in appendix IV. If you have any questions about this report, please call me at (202) 512-7680. 
The following is GAO’s comment on Ex-Im Bank’s letter dated September 27, 1995. While we made most of the language changes proposed by Ex-Im Bank, we did not revise our report sections addressing allowance determinations. Our reasons for not revising the sections on determinations are addressed on page 13.

Alan Belkin, Assistant General Counsel
Robert Heitzman, Senior Attorney
Pursuant to a congressional request, GAO reviewed federal agencies' use of retention allowances as salary supplements to retain essential employees, focusing on: (1) the total and average value of the allowances from 1991 to 1994; (2) the extent to which Senior Executive Service employees received retention allowances; (3) whether there were any compliance issues involved in retention allowance awards; (4) the agencies' adherence to Office of Personnel Management (OPM) retention regulations; and (5) the extent to which OPM oversees the use of retention allowances. GAO found that: (1) 354 civilian employees received retention allowances as of September 30, 1994; (2) although the Department of Health and Human Services did not report its allowance data, 20 of its employees received allowances during fiscal year (FY) 1994; (3) retention allowances totaled $2.8 million annually and averaged $7,789 annually per employee; (4) the Export-Import Bank (Eximbank) awarded allowances to 21.7 percent of its employees in FY 1994, while the other agencies awarded allowances to 0.3 percent or fewer of their employees; (5) Eximbank did not determine whether prospective recipients would have left their positions if they did not receive retention allowances; (6) the criteria the Department of Defense, Eximbank, and Securities and Exchange Commission (SEC) used to determine the amount of employee allowances could not be determined; (7) OPM regulations do not require agencies to review or recertify retention allowances affected by pay increases; and (8) OPM has developed regulations and conducted longitudinal studies of Federal Employees Pay Comparability Act (FEPCA) actions at selected agencies.
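Because a retention allowance is stated as a percentage of basic pay, its dollar value moves mechanically with pay, which is the behavior finding (7) turns on. The sketch below illustrates the 25-percent statutory cap and that proportional growth; the function name and figures are ours, invented for illustration, not OPM's.

    # Hypothetical sketch of percentage-based allowance mechanics under FEPCA.
    MAX_ALLOWANCE_PCT = 0.25  # statutory cap: 25 percent of basic pay

    def allowance_amount(basic_pay: float, pct: float) -> float:
        """Annual allowance for a percentage-based award; enforces the 25% cap."""
        if not 0 < pct <= MAX_ALLOWANCE_PCT:
            raise ValueError("allowance must be a positive percentage, at most 25%")
        return basic_pay * pct

    pay_before, pay_after = 80_000.0, 84_000.0  # a hypothetical 5 percent raise
    pct = 0.10
    print(allowance_amount(pay_before, pct))   # 8000.0
    print(allowance_amount(pay_after, pct))    # 8400.0: the dollar amount grows
                                               # with pay, with no new written
                                               # certification required by the rules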
The Navy’s purchase card program is part of the governmentwide Commercial Purchase Card Program established to simplify federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. DOD has mandated the use of the purchase card for all purchases at or below $2,500 and has authorized the use of the card to pay for specified larger purchases. DOD has seen significant growth in the program since its inception and estimated that in fiscal year 2001 about 95 percent of its transactions of $2,500 or less were made by purchase card. The purpose of the program is to simplify the process of making small purchases. It allows cardholders to make micropurchases of $2,500 or less and pay for training of $25,000 or less without having to execute a contract. The government purchase card can also be used for larger transactions, but only under a contract. For larger transactions, the card is referred to as a “payment card” because it pays for an acquisition made under a legally executed contract.

The Navy uses a combination of governmentwide, DOD, and Navy guidance as the policy and procedural foundation for its purchase card program. The Navy purchase card program operates under a governmentwide General Services Administration purchase card contract, as do the purchase card programs of all federal agencies. In addition, government acquisition laws and regulations such as the Federal Acquisition Regulation provide overall governmentwide guidance. DOD and the Navy have issued clarifying guidance to these regulations.

The Under Secretary of Defense for Acquisition, Technology, and Logistics—in cooperation with the Under Secretary of Defense (Comptroller)—has overall responsibility for DOD’s purchase card program. The DOD Purchase Card Joint Program Management Office, in the office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, is responsible for overseeing DOD’s program. The Commander of the Naval Supply Systems Command (NAVSUP) has been designated the Navy’s chief contracting officer, and under his command is the Navy purchase card program manager. However, primary day-to-day management responsibility for the program lies with the agency program coordinators in the Navy’s major commands and local units. Figure 1 depicts the management hierarchy of the Navy purchase card program for the units at the four major commands that we audited. The figure shows each major command where we conducted audit work; for each location we selected for a case study analysis, the figure also shows the total number of subordinate-level agency program coordinators, approving officials, and cardholders. It is important to note that at the major commands and the subordinate-level units that we audited, most agency program coordinators, approving officials, and cardholders were not dedicated to the purchase card program on a full-time basis. Rather, most individuals had additional job responsibilities and performed purchase card duties when needed.

At the major commands and units audited, personnel in three positions—agency program coordinator, cardholder, and approving official—are collectively responsible for providing reasonable assurance that purchase card transactions are appropriate and meet a valid government need. Agency program coordinators work at both the major command and unit levels.
Major command agency program coordinators operate under the direction of the command’s director of the contracting office and are responsible for the day-to-day management, administration, and oversight of the program. Unit-level agency program coordinators develop local standard operating procedures, issue and cancel cards, train cardholders and approving officials, and work with other Navy units and the card-issuing bank. Cardholders are to make purchases, maintain supporting documentation, and reconcile their monthly statements. Approving officials, who typically are responsible for more than one cardholder, are to review each cardholder’s transactions and reconciled statements, and certify for payment their cardholders’ purchases. Appendix I provides additional details on the Navy purchase card program.

Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards. It provides discipline and structure as well as the climate which influences the quality of internal control.
GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

We found that the Navy and Marine Corps units we audited had not established an effective internal control environment in fiscal year 2001, and although significant improvements have been made, further action in several areas is necessary. Specifically, we found that in fiscal year 2001, these locations did not (1) effectively evaluate whether approving officials maintained reasonable spans of control, (2) limit purchase card credit limits to historical procurement needs, (3) ensure that cardholders and approving officials were properly trained, (4) use the results of purchase card program monitoring efforts, and (5) establish an infrastructure necessary to effectively monitor and oversee the purchase card program. As a result of our July 30, 2001, testimony, the Navy and DOD have taken significant actions to improve purchase card controls, including reducing the number of cardholders by over 50 percent and establishing a Charge Card Task Force to further improve the purchase card processes and controls.

Management plays a key role in demonstrating and maintaining an organization's integrity and ethical values, especially in setting and maintaining the organization's ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate.
GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Since the July 30, 2001, congressional hearing, the DOD Comptroller, the DOD Purchase Card Joint Program Management Office, and NAVSUP have issued a number of directives and policy changes citing previous audit findings and the need to improve both the purchase card control environment and adherence to control techniques. Specifically, in response to our November 2001 report, the Navy has acted on or plans to implement all 29 of our recommendations to improve controls over the purchase card program. While we believe that some of the Navy’s actions to implement our recommendations are not sufficient to achieve the necessary changes, planned and implemented actions to date are a significant step forward.
In addition, the DOD Comptroller appointed a Charge Card Task Force, which issued its final report on June 27, 2002. The report identified many of the control weaknesses we identified in this and previous reports and testimonies. In the report, the DOD Comptroller stated that this “…is an excellent first step in an on-going process to continually seek ways to improve charge card programs. We must continue to identify new ways of reducing the government’s cost of doing business while at the same time ensuring that we operate in a manner that preserves the public’s trust in our ability to provide proper stewardship of public funds.” The task force report included a number of recommendations, including establishing a purchase card concept of operations; accelerating the electronic certification and bill paying process; improving training materials; identifying best practices in areas such as span of control and purchase card management skill sets; and establishing more effective means of disciplining those who abuse the purchase cards. These recommendations address many of the concerns that we previously identified and give management at the Pacific Fleet, Atlantic Fleet, Naval Sea Systems Command (NAVSEA), and the Marine Corps the opportunity to take a proactive role in correcting control weaknesses and ensuring that the purchase card remains a valuable tool. Although the Navy significantly reduced the number of purchase cards since our July 30, 2001, testimony, it continued to have approving officials who were responsible for reviewing more cardholder statements than allowed by either DOD or Navy guidance, which limits the number of cardholders that an approving official should review to seven. The convenience of the purchase card must be balanced against the time and cost involved in the training, monitoring, and oversight of cardholders. It must also be balanced against the exposure of the Navy to the legally binding obligations incurred by those transactions. The proliferation of purchase cards and high cardholder-to-approving-official ratios increase the risks associated with the purchase card program. In response to the July 2001 hearing, DOD’s Director of Procurement instructed the directors of Defense agency procurement and contracting departments on August 13, 2001, to limit purchase cards to only those personnel who need to purchase goods and services as part of their jobs. As a result of this heightened concern, the Navy reduced the number of cardholders by more than half, from about 59,000 in June 2001 to about 28,000 by September 2001. In October 2001, the Navy followed up the initial reduction in cardholders with an interim change to the existing NAVSUP purchase card instructions that established minimum criteria for prospective purchase card holders. As shown in figure 2, the Navy continued to reduce the number of cardholders and was down to about 25,000 as of March 2002. Agency program coordinators at the commands we audited told us that the reduction was a result of (1) employee attrition and (2) cancellation of cards of individuals who no longer needed them. NAVSUP’s interim change limiting purchase cards also established a maximum ratio of seven cardholders to each approving official, and required that Navy and Marine Corps units establish local policies and procedures for approving purchase cards and for issuing them to activity personnel.
The Navy’s requirement of a maximum 7-to-1 ratio of cardholders to an approving official is consistent with guidance issued by the Department of Defense Purchase Card Joint Program Management Office on July 5, 2001, shortly before the congressional hearing. As shown in table 1, at the four locations we audited, the average ratio of cardholders to approving officials was well in line with the DOD and Navy limit of seven cardholders per approving official. This average, however, masks the wide range of ratios across units, including those that far exceeded the DOD and Navy prescribed ratio of cardholders to approving official. The problem of high cardholder-to-approving-official ratios remains especially acute at NAVSEA, which at some locations used one approving official to certify a single payment for all the unit’s cardholders. This resulted in approving officials certifying monthly bills that contained thousands of transactions and regularly exceeded $1 million a month. While total financial exposure as measured in terms of purchase card credit limits has decreased in the units we audited, as shown in figure 3, it continues to substantially exceed historical purchase card procurement needs. Limiting credit available to cardholders is a key factor in managing the purchase card program and in minimizing the government’s financial exposure. Therefore, to determine the maximum credit available we analyzed the credit limits available to both cardholders and approving officials. In August 2001, the Under Secretary of Defense for Acquisition, Technology, and Logistics sent a memorandum to the directors of all defense agencies stating that supervisors should set reasonable limits based on what each person needs to buy as part of his or her job and that every cardholder does not need to have the maximum transaction or monthly credit limit. Similarly, in October 2001, NAVSUP issued an interim change to the purchase card program instruction which requires agency program coordinators to monitor cardholder credit limits and ensure that the credit limits are appropriate for mission requirements. We concur with both the Under Secretary’s statement and NAVSUP’s interim change to the purchase card instructions, and continue to believe that limiting cardholder spending authority is an effective way of minimizing the federal government’s financial exposure. However, we have not seen adequate progress in this area at the locations that we audited. None of the units we visited tied either the cardholder’s or the approving official’s credit limit to the unit’s historical spending. Rather, they often established arbitrary credit limits of $10,000 to $25,000. In some instances, we found cardholders and approving officials who had credit limits that far exceeded historical spending needs. For example, as of September 2001, we identified over 60 cardholders with $9.9 million credit limits, and more than 2,300 approving officials with $9.9 million credit limits at the four commands we audited. As shown in table 2, the four commands that we audited had credit limits that clearly exceeded historical needs.
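Both exposure checks discussed above, span of control and credit limits against historical spending, lend themselves to simple data mining. The following rough sketch runs over invented records; the record layout and the 3x spending threshold are our own assumptions, not NAVSUP policy.

    # Illustrative flags for span-of-control and credit-limit exposure.
    from collections import Counter

    MAX_SPAN = 7  # DOD/Navy limit: cardholders per approving official

    cardholders = [  # (cardholder, approving_official, credit_limit, fy_spend)
        ("CH-01", "AO-1", 9_900_000, 42_000),
        ("CH-02", "AO-1", 25_000, 7_500),
        ("CH-03", "AO-2", 10_000, 9_000),
        # ... remaining records
    ]

    # Flag approving officials whose span exceeds the 7-to-1 limit.
    span = Counter(ao for _, ao, _, _ in cardholders)
    for ao, n in span.items():
        if n > MAX_SPAN:
            print(f"{ao}: {n} cardholders exceeds the {MAX_SPAN}-to-1 limit")

    # Flag credit limits far above historical spending (threshold is ours).
    for ch, _, limit, spend in cardholders:
        if limit > 3 * max(spend, 1):
            print(f"{ch}: limit ${limit:,} vs FY spend ${spend:,} -- review")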
Effective management of an organization's workforce, its human capital, is essential to achieving results and an important part of internal control… Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs.
GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Most of the units we audited did not have documented evidence that their purchase card holders had received the initial or supplemental training required by the Navy purchase card program guidance. Training is key to ensuring that the workforce has the skills necessary to achieve organizational goals. In accordance with NAVSUP Instruction 4200.94, all cardholders and approving officials must receive purchase card training. Specifically, NAVSUP Instruction 4200.94 requires that prior to the issuance of a purchase card account, the prospective cardholder and approving official must receive training regarding both Navy policies and procedures and local procedures. The instruction also requires all cardholders and approving officials to receive refresher training every 2 years. In response to the July 30, 2001, hearing, the Assistant Secretary of the Navy for Research, Development and Acquisition sent a message in August 2001 to all Navy units directing them to train all of their cardholders on or about September 12, 2001, concerning the proper use of the purchase cards. While acknowledging this need, the Navy does not have a database that would enable agency program coordinators to monitor training for cardholders and approving officials. Therefore, the Navy does not have a systematic means to determine whether NAVSUP Instruction 4200.94 or its directives are being carried out. As shown in table 3, the share of fiscal year 2001 transactions that were made by cardholders, or approved for payment by approving officials, for whom there was no documented evidence of initial or refresher training at the time of the transaction ranged from about 56 percent at the Marine Corps to about 87 percent at the Atlantic Fleet. Managers at all four locations told us that they require all cardholders to receive training prior to receiving their purchase cards. Not all managers were as confident that cardholders and approving officials received follow-up training. Without a centralized training database, it would be extremely difficult to track when each cardholder needed the required 2-year refresher training. Further, for training to be effective, it should be tailored to provide the knowledge needed for the different tasks in purchase card management. However, we found that, even though the functions performed by the agency program coordinators, approving officials, and cardholders are substantially different, the training curriculum for the three positions was identical. Neither NAVSUP nor the major commands had specific guidance or training concerning the role and responsibilities of agency program coordinators or approving officials.
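A centralized training database of the kind the Navy lacked need not be elaborate. A minimal sketch, with hypothetical names and dates, of the 2-year refresher check that NAVSUP Instruction 4200.94 implies:

    # Hypothetical training-currency check for the 2-year refresher requirement.
    from datetime import date, timedelta
    from typing import Optional

    REFRESHER_INTERVAL = timedelta(days=2 * 365)

    training_records = {                      # last documented training date
        "cardholder_a": date(2000, 3, 15),
        "cardholder_b": date(1999, 1, 10),    # refresher has lapsed
        "approving_official_c": None,         # no documented initial training
    }

    def overdue(last_trained: Optional[date], as_of: date) -> bool:
        """True if training was never documented or the refresher has lapsed."""
        return last_trained is None or as_of - last_trained > REFRESHER_INTERVAL

    as_of = date(2001, 9, 30)
    for person, trained in training_records.items():
        if overdue(trained, as_of):
            print(f"{person}: training missing or refresher older than 2 years")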
Agency internal control monitoring assesses the quality of performance over time. It does this by putting procedures in place to monitor internal control on an ongoing basis as a part of the process of carrying out its regular activities. It includes ensuring that managers and supervisors know their responsibilities for internal control and the need to make internal control monitoring part of their regular operating processes. Ongoing monitoring occurs during normal operations and includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties.
GAO's Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001)

We found evidence that the Pacific Fleet, Atlantic Fleet, Naval Sea Systems Command units, and the Marine Corps base that we audited conducted reviews of the fiscal year 2001 purchase card program. However, we did not find that these commands used the results of those reviews to resolve identified internal control weaknesses. Further, an August 2001 NAVSUP-mandated review of 12 months of purchase card transactions failed to identify the extent of potentially fraudulent, improper, and abusive or questionable transactions identified in either Naval Audit Service or GAO audits. NAVSUP Instruction 4200.94 calls for agency program coordinators to perform semiannual reviews of their units’ purchase card program, including the program’s adherence to internal operating procedures, applicable training requirements, micropurchase procedures, receipt and acceptance procedures, and statement certification and prompt payment procedures. These reviews are to serve as a basis for agency program coordinators to initiate appropriate action to improve the local program or correct problem areas. Throughout fiscal year 2001, the Navy purchase card instructions did not require that written reports on the results of internal reviews be submitted to either local management or a central Navy office for monitoring and oversight. As a result, the Navy did not have a consistent process for documenting the results of purchase card reviews, identifying systemic problems, or monitoring corrective actions to help provide assurance that the actions are effectively implemented. In October 2001, in response to our previous audit work, the Navy issued an interim change to NAVSUP Instruction 4200.94 that requires each command twice a year to summarize the results of monitoring in their subordinate commands and to forward each summary to NAVSUP. Although agency program coordinators and the Naval Audit Service have conducted periodic reviews of the purchase card program that showed cardholders and approving officials were not adhering to required control procedures, we found no evidence that the commands or units that we audited used the results of those reviews to improve the control environment or adherence to control procedures. The internal control weaknesses identified by agency program coordinators included (1) a lack of independent documentation that the Navy received items ordered, (2) accountable items not recorded in the property records, (3) inadequate documentation for transactions, and (4) split purchases. In addition, the Naval Audit Service issued a report dated May 29, 2002, that was critical of the controls that the Naval Sea Systems Command exercised over the purchase card transactions at eight locations. The Naval Audit Service report not only highlighted findings similar to those listed here, but also identified 265 questionable transactions for such items as gift certificates, clothing, watches, and rental cars. In contrast to the findings of the agency program coordinators and the Naval Audit Service, the four major commands reported relatively few, if any, inappropriate purchase card transactions when they conducted a self-assessment of transactions in response to a NAVSUP August 2001 directive.
In that directive, NAVSUP required that each Navy unit conduct a stand-down review of all purchase card transactions the unit made during the previous 12 months and report the results to NAVSUP by November 15, 2001. Based on the results of the reviews conducted by the units we audited, we question the design and performance of the review. Its results do not indicate a thorough and critical analysis of the nature and magnitude of the control weaknesses and of the extent to which fraudulent, improper, or abusive transactions were occurring during the period reviewed. As shown in table 4, the four major commands that we audited represented that they reviewed about 1,225,000 transactions but reported that they found only 1,355 purchases—about 0.1 percent of the transactions reviewed—were for personal use or for prohibited items, or were not bona fide mission requirements. In our statistical sample of 624 fiscal year 2001 transactions, we found 102 potentially fraudulent, improper, and abusive or questionable transactions—about 16 percent of the transactions audited. Furthermore, we found numerous examples of abusive and improper transactions (discussed in more detail in the following section of this report) as part of our data mining. In response to this issue, command-level agency program coordinators told us that they did not have sufficient time to perform their transaction reviews.

Effective management of an organization's workforce, its human capital, is essential to achieving results and an important part of internal control. Management should view human capital as an asset rather than a cost. Only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives and responsibilities is operational success possible.
GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

The Navy has not provided sufficient human capital resources to enable effective monitoring of purchases and to develop a robust oversight program. The three key positions for overseeing the program and monitoring purchases are the command-level agency program coordinator, the unit-level agency program coordinator, and the approving official. During the period of our review, none of the major command agency program coordinators we audited worked full time in that position. This is despite the fact that they were responsible for managing procurement programs that incurred between 227,000 and 380,000 transactions totaling from about $137 million to about $268 million annually. Further, these agency program coordinators were responsible for managing the procurement activities of cardholders who were located not only on the East and West Coasts of the United States but in some instances on other continents. In addition, these part-time major command coordinators generally had one or two staff in their immediate office—who were also assigned other responsibilities—to help monitor the program. Considering that the major command agency coordinators are responsible for procurement programs involving hundreds of thousands of transactions and hundreds of millions of dollars, as shown in table 5, the human capital resources at the major command level are inadequate.
We also found that the major commands we audited did not provide the subordinate-level agency program coordinators and approving officials with the time, training, tools, or incentives—also human capital resources—needed to perform the monitoring responsibilities necessary for the operational success of the program. Rather, the responsibilities of approving officials and many subordinate-level agency program coordinators fell into the category of “other duties as assigned,” with minimal time, training, or tools to carry out these responsibilities. Further, we found that approving officials and most agency program coordinators generally had other duties of higher priority than monitoring purchases and reviewing cardholders’ statements. This was especially true for approving officials, some of whom were engineers and computer technicians, whose annual ratings generally did not cover their approving official duties. One subordinate-level agency program coordinator told us that she knows that some approving officials do not review the cardholder statements because (1) some cardholders make thousands of purchases in a month and (2) the approving officials have other responsibilities. Another agency program coordinator told us that some agency program coordinators and approving officials fear that questioning certain purchases could be career-limiting decisions. Further, neither the Navy nor the major commands have established a position description, an adequate statement of duties, or other information on the scope, duties, or specific responsibilities for subordinate-level agency program coordinators and approving officials.

Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives.
GAO's Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999)

Basic internal controls over the purchase card program were ineffective at the units in the major commands we audited during fiscal year 2001, primarily because prescribed procedures were not effectively implemented. Based on our tests of statistical samples of purchase card transactions, we determined that key transaction-level controls were ineffective, rendering the purchase card transactions at the units we audited vulnerable to fraudulent and abusive purchases and to the theft and misuse of government property. The problems we found primarily resulted from inadequate guidance and a lack of adherence to valid policies and procedures. The specific controls that we tested were (1) screening for required vendors, (2) documenting independent receipt and acceptance of goods and services, (3) documenting cardholder reconciliation and approving official review prior to certifying the monthly purchase card statement for payment, and (4) recording pilferable property in accountable records. As shown in table 6, the failure rates for the first three attributes we tested ranged from 58 percent to 98 percent; both extremes occurred at the Atlantic Fleet units in Norfolk, for documenting independent receipt and acceptance of items obtained with a purchase card and for reviewing cardholder statements prior to certifying them for payment, respectively. Most transactions in our statistical sample did not contain pilferable property. Thus, we are not projecting the results of that test to the population of transactions that we tested at those units.
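Failure rates like those in table 6 come from attribute samples, so they carry sampling error. The sketch below shows a plain normal-approximation estimate and interval; it is illustrative only, since the report does not describe GAO's actual projection method, and the counts mirror the 624-transaction sample discussed earlier purely as an example.

    # Illustrative attribute-sampling estimate of a control-failure rate.
    import math

    def failure_rate_ci(failures: int, sample_size: int, z: float = 1.96):
        """Point estimate and approximate 95% confidence interval."""
        p = failures / sample_size
        half_width = z * math.sqrt(p * (1 - p) / sample_size)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    p, lo, hi = failure_rate_ci(failures=102, sample_size=624)
    print(f"failure rate {p:.1%}, roughly {lo:.1%} to {hi:.1%}")
    # failure rate 16.3%, roughly 13.4% to 19.2%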
Despite DOD and Navy requirements to give priority to certain required vendors, we found that the failure rate for documenting the necessary screening of purchases ranged from about 70 percent at the Pacific Fleet to about 90 percent at NAVSEA. Because of the units’ failure to document screening for statutory vendors, the Navy and Marine Corps do not know the extent to which cardholders failed to acquire items from these required vendors. The Navy’s purchase card instructions require that prior to using the purchase card, cardholders must document that they have screened all their intended purchase card acquisitions for availability from statutory sources of supply. These sources of supply include vendors qualifying under the Javits-Wagner-O’Day Act (JWOD), Federal Prison Industries, and DOD’s Document Automation and Production Service (DAPS). JWOD vendors are nonprofit agencies that employ people who are blind or have other severe disabilities. JWOD vendors primarily sell office supplies and calendars, which often cost less than items sold by commercial vendors. In a June 2001 letter to all procurement officials, DOD’s Director of Procurement reminded cardholders of the need to purchase listed items from JWOD sources unless they have a specific waiver. Federal Prison Industries employs and provides skills training to inmates of federal prisons. It sells a wide variety of products including textiles, electronics, industrial products, and office furniture. Finally, DAPS is responsible for document automation and printing within DOD, encompassing electronic conversion, retrieval, and output and distribution of digital and hardcopy information. We cannot determine the precise amount spent on purchases that were not made from required vendors; however, as shown in table 7, our analysis of fiscal year 2001 vendor activity showed that the units we audited spent about $235,000 with five vendors (Franklin Covey, Kinko’s, PIP Printing, Kwik Kopy, and Sir Speedy) that sold items or services that are also sold by required vendors. Further, some of the items purchased at Franklin Covey were personal items that are considered to be abusive purchases. We performed a similar vendor analysis of the fiscal year 2001 Navy-wide purchase card activity and found that during fiscal year 2001, the Navy spent about $1.6 million with those five vendors. Due to the diverse nature of items sold by Federal Prison Industries, we did not attempt to identify vendors that sell similar products. We found that NAVSUP and some units provided cardholders with examples of how to document the screening process; however, cardholders failed to use the NAVSUP-suggested purchase log or complete local purchase request forms containing a section to document screening for required sources of supply. For example, the NAVSUP sample purchase card log included in NAVSUP Instruction 4200.94 contains a column for the cardholder to document whether or not he or she screened the items purchased for availability from statutory sources of supply. However, we found that the suggested purchase card log was often not used, or if used, many cardholders did not complete that column.
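The vendor analysis described above amounts to matching transactions against a watch list of commercial vendors whose goods are also available from statutory sources. A rough sketch follows; the five vendor names come from the report, while the transaction records and amounts are invented for illustration.

    # Illustrative vendor screen against commercial sellers of JWOD-type items.
    WATCH_LIST = {"franklin covey", "kinko's", "pip printing",
                  "kwik kopy", "sir speedy"}

    transactions = [  # (cardholder, vendor, amount)
        ("CH-01", "Franklin Covey", 118.40),
        ("CH-02", "Office Depot", 86.10),
        ("CH-03", "Sir Speedy", 412.00),
    ]

    flagged = [t for t in transactions if t[1].lower() in WATCH_LIST]
    for ch, vendor, amount in flagged:
        print(f"{ch}: ${amount:,.2f} at {vendor} -- screen against required sources")
    print(f"total potentially divertible spending: "
          f"${sum(a for _, _, a in flagged):,.2f}")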
"Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for ... handling any related assets. Simply put, no one individual should control all the key aspects of a transaction or event." (GAO's Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, November 1999)

The units we audited generally did not have evidence documenting that someone independent of the cardholder received and accepted items ordered and paid for with a purchase card, as required by NAVSUP Instruction 4200.94. That is, the units generally did not have a receipt, invoice, or packing slip for the acquired goods and services that was signed and dated by someone other than the cardholder. As a result, there is no documented evidence that the government received the items purchased or that those items were not lost, stolen, or misused. Some units have developed systems using ink stamps with fields to be completed to document receipt and acceptance; however, these systems have not been implemented effectively. As shown in table 6, we estimated that about 58 percent to 67 percent of the units' fiscal year 2001 transactions did not have documented evidence of independent receipt and acceptance of goods and services acquired with the purchase card. While some of the items for which these units did not have independently documented receipts were consumable office supplies, other items that failed this key internal control test included laptop computers, digital cameras, and personal digital assistants, which could be subject to theft or misuse.

"Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of assuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into." (GAO's Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, November 1999)

"Control activities ensure that only valid transactions … are initiated or entered into …. Control activities are established to ensure that all transactions … that are entered into are authorized and executed only by employees acting within the scope of their authority." (GAO's Internal Control Standards: Internal Control Management and Evaluation Tool, GAO-01-1008G, August 2001)

We found little evidence of cardholder reconciliation or approving official reviews to confirm that cardholders had reconciled the monthly statement of purchase card transactions back to the supporting documents throughout fiscal year 2001. All levels of the purchase card program recognize effective cardholder reconciliation and approving official review of the monthly statement as a key control activity. DOD's Purchase Card Joint Program Management Office, the Navy, command procedures, and the units' operating procedures recognize that cardholder reconciliation and approving official review are central to ensuring that purchase card transactions are appropriate. Under 31 U.S.C. 3325 and DOD's Financial Management Regulation, disbursements are required to be made on the basis of a voucher certified by an authorized agency official. The certifying official is responsible for ensuring (1) the adequacy of supporting documentation, (2) the accuracy of payment calculations, and (3) the legality of the proposed payment under the appropriation or fund charged. The certification function is a preventive control that requires certifying officers to maintain proper controls over public funds. It also helps prevent fraudulent and improper payments, including unsupported or prohibited transactions, split purchases, and duplicate payments.
Further, section 2784 of title 10, United States Code, requires the Secretary of Defense to prescribe regulations that ensure that each purchase card holder and approving official is responsible for reconciling charges on a billing statement with receipts and other supporting documentation before certification of the monthly bill. Consistent with these requirements, Navy purchase card guidance calls for cardholders to reconcile the monthly purchase card statements to supporting records. It calls for approving officials to ensure that all cardholder purchases were appropriate and all charges were accurate, and to resolve all questionable purchases with the cardholder. According to NAVSUP Instruction 4200.94, after the approving official reviews the monthly bill, the approving official will certify it for payment. Because certification is necessary for payment, it is likely to occur whether or not cardholders and approving officials have performed required reconciliations and reviews. Thus, when we tested whether the cardholder reconciled the monthly statement and whether the approving official reviewed the monthly statement, we did not simply look for a physical or electronic signature on a form. Rather, for this test we considered that proper reconciliation and review occurred if:

- the cardholder signed and dated the monthly bill before it was paid, and the monthly bill contained any markings or notes linking the amounts billed to a credit card receipt, invoice, packing slip, or a purchase log; and
- the approving official's review of the cardholder's monthly statement was signed and dated prior to certification for payment, and there were any markings or notes on the monthly statement evidencing that review.

Our testing revealed that documented evidence of adequate cardholder reconciliation or approving official review of cardholder transactions did not exist for most of our sample transactions. Examples of inadequate documentation included missing statements, invoices, signatures, or dates, or a lack of evidence of cardholder reconciliation or approving official review. Without such evidence, we—and the program coordinators, who are required to semiannually review approving official records—cannot determine whether officials are complying with review requirements. As shown in table 6, the failure rate for this internal control activity at the units in the four commands audited was among the highest of the controls we tested. The failure rates for this attribute were similar to those we reported in our previous testimony related to two San Diego-based Navy units. The Navy agreed with our initial recommendations concerning the need to clarify the payment certification portion of the purchase card instruction. Based on this audit's broader review of the Navy's purchase card program, we believe that the high failure rate may also be attributable to the fact that approving official and cardholder responsibilities fall into the category of "other duties as assigned," without any specific time allocated for their performance, as discussed previously. Further, cardholders and approving officials are not necessarily in the same geographic location. Consequently, while an approving official might be able to review cardholder transactions electronically, the approving official will not necessarily be able to review the documentation supporting the transaction.
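The two-part test above is mechanical enough to express directly in code. The sketch below is a simplified restatement of the audit attributes using hypothetical field names, not a reconstruction of any Navy or GAO system.

```python
# A statement "passes" only if the cardholder signed and dated it before
# payment with markings linking charges to support, and the approving
# official signed and dated the review before certification.
from dataclasses import dataclass
from datetime import date

@dataclass
class MonthlyStatement:
    cardholder_signed: date | None   # cardholder signature date, if any
    linking_marks_present: bool      # notes tying charges to receipts or logs
    ao_signed: date | None           # approving official signature date
    review_marks_present: bool       # evidence the official actually reviewed
    certified: date | None           # certification date
    paid: date | None                # payment date

def reconciliation_ok(s: MonthlyStatement) -> bool:
    return (s.cardholder_signed is not None and s.paid is not None
            and s.cardholder_signed <= s.paid and s.linking_marks_present)

def review_ok(s: MonthlyStatement) -> bool:
    return (s.ao_signed is not None and s.certified is not None
            and s.ao_signed <= s.certified and s.review_marks_present)

s = MonthlyStatement(cardholder_signed=None, linking_marks_present=False,
                     ao_signed=date(2001, 6, 28), review_marks_present=False,
                     certified=date(2001, 6, 28), paid=date(2001, 7, 2))
print("reconciliation:", "pass" if reconciliation_ok(s) else "fail")
print("approving official review:", "pass" if review_ok(s) else "fail")
```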
Approving officials and cardholders told us they had many duties of a higher priority than reviewing the monthly purchase card statements. A large workload, especially when review duties fall under "other duties as assigned," and the geographic separation of cardholders and approving officials can lead to less attention than expected or desired. For example, one NAVSEA approving official's ability to promptly and accurately review cardholders' monthly statements was hampered because (1) the approving official was responsible for reviewing the statements of nearly 400 cardholders and regularly certified for payment monthly statements exceeding $1 million and (2) the approving official, who was located in Rhode Island, was responsible for reviewing the statements of cardholders located not only in Rhode Island but also in Virginia, Washington, and Florida. We identified numerous instances of purchases that had not been adequately reviewed and reconciled to the monthly statements, but in which the statements were, nonetheless, certified for payment. Such practices allow potentially fraudulent, improper, and abusive or questionable purchases (discussed in more detail in the following section of this report) to go undetected. Also, mistaken or other improper charges by vendors might not be detected. The following are examples of such charges that we identified: At Camp Lejeune, we found 29 transactions totaling over $50,000 for which the Marine Corps was unable to provide any supporting documentation concerning what was purchased or whether the items purchased had a legitimate government use. The vendors that the Marine Corps paid without adequate supporting documentation included Internet vendors, rental car companies, gift stores, and a stereo store. Considering that Camp Lejeune did not have documentation that cardholders and approving officials routinely reconciled or reviewed the monthly statements prior to payment, neither the Marine Corps nor we can determine whether these accounts had been compromised and someone was using them to fraudulently obtain goods or services at the government's expense. Navy purchase card instructions require cardholders to retain documentation received from the vendor, such as a sales slip or cash register receipt, to verify the accuracy of the charges made. The purpose of maintaining this documentation is to provide an audit trail that supports each decision to use the card and any required special approvals. In December 2000, NAVSEA paid a hotel $12,200 even though neither the cardholder nor the approving official had any evidence concerning how the hotel arrived at that amount. When we questioned the cardholder concerning the charge, he gave us a written statement that the transaction was for the rental of a conference room and audiovisual equipment. The statement also said that he did not authorize the purchase of any food. However, a copy of the bill we obtained from the hotel showed that the Navy paid $8,260 for food.
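Workload imbalances like that of the Rhode Island approving official discussed above are straightforward to surface from account data. A minimal sketch follows; the span-of-control limit of 7 cardholders and the dollar alert threshold are placeholders, since Navy and DOD policy set the actual maximums.

```python
# Flag approving officials whose review workload exceeds a command-set span
# of control or whose certified monthly totals are unusually large.
from collections import defaultdict

MAX_CARDHOLDERS_PER_AO = 7          # placeholder policy limit
MONTHLY_DOLLAR_ALERT = 1_000_000    # placeholder alert threshold

assignments = [                     # (approving_official, cardholder) pairs
    ("AO Rhode Island", f"cardholder {n}") for n in range(395)
]
monthly_certified = {"AO Rhode Island": 1_150_000.00}

by_ao = defaultdict(set)
for ao, cardholder in assignments:
    by_ao[ao].add(cardholder)

for ao, cardholders in by_ao.items():
    if len(cardholders) > MAX_CARDHOLDERS_PER_AO:
        print(f"{ao}: {len(cardholders)} cardholders exceeds the limit")
    if monthly_certified.get(ao, 0) > MONTHLY_DOLLAR_ALERT:
        print(f"{ao}: certified ${monthly_certified[ao]:,.2f} this month")
```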
"An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records." (GAO's Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, November 1999)

We found that accountable items acquired with purchase cards often were not recorded in the property records of the units we audited. In addition, officials at three of the four major commands could not locate some of the property items included in our statistical samples. While some or all of the items might, in fact, be at the installation, officials could not provide conclusive evidence that they were in the possession of the government. Unrecorded property and items that cannot be located indicate a weak control environment and problems in the property management system. Consistent with GAO's internal control standards, DOD's Property, Plant and Equipment Accountability Directive and Manual, which was issued in draft for implementation on January 19, 2000, requires accountable property to be recorded in property records as it is acquired. Accountable property includes items that can be easily pilfered, such as computers and related equipment, and cameras. Entering such items in the property records is an important step to help ensure accountability and financial control over these assets and, along with periodic inventory, to deter theft or improper use of government property. Table 8 contains the results of our review of property management records and inspection of accountable property. One example of the Navy's failure to record pilferable property in property management records involved Atlantic Fleet transactions with a computer vendor, GTSI. On September 30, 2000, the Navy contracted with GTSI to purchase 430 computers, 213 flat panel monitors, and other computer hardware and software using the GSA Multiple Award Schedule pricing. GTSI shipped the computers, monitors, and equipment to the Atlantic Fleet warehouse in November and December 2000, and the Atlantic Fleet paid GTSI about $757,000 for those items in January 2001. While the Atlantic Fleet's documents concerning these two transactions show that an employee at the warehouse signed as receiving the computers, the Atlantic Fleet did not record the serial numbers of the computers or the monitors and did not record the computers or monitors in any type of property accountability system. After we contacted GTSI and obtained the serial numbers, we were able to determine that between January 2001 and January 2002, the Atlantic Fleet shipped 243 of the computers and 126 flat panel monitors to land- and sea-based Atlantic Fleet users. However, the Atlantic Fleet could not provide us with adequate evidence confirming the location of the 187 remaining computers and 87 flat panel monitors. Effectively managing accountable property has long been a problem area, and the use of the purchase card has added further difficulties. With about 25,000 Navy cardholders, the number of people buying accountable property has greatly expanded as the purchase card program has grown. Cardholders are responsible for reporting on the accountable property they buy—so that it can be recorded in the unit's accountable property books—but they often do not. As we previously reported, on August 1, 2001, the Navy modified its policy concerning pilferable property by changing the definition of what it considered pilferable property. This change in the definition has contributed to the lack of accountability over such property.
Unlike the previous policy, which specifically defined pilferable items, the new policy provides commanding officers with latitude in determining what is and what is not pilferable. The new policy defines a pilferable item as one—regardless of cost—that is portable, can be easily converted to personal use, is critical to the activity's business/mission, and is hard to repair or replace. Citing the "hard to repair or replace" criterion in the new policy, some unit commanders told us they have determined that only desktop and laptop computers would be considered pilferable items. Thus, these units do not maintain accountability over numerous pilferable items, such as digital cameras and personal digital assistants (PDAs), leaving them vulnerable to possible theft, misuse, or transfer to personal use. Not all unit commanders took this position, however; some continued to maintain accountability over items that were considered pilferable under the previous policy. We identified numerous purchases at the installations we audited and through our Navy-wide data mining that were potentially fraudulent, improper, and abusive or questionable. As discussed in appendix II, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, and abusive or otherwise questionable transactions. However, considering the control weaknesses identified at each unit audited, it is not surprising that these transactions were not detected or prevented. In addition, the existence of similar improper and abusive or questionable transactions in our Navy-wide data mining of selected transactions provides further indication that a weak control environment and ineffective specific controls exist throughout the Navy. In addition, appendix IV contains an update on two fraud investigations involving Navy units based in San Diego that we discussed in our March 2002 testimony. We considered potentially fraudulent purchases to include purchases by cardholders that were unauthorized and intended for personal use. Potentially fraudulent purchases can also result from compromised accounts, in which a purchase card or account number is stolen and used to make a potentially fraudulent purchase. Potentially fraudulent transactions can also involve vendors charging purchase cards for items that cardholders did not buy. The Navy and the major commands we audited had policies and procedures that were designed to prevent and detect potentially fraudulent purchases. For example, as discussed previously, approving officials are required to review the supporting documentation for each transaction for legality and proper government use of funds. However, our testing showed that these control activities had not been implemented as intended. Although collusion can circumvent what otherwise might be effective internal control activities, a robust system of guidance, internal control activities, and oversight can create a control environment that provides reasonable assurance of preventing or quickly detecting fraud, including collusion. However, in auditing the Navy's internal control at units assigned to four major commands during fiscal year 2001, we did not find that the processes and activities were operating in a manner that provided such assurance. The Navy does not have an automated system that identifies, captures, and reports key information on potentially fraudulent purchases that have been identified or are being investigated within the purchase card program.
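To illustrate the kind of tracking capability the Navy lacks, the sketch below outlines a minimal fraud-case registry that would support breakdowns by fraud type and dollar amount. The fields, case data, and summaries are hypothetical, not a design for an actual NCIS or Navy system.

```python
# Minimal registry of purchase card fraud cases with summaries by type and
# dollar amount. All data shown are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FraudCase:
    case_id: str
    command: str
    fraud_type: str        # e.g., "unauthorized purchase", "vendor collusion"
    amount_at_issue: float
    status: str            # e.g., "under investigation", "closed"

cases = [
    FraudCase("2001-001", "Atlantic Fleet", "unauthorized purchase",
              250_000.00, "court-martial pending"),
    FraudCase("2001-002", "Atlantic Fleet", "vendor collusion",
              89_000.00, "closed"),
]

by_type = Counter(c.fraud_type for c in cases)
for fraud_type, count in by_type.items():
    print(f"{fraud_type}: {count} case(s)")
print(f"total at issue: ${sum(c.amount_at_issue for c in cases):,.2f}")
```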
Table 9 identifies instances of fraudulent and potentially fraudulent transactions that we found at the commands we audited and through inquiries with the Naval Criminal Investigative Service (NCIS). All of the purchases that we discuss in this section were included in monthly cardholder statements that were certified and paid by the Navy. The following examples of fraud are illustrative of the cases in table 9:

- An approving official's failure to review a cardholder's statements promptly contributed to an Atlantic Fleet cardholder making over $250,000 in unauthorized purchases between September 2000 and July 2001. In July 2001, when a command supply official began reviewing the cardholder's monthly statements, he noticed that over $80,000 of those charges were unsupported. Included in those unsupported charges were numerous transactions with suspicious vendors. After command supply officials asked the cardholder about the unsupported purchases, the cardholder admitted to making thousands of dollars of illegal Internet purchases and illegally purchasing EZ Pass prepaid toll tags, expensive remote control helicopters, and a dog. The Navy decided to prosecute the cardholder, and a court-martial is pending.
- An approving official's failure to review a cardholder's statements and the cardholder's failure to keep evidence of what was purchased contributed to an Atlantic Fleet cardholder fraudulently using his purchase card from January 2000 through October 2000 to purchase an estimated $150,000 in automotive, building, and home improvement supplies. The cardholder sold some of the items to generate cash. According to Navy investigators, the cardholder destroyed many of the requisitions, receipts, and purchase logs for the stolen items in an attempt to cover up his actions. In addition, according to Navy criminal investigators, if the monthly purchase card billing statements had been properly reviewed, the cardholder's fraudulent activities would have been exposed. In exchange for the cardholder's guilty plea to multiple counts of larceny and other criminal violations, his jail time was reduced to 24 months.
- An approving official's failure to adequately review cardholders' statements contributed to two Atlantic Fleet cardholders conspiring with at least seven vendors to submit about $89,000 in fictitious and inflated invoices. The cardholders had the vendors ship supply items to an Atlantic Fleet warehouse and the personal items directly to their residences. The cardholders also had vendors inflate the price and/or quantity of items purchased. According to Navy investigators, the cardholders would sell, use, and barter the illegally obtained items, while the vendor sales representatives received inflated sales commissions and an estimated $3,000 to $5,000 in Navy property that was given to them as bribes. One vendor sales representative who admitted to conspiring to supply false invoices said that he could not get sufficient business until he altered his invoices like the other vendors. According to the caller who informed NCIS of the illegal activity, it was common knowledge that the cardholders were getting kickbacks because of their positions as Navy buyers. Based on the results of the NCIS investigations, one of the cardholders received 24 months' confinement and a bad-conduct discharge, while the other received a 60-day restriction and a reduction in rank.
In another case of potential fraud, we found that in March 1999 the Navy inappropriately issued five government purchase cards to individuals who did not work for the government. The individuals who received the Navy purchase cards worked for a consulting company that occasionally provided services to the Navy. NAVSUP Instruction 4200.94 limits the Navy purchase card to authorized government personnel in support of official government purchases. Between March 1999 and November 2001, these individuals used the Navy purchase cards to make purchases totaling about $230,000 with vendors including airlines, hotels, rental car companies, gas stations, restaurants, a florist, and golf courses. We discovered these charges in November 2001 as part of our data mining for suspicious transactions at the Pacific Fleet. Within a week of our inquiries to the Pacific Fleet concerning the charges on these accounts, the Pacific Fleet agency program coordinator instructed Citibank to (1) immediately deactivate the accounts and (2) close the accounts once the balances were paid. While the consulting company ultimately paid Citibank for all charges made with those cards, the company was 30 days past due on the account 28 times during the 38 months that the accounts were open. Further, the Navy was contractually liable for all purchases made with the cards and would have been responsible for payment if the consulting company had failed to pay. The risk to the Navy was real because, when the Navy had Citibank deactivate the accounts in November 2001, the company, which still owed $8,600, threatened to withhold payment unless the Navy reopened the accounts. In addition, the consulting company contacted Citibank directly and tried to assume control of the accounts by claiming the company had "spun off from the Navy." While the consulting company did eventually pay Citibank, it was not until March 2002—4 months after the accounts were deactivated. Our Office of Special Investigations researched some of the charges and found that, by using a Navy purchase card, the consulting company avoided paying state sales taxes and obtained discounts at airlines and hotels that are typically offered only to the federal government. The airline discounts are particularly advantageous because airlines offer the federal government significantly discounted tickets that are not encumbered with the penalties and limitations imposed on private sector companies and the general public. Finally, Citibank does not post an interest charge on past-due accounts. Thus, by using the Navy purchase card, the company avoided paying interest on the past-due accounts. Based on the results of our work, we referred this case to NCIS for further investigation. We attempted to obtain examples of other potentially fraudulent activity in the Navy purchase card program from NCIS in Washington, D.C. NCIS investigators acknowledged that they have investigated a number of purchase card fraud cases; however, their investigation database does not permit a breakdown of fraud cases by type, such as purchase card fraud. Purchase card program officials and NCIS officials said that they had no information on the total number of purchase card fraud investigation cases throughout the Navy that had been completed or were ongoing.
Based on our identification of a number of fraudulent and potentially fraudulent cases at the installations that we audited, we believe that the number of cases involving fraudulent and potentially fraudulent transactions could be significant. Without such data, the Navy does not know the significance, in numbers or dollar amounts, of fraud cases that have been or are being investigated and is hampered in taking corrective actions to prevent such cases in the future. Our audit work at the four commands and our Navy-wide data mining identified numerous examples of improper transactions. Improper transactions are purchases that, although approved by Navy officials and intended for government use, are not permitted by law, regulation, or DOD policy. We identified the following three types of improper purchases:

- purchases that do not serve an authorized government purpose;
- split purchases, in which the cardholder circumvents single-purchase limits; and
- purchases from improper sources, as previously discussed.

The Federal Acquisition Regulation prohibits splitting purchase requirements into more than one transaction to avoid the need to obtain competition on purchases over the $2,500 micropurchase threshold. Cardholders also split purchases to circumvent higher single-transaction limits for payments on contracts exceeding the micropurchase threshold. With respect to improper sources, various federal laws and regulations require procurement officials to acquire certain products from designated sources such as JWOD vendors. The JWOD program is a mandatory source of supply for all federal entities. It generates jobs and training for Americans who are blind or have other severe disabilities by requiring federal agencies to purchase supplies and services furnished by nonprofit agencies—such as the National Industries for the Blind and the National Industries for the Severely Handicapped—that employ such individuals. The improper transactions that resulted from purchasing items from nonstatutory sources were discussed previously in the section on adherence to control procedures. We believe that if the Navy better monitored the vendors with which its cardholders conducted business, it could minimize the number of improper purchases. Such monitoring could also provide the Navy the opportunity to leverage its purchase volume and negotiate discounts with frequently used vendors. We found several instances in which cardholders purchased goods, such as clothing, that were not authorized by law or regulation. The Federal Acquisition Regulation, 48 C.F.R. 13.301(a), provides that the governmentwide commercial purchase card may be used only for purchases that are otherwise authorized by law or regulations. Therefore, a procurement using the purchase card is lawful only if it would be lawful using conventional procurement methods. Under 31 U.S.C. 1301(a), "Appropriations shall be applied only to the objects for which the appropriations were made…" In the absence of specific statutory authority, appropriated funds may only be used to purchase items for official purposes, and may not be used to acquire items for the personal benefit of a government employee. Improper transactions, as shown in table 10, were identified as part of our review of fiscal year 2001 transactions and related activity at the four commands and as part of our Navy-wide data mining of transactions with questionable vendors. The following examples of improper transactions are illustrative of the type of cases included in table 10.
We identified a Pacific Fleet cardholder who used the purchase card in January 2001 to buy a $199 leather flight jacket as a personal gift for an official visitor. Secretary of the Navy (SECNAV) Instruction 7042.7J specifically identifies flight jackets as a prohibited personal gift to a visitor. In November 2001, when we questioned the deputy commander concerning the flight jacket, he told us that the purpose of the gift was to recognize the individual's contributions to the Navy's San Diego installations. The deputy commander subsequently told us that the personnel involved with the gift were counseled and that he, the deputy commander, had reimbursed the Navy for the jacket in January 2002. We identified purchases of clothing by NAVSEA that should not have been made with appropriated funds. Generally, agencies may not use appropriated funds to purchase clothing for civilian employees. One exception is 5 U.S.C. 7903, which authorizes agencies to purchase protective clothing for employee use if the agency can show that (1) the item is special and not part of the ordinary furnishings that an employee is expected to supply, (2) the item is essential for the safe and successful accomplishment of the agency's mission, not solely for the employee's protection, and (3) the employee is engaged in hazardous duty. Further, according to a Comptroller General decision dated March 6, 1984, clothing purchased pursuant to this statute is property of the U.S. government and must be used only for official government business. Thus, except in rare circumstances in which a purchase meets these stringent requirements, clothing is usually considered a personal item for which appropriated funds should not be used. In one transaction, a NAVSEA cardholder purchased polo shirts and other gifts for a "Bring-Your-Child-to-Work Day" at a total cost of about $1,600. In another example of clothing for personal use from our Navy-wide data mining, several charges for amounts from $70 to $230 were identified at Hecht's and Nordstrom. We were informed that these were purchases of civilian clothes—slacks, shirts, and suits—for enlisted personnel who were serving in an official capacity as assistants to admirals and general officers, and to wear when playing in a jazz band. The Director, Purchase Card Unit, Defense Contracting Command Washington, informed us that this appears to be a fairly widespread practice. Clothing needs of military personnel are covered by the clothing allowances that they receive. As part of our data mining of Navy-wide purchase card transactions, we identified two purchases in which cardholders purchased Bose headsets at $300 each. The headsets were for personal use—listening to music while taking commercial flights—and, therefore, should not have been purchased with the Navy purchase card. At NAVSEA, we identified charges to hotels in Newport News and Portsmouth, Virginia, totaling about $8,000 for locally based NAVSEA employees to attend meetings at which they were inappropriately provided meals and refreshments at the government's expense. The cardholders told us that they authorized the hotels to bill for audiovisual equipment and conference room rental. The cardholders said the hotels were not authorized to bill for food. However, despite the cardholders' assertion, the detailed bills showed that the hotels charged NAVSEA about $7,000 for meals including breakfasts, lunches, and snacks. Pursuant to 31 U.S.C. 1301(a), "Appropriations shall be applied only to the objects for which the appropriations were made . . . ."
In the absence of specific statutory authority, appropriated funds may only be used to purchase items for official purposes, and may not be used to acquire items for the personal benefit of a government employee. For example, without statutory authority, appropriated funds may not be used to furnish meals or refreshments to employees within their normal duty stations. Free food and other refreshments normally cannot be justified as a necessary expense of an agency's appropriation because these items are considered personal expenses that federal employees should pay for from their own salaries. Three of the four commands audited paid improper and abusive phone charges. For example, in June 2001, the Atlantic Fleet paid $1,175 for monthly service charges for 22 phones. We determined that some cell phone calls were long-distance toll calls that were not for legitimate government business. The Navy's and the Atlantic Fleet's command-level procedures prohibit the use of cell phones for other than officially approved purposes. In addition, even though Atlantic Fleet guidance requires subordinate units to verify monthly cell phone usage, the units were not reviewing the monthly bills as required. Our audit of the calls made using the cell phones determined that some were to personal residences—not military facilities or merchants supplying goods and services to the Navy. In addition, we found wasteful charges for cell phones. For example, the Navy paid for 13 months of service at $15 per month for a cell phone that had been returned to the vendor. It was not until we inquired about the lack of use on the phone that the Navy realized it was paying for a phone that had been returned over 1 year earlier. Another category of improper transaction is the split purchase, which occurs when a cardholder splits a transaction into segments to avoid the requirement to obtain competition for purchases over the $2,500 micropurchase threshold or to avoid other established credit limits. The Federal Acquisition Regulation prohibits splitting a purchase into more than one transaction to avoid the requirement to obtain competition for purchases over the $2,500 micropurchase threshold. Navy purchase card instructions also prohibit splitting purchases to avoid other established credit limits. Once items exceed the $2,500 threshold, they are to be purchased through a contract in accordance with simplified acquisition procedures that are more stringent than those for micropurchases. Our analysis of data on purchases at the four major commands we audited and our data mining efforts identified numerous occurrences of potential split purchases. In addition, internal auditors at all four commands that we audited identified split purchases as a continuing problem. In some of these instances, the cardholder's purchases exceeded the $2,500 limit, and the cardholder split the purchase into 2 or more transactions of $2,500 or less. For example, a Camp Lejeune cardholder made 8 transactions totaling about $17,000 on 1 day to purchase combat boots. In addition, a NAVSEA cardholder made 14 purchases totaling over $30,000 in 1 day from an electronic supply store. All the commands that we audited said that cardholders splitting purchases to circumvent the micropurchase threshold was a problem.
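The split-purchase pattern described above is well suited to simple data mining over billing data: group same-day charges by cardholder and vendor and flag groups whose individual charges sit at or below the threshold while the total exceeds it. A minimal sketch with hypothetical transaction data follows.

```python
# Flag potential split purchases: multiple same-day charges by one cardholder
# at one vendor, each at or under the micropurchase threshold, that together
# exceed it. Transaction data are invented for illustration.
from collections import defaultdict
from datetime import date

MICROPURCHASE_LIMIT = 2_500.00

transactions = [  # (cardholder, vendor, date, amount)
    ("cardholder A", "boot supplier", date(2001, 3, 12), 2_200.00),
    ("cardholder A", "boot supplier", date(2001, 3, 12), 2_150.00),
    ("cardholder A", "boot supplier", date(2001, 3, 12), 2_400.00),
    ("cardholder B", "office store", date(2001, 3, 12), 300.00),
]

groups = defaultdict(list)
for cardholder, vendor, day, amount in transactions:
    groups[(cardholder, vendor, day)].append(amount)

for (cardholder, vendor, day), amounts in groups.items():
    if (len(amounts) > 1 and sum(amounts) > MICROPURCHASE_LIMIT
            and all(a <= MICROPURCHASE_LIMIT for a in amounts)):
        print(f"possible split purchase: {cardholder} at {vendor} on {day}, "
              f"{len(amounts)} charges totaling ${sum(amounts):,.2f}")
```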
As we previously reported, by circumventing the competitive requirements of the simplified acquisition procedures, the commands may not be getting the best prices possible for the government. For the Navy to reduce split transactions, it will need to monitor the vendors with whom cardholders are conducting business. The Navy has not proactively managed the purchase card program to identify opportunities for savings. Purchase card sales volume has grown significantly over the last few years, with the Navy now using the purchase card to procure nearly $2 billion a year in goods and services. We believe that the Navy could better leverage its volume of purchases and negotiate discounts with frequently used vendors. For example, during fiscal year 2001, the Navy paid over $1 million each to 122 different vendors using the purchase card. In total during fiscal year 2001, the Navy paid those 122 vendors about $330 million. However, the Deputy Director of the Navy eBusiness Operations Office told us that, despite this heavy sales volume, the Navy had not negotiated reduced-price contracts with any of the vendors. As previously stated, one of the benefits of using purchase cards versus traditional contracting and payment processes is lower transaction processing costs and less red tape for both the government and the vendor. Through increased analysis of purchase card procurement patterns, the Navy has the opportunity to leverage its high volume of purchases and achieve additional savings from vendors by negotiating volume discounts similar to those the General Services Administration (GSA) has negotiated in its Multiple Award Schedule program. Under GSA's Multiple Award Schedule, participating vendors agree to sell their products at preferred customer prices to all government purchasing agents. According to the Deputy Director of the Navy's eBusiness Operations Office, 74 of the 122 vendors with which the Navy spent more than $1 million using the purchase card during fiscal year 2001 did not participate in the Multiple Award Schedule program. In addition, for the 48 vendors with which the Navy spent more than $1 million and that did participate in the Multiple Award Schedule, the opportunity existed for the Navy to negotiate additional savings. GSA encourages agencies to enter into blanket purchase agreements (BPAs) and negotiate additional discounts with Multiple Award Schedule vendors from which they make recurring purchases. By analyzing Navy-wide cardholder buying patterns, the Navy should be able to achieve additional savings by identifying vendors and vendor categories for which it uses the purchase card for significant amounts of money and negotiating discounts with them. For example, during fiscal year 2001, the Navy spent about $65 million with 5 national computer vendors (Dell, Gateway, CDW Computer Centers, Micro Warehouse, and GTSI), $22 million with 3 office supply companies (Corporate Express, Staples, and Office Depot), and $9 million with 2 national home improvement stores (Home Depot and Lowe's). While 8 of these 10 vendors participate in GSA's Multiple Award Schedule program, the Navy could not tell us whether its purchases from these vendors were made using that program's preferred price schedules. Further, considering the Navy's volume of purchases, it is reasonable to assume that it could negotiate additional savings with these and other vendors if it used historical purchase card sales data as a bargaining tool.
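The vendor-spend analysis described above is a simple aggregation over transaction data. The sketch below totals fiscal-year spending by vendor and flags high-volume vendors as candidates for negotiated discounts or blanket purchase agreements; the amounts and the schedule list are illustrative, not actual GSA data.

```python
# Aggregate annual purchase card spending by vendor and flag candidates for
# volume-discount negotiation. All figures are hypothetical.
from collections import Counter

NEGOTIATION_THRESHOLD = 1_000_000.00
on_gsa_schedule = {"Dell", "Gateway", "Staples"}   # hypothetical subset

spend = Counter()
for vendor, amount in [("Dell", 750_000.00), ("Dell", 600_000.00),
                       ("Staples", 1_200_000.00), ("local vendor", 40_000.00)]:
    spend[vendor] += amount

for vendor, total in spend.most_common():
    if total > NEGOTIATION_THRESHOLD:
        note = ("on schedule; consider a BPA" if vendor in on_gsa_schedule
                else "not on GSA schedule")
        print(f"{vendor}: ${total:,.2f} - {note}")
```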
We identified numerous examples of abusive and questionable transactions at each of the four installations we audited. We defined abusive transactions as those that were authorized, but in which the items were purchased at an excessive cost (e.g., "gold plated") or for a questionable government need, or both. Abuse occurs when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior. Often, improper purchases such as those discussed in the previous section are also abusive; transactions that are both improper and abusive, such as the excessive cell phone charges at the Atlantic Fleet, were discussed previously. Questionable transactions are those that appear to be improper or abusive but for which there is insufficient documentation to conclude either. We consider transactions to be questionable when they do not fit within the Navy guidelines on purchases that are acceptable for the purchase card program and when there is not a reasonable or documented justification for acquiring the item purchased. When we examined the support for questionable transactions, we usually did not find evidence of why the Navy or Marine Corps needed the item purchased. Instead, the cardholder provided an after-the-fact rationale that the item purchased was not improper or abusive. To prevent unnecessary costs, these types of questionable purchases require scrutiny before the purchase, not after. Table 11 identifies examples of both abusive and questionable purchases. The following are details of some of the abusive and questionable purchases in table 11.

Computer and related equipment exceeding documented need—The Navy used the purchase card to acquire computer and computer-related items far in advance of its needs. Considering that computer prices decrease over time while their capabilities improve, warehousing computers and related items is an especially ineffective use of government funds. Despite this time, price, and capability relationship, we found in our statistical sample that the Atlantic Fleet, Pacific Fleet, and NAVSEA purchased computers, monitors, and printers that often remained unused for more than 12 months. For example, the computers purchased by the Atlantic Fleet in September 2000 that were discussed in the section on pilferable property had Pentium III microprocessors. By the time the Atlantic Fleet issued some of those computers in January 2002, the manufacturer was selling computers with Pentium IV microprocessors at a cost less than what the Atlantic Fleet paid for the Pentium IIIs. Further, our statistical sample at the Atlantic Fleet identified 22 other computers that the Navy purchased in April 2001 that, as shown in figure 4, were unused and still in their original boxes in June 2002. Similarly, we found two $3,500 laser printers purchased in September 2000 that were selected in our statistical sample of Pacific Fleet transactions still in their original boxes at a Pacific Fleet warehouse in January 2002. We have previously identified DOD's inventory management as an area at high risk for fraud, waste, and abuse. In our report on DOD major management challenges and program risks, we stated that, because of DOD's unreliable inventory management systems, managers may request funds to obtain additional items that are already on hand.
Our review of fiscal year 2001 Atlantic Fleet transactions found that, despite having these unopened items, the Atlantic Fleet had in fact purchased additional computers after September 2000, and the Pacific Fleet had purchased additional laser printers after June 2001.

Designer leather goods—In September and October 2000, NAVSEA made two separate transactions totaling nearly $1,800 to obtain designer leather folios and PDA holders costing up to $300 each, made by Coach and Dooney & Bourke. Two of the folios were given as gifts to a visiting officer of the Australian Navy, while the other designer items reflected the personal preferences of the cardholders and requesting individuals.

Flat panel monitors—Our statistical sample selected transactions containing 243 flat panel monitors purchased by the Atlantic Fleet, Pacific Fleet, and NAVSEA. The cost of the monitors selected in our sample ranged from $550 to $2,200. By contrast, the 17-inch standard monitors selected in the sample cost about $200. As we have reported in the past, we believe the purchase of flat panel monitors—particularly those that cost far in excess of standard monitors—to be abusive and not an effective use of government funds in the absence of a documented need based on technical, space, or other considerations. Further, in our statistical sample, we found that some of the flat panel monitors that the Atlantic Fleet purchased were placed in a warehouse and not issued for more than a year after the Navy took possession. Warehousing flat panel monitors is especially inefficient because, as with computers, prices decrease and capabilities improve as time passes. The flat panel monitors that we found still in the box cost the Navy $709 each. As of June 2002, the GSA price for the same flat panel monitors was about $480.

Personal digital assistants—We found that the Atlantic Fleet and Pacific Fleet purchased PDAs for staff without documenting why the staff needed them to perform their official duties. In one instance, the Atlantic Fleet purchased 90 PDAs for $32,500 in October 2000 without any documented justification of need. As of June 1, 2002, 14 of the 90 PDAs had not been issued and were still in inventory. Further, the competitive bid price worksheet showed that the Navy did not accept the lowest bid for the PDAs. According to the worksheet, the Atlantic Fleet received three bids for the PDAs that ranged from a low of $30,400 to a high of $32,850. The Atlantic Fleet accepted the middle bid of $32,500. The Federal Acquisition Regulation allows purchasing agents to reject lower bids if the purchasing agent determines that the items being delivered do not conform to the applicable specifications or that the vendors cannot deliver the goods or services within the specified time requirements. We saw no evidence that quality or timeliness was a factor in selecting the higher-priced bid.

Clock radios—As part of our Navy-wide data mining, we inquired about a $2,443 transaction with Bose Corporation on September 30, 2000. In response to that inquiry, the Navy command that made the purchase told us that it purchased seven Bose "Wave Radios" costing $349 each. The command justified the purchase by stating that Navy regulations require all visiting officer quarters to be supplied with a clock radio. While we do not question the need to supply visiting officer quarters with clock radios, we do question the judgment of purchasing $349 clock radios when there are numerous models of clock radios made by GE, Sony, and Panasonic costing about $15 from GSA.

In our November 30, 2001, report and March 13, 2002, testimony on the purchase card controls at the Space and Naval Warfare Systems Command (SPAWAR) Systems Center and NPWC, we recommended that action be taken to help provide assurance that cardholders adhere to applicable purchase card laws, regulations, internal control and accounting standards, and policies and procedures. Specifically, we recommended that the Commander, Naval Supply Systems Command, revise NAVSUP Instruction 4200.94 to include specific consequences for noncompliance with purchase card policies and procedures. In response to the November 2001 report, DOD did not concur with that recommendation and stated that existing Navy policy clearly identifies consequences for fraud, abuse, and misuse. On May 29, 2002, the Navy told us that in response to our recommendation, the Assistant Secretary of the Navy for Research, Development and Acquisition issued a naval message reiterating compliance, accountability, and the consequences of fraud, abuse, and misuse. While we agree that the issuance of such a message has benefits, we continue to believe that the Navy needs to establish specific consequences for these purchase card problems because existing Navy policy does not identify any specific consequences for failure to follow control requirements. Currently, the Navy has not established specific disciplinary and/or administrative consequences—such as withdrawal of cardholder status, reprimand, suspension from employment for several days, or, if necessary, firing. Unless cardholders and approving officials are held accountable for following key internal controls, the Navy is likely to continue to experience the types of fraudulent, improper, and abusive and questionable transactions identified in our work. As part of this audit, we asked the agency program coordinators at each command that we audited whether any cardholders referred to in this report were disciplined for improper, abusive, or questionable purchases, or whether the reduction in the number of cardholders could be attributed to individuals who lost their cards because they made such purchases. According to the agency program coordinators, only one of the cardholders referred to in this report lost a card for improper, abusive, or questionable purchases, and no one had received any disciplinary action for abusing the purchase card. We support the use of a well-controlled purchase card program. It is a valuable tool for streamlining the government's acquisition processes. However, the Navy program is not well controlled and, as a result, is vulnerable to fraud, waste, and abuse. The primary cause of the control breakdowns is the lack of adherence to valid policies and procedures. Nonetheless, the control environment at the Navy has improved over the last year. For example, the Navy has reduced the number of cardholders by over 50 percent, from 59,000 to 25,000, thus improving the prospects for effective program management. However, further actions are needed to achieve an effective control environment. For example, leadership by major command and unit management and a strong system of accountability must be established for effective program control.
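The warehousing pattern running through these examples (computers, monitors, printers, and PDAs held unissued for a year or more) can be surfaced with a simple aging query over property records. The sketch below uses invented records, and the 365-day threshold is an assumption for illustration, not Navy policy.

```python
# Flag accountable items still unissued long after receipt.
from datetime import date

STALE_AFTER_DAYS = 365              # assumed threshold for this illustration

inventory = [  # (item, received, issued or None)
    ("desktop computer, Pentium III", date(2000, 11, 15), date(2002, 1, 10)),
    ("flat panel monitor",            date(2000, 12, 1),  None),
    ("laser printer",                 date(2000, 9, 20),  None),
]
as_of = date(2002, 6, 1)

for item, received, issued in inventory:
    idle_days = ((issued or as_of) - received).days
    if idle_days > STALE_AFTER_DAYS:
        status = "issued after" if issued else "still unissued after"
        print(f"{item}: {status} {idle_days} days")
```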
Strengthening the control environment will require a commitment by the Navy to build a robust purchase card control infrastructure. In our November 30, 2001, report on control weaknesses at two units in San Diego, we made 29 recommendations to improve management of the purchase card program primarily at the two locations audited. Based on the broader scope of our current work we are making the following additional recommendations to strengthen the overall control environment and improve internal controls for the Navy's purchase card program. We recommend that the Director of the Department of the Navy eBusiness Operations Office take the following actions:

- Direct all agency program coordinators to review the number of cardholders who report to an approving official and make the changes necessary to prevent approving officials from having the responsibility of reviewing more cardholders than allowed by Navy and DOD policies.
- Establish a database that maintains information on all purchase card training taken by cardholders, approving officials, and agency program coordinators, and require that agency program coordinators update that database whenever these purchase card program officials take training.
- Establish specific training courses for cardholders, approving officials, and agency program coordinators tailored to the specific responsibilities associated with each of those roles.
- Direct agency program coordinators to review an approving official's overall workload and determine whether the approving official has the time necessary to perform the required review functions.
- Establish job descriptions that identify responsibility and performance standards for cardholders, approving officials, and agency program coordinators.
- Link the cardholders', approving officials', and agency program coordinators' performance appraisals to achieving their performance standards.
- Work with the Naval Audit Service and Command Evaluation staff to begin periodic audits of the purchase card program to provide Navy management—at the command and unit level—an independent assessment of the control environment and whether the agency program coordinators, approving officials, and cardholders are adhering to control procedures.
- Identify vendors with which the Navy or Marine Corps uses purchase cards to make frequent purchases, evaluate Navy purchasing practices with those vendors, and forward the results of that evaluation to the Assistant Secretary of the Navy for Research, Development and Acquisition to contract with them, when applicable, to optimize Navy purchasing power.

We recommend that the Secretary of the Navy modify the definition of "Pilferable Personal Property" in SECNAV Instruction 7320.10, dated August 1, 2001, by eliminating the requirement that a portable item easily converted to personal use also be difficult to repair or replace, and specifically identify items such as computers, cameras, personal digital assistants, and audiovisual equipment as meeting the definition of being pilferable and thus accountable.
We recommend that the Director of the Department of the Navy eBusiness Operations Office modify NAVSUP Instruction 4200.94 to provide cardholders, approving officials, and agency program coordinators detailed instructions on the following specific control activities:

- timely and independent receipt and acceptance of items obtained with a purchase card, and documentation of the results of that process;
- screening of purchases for their availability from required vendors, and documentation of the results of that screening;
- prompt reconciliation of the monthly purchase card statements to supporting documentation, and documentation of the results of that reconciliation;
- prompt review of a cardholder's purchase card statement by the approving official prior to certifying the statement for payment, and documentation of the results of that review; and
- prompt cardholder notification to the property accountability officer of pilferable property obtained with the purchase card, and approving official responsibility for monitoring that the pilferable property has been recorded in the accountability records.

We also recommend that the Director of the Department of the Navy eBusiness Operations Office take the following actions:

- Modify NAVSUP Instruction 4200.94 to require cardholders to maintain documented justification and advance approval of purchases that fall outside the normal procurements of the cardholder in terms of either dollar amount or type of purchase.
- Establish a Navy-wide database of known purchase card fraud cases, by type of fraud, that can be used to identify deficiencies in existing internal control and to develop and implement additional control activities, if warranted or justified.
- Establish a Navy-wide data mining, analysis, and investigation function to supplement other oversight activities. This function should include providing oversight results and alerts to major commands and installations when warranted.
- Modify NAVSUP Instruction 4200.94 to include a schedule of disciplinary actions as a guide for taking actions against cardholders who make improper or abusive acquisitions with the purchase card.

We also recommend that the Under Secretary of Defense (Comptroller) direct the Charge Card Task Force to assess the above recommendations and, to the extent applicable, incorporate them into its future recommendations to improve purchase card policies and procedures throughout DOD. In written comments on a draft of this report, which are reprinted in appendixes VI and VII, DOD concurred with 16 of our 19 recommendations. DOD partially concurred with the remaining 3 recommendations, which deal with (1) linking the performance appraisals of purchase card officials to achieving performance standards, (2) maintaining accountability over pilferable property, and (3) establishing a schedule of disciplinary actions to be taken against cardholders who make improper or abusive acquisitions. However, the actions that DOD plans to take on these 3 recommendations appear to address their most significant aspects. Concerning linking staff performance to purchase card performance standards, DOD responded that the Department of the Navy eBusiness Operations Office will work with the Navy Human Resources Office to determine the need, legality, and feasibility of adding cardholder, approving official, and agency program coordinator performance standards to performance appraisals. Such measures are an important aspect of the purchase card program and are responsive to our recommendation.
We encourage the Navy eBusiness Operations Office to work expeditiously with the Navy Human Resources Office to develop performance standards that make carrying out this fiduciary responsibility a matter to be considered in assessing staff. Regarding property more susceptible to theft, DOD stated that Navy would modify its definition of pilferable property to be the same as the DOD definition of pilferable property. By adopting the DOD definition, the Navy will remove the requirement that the item be “hard to repair or replace” from its definition of pilferable property. DOD did not, however, agree with the aspect of our recommendation that it provide commanders examples of items considered pilferable. DOD stated that a listing of specific pilferable items would require continual update and vigilance, and prove ultimately to be subjective and unscientific. We agree that a listing of specific pilferable items would require periodic updating. That was not the intent of our recommendation. Instead, we believe that providing commanders examples of types of pilferable property that can be easily removed from Navy facilities and have immediate use outside the Navy, would help ensure that items such as camera equipment and laptop computers are consistently included in accountable records. Concerning our recommendation to establish a schedule of disciplinary actions that will be taken against cardholders who abuse their purchase card privileges, DOD stated that the department has already taken actions to deal with improper and abusive uses of purchase cards, and that the Navy’s eBusiness Operations Office will examine whether actions that have already been taken are appropriate. However, DOD also stated that to the extent the recommendation contemplates the prescription of mandatory disciplinary or other actions, it is objectionable. DOD stated that disciplinary and other actions in response to improper and abusive purchase card use are properly addressed as matters of command and supervisory discretion. We never contemplated that the schedule would prescribe mandatory actions. Rather, we intended the schedule to be a guide of disciplinary actions to be taken against cardholders. It would also serve as an important internal control feature that clearly identified the consequences associated with improper and abusive purchase card use and would serve as a deterrent to such abuse. Further, the schedule of disciplinary actions could include a range of actions that would be appropriate for various types of purchase card misuse. Commanders and supervisors would still maintain their discretion to select the specific disciplinary action, if any, depending on the circumstances of individual cases. To eliminate any confusion concerning the intent of our recommendation, we made a slight modification to the text of the recommendation. As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense, Comptroller; the Secretary of the Navy; the Assistant Secretary of the Navy for Research, Development and Acquisition; the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9505 or [email protected], John Ryan at (202) 512-9587 or [email protected], or John Kelly at (202) 512-6926 or [email protected] if you or your staffs have any questions concerning this report. Major contributors to this report are acknowledged in appendix VIII. The Navy's purchase card program is part of the Governmentwide Commercial Purchase Card Program, which was established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. According to the General Services Administration (GSA), the Department of Defense (DOD) reported that during fiscal year 2001 it used purchase cards for more than 10.7 million transactions, valued at $6.1 billion. The Navy's reported purchase card activity—MasterCards issued to civilian and military personnel—totaled about 2.8 million transactions, valued at $1.8 billion, during fiscal year 2001. This represented nearly 30 percent of DOD's purchase card activity for fiscal year 2001. According to unaudited fiscal year 2001 purchase card data, four commands included in our current audit—Atlantic Fleet, Pacific Fleet, Naval Sea Systems Command, and the Marine Corps—made about $173 million, $137 million, $268 million, and $224 million, respectively, in purchase card acquisitions. In addition, according to unaudited fiscal year 2001 purchase card data for the two commands included in our March 2002 testimony, the Naval Facilities Engineering Command (Public Works Center) and Space and Naval Warfare Systems Command made about $117 million and $85 million, respectively, in purchase card acquisitions. See table 12 for further detail on fiscal year 2001 purchase card spending. Because the four commands have cardholders located throughout the world, we limited our testing of the transactions made by the units of those commands to cardholders located in specific geographical areas, and used a case study approach to evaluate each command's local purchase card program. According to unaudited fiscal year 2001 purchase card data, Pacific Fleet cardholders in San Diego made about $35 million in purchase card acquisitions; Atlantic Fleet cardholders located in the Norfolk area made about $48 million in purchase card acquisitions; Naval Sea Systems Command cardholders in the Norfolk area made about $49 million in purchase card acquisitions; and Marine Corps cardholders at Camp Lejeune made about $36 million in purchase card acquisitions. The Pacific Fleet, Atlantic Fleet, and the Marine Corps are warfighting units, while the Naval Sea Systems Command is a support command. The Pacific Fleet is responsible for providing trained and combat-ready naval forces to the Commander in Chief, U.S. Pacific Command. Its headquarters is in Honolulu, and its lower echelon commands are located in Honolulu and San Diego. The Atlantic Fleet provides trained, combat-ready forces to support the United States and the North Atlantic Treaty Organization (NATO) commanders in regions of conflict. Its headquarters and lower echelon commands are located in Norfolk. The Naval Sea Systems Command is responsible for providing the Navy operationally superior and affordable ships, systems, and ordnance. NAVSEA is headquartered in Washington, D.C., and has major shipbuilding and repair facilities on both the East and West Coasts, including Norfolk.
The Marine Corps is responsible for providing a highly flexible, combat-ready force in a high state of readiness, prepared to support the military's strategy. The Marine Corps has bases located throughout the United States, including Camp Lejeune. The purchase card can be used both for micropurchases and for payment of other purchases. Although most cardholders have limits of $2,500, some have limits of $25,000 or higher. The Federal Acquisition Regulation, Part 13, "Simplified Acquisition Procedures," establishes criteria for using purchase cards to place orders and make payments. DOD and the Navy have supplements to this regulation that contain sections on simplified acquisition procedures. U.S. Treasury regulations govern purchase card payment certification processing and disbursements. DOD's Purchase Card Joint Program Management Office, which is in the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology, has issued departmentwide guidance related to the use of purchase cards. However, each service has its own policies and procedures governing the purchase card program. The Naval Supply Systems Command (NAVSUP) is responsible for the overall management of the Navy's purchase card program and has published NAVSUP Instruction 4200.94, Department of the Navy Policies and Procedures for Implementing the Governmentwide Purchase Card Program. Under the NAVSUP instruction, each Navy command's head contracting officer authorizes agency purchase card program coordinators in local Navy units to obtain purchase cards and establish credit limits. The program coordinators are responsible for administering the purchase card program within their designated span of control and serve as the communication link between Navy units and the purchase card issuing bank. The other key personnel in the purchase card program are the approving officials and the cardholders. Figure 5 illustrates the standard process in which the Navy purchase card is used to acquire goods and services and to certify the monthly bill for payment. When the process is operating effectively, the approving official ensures that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate. The approving official is supposed to resolve all questionable purchases with the cardholder before certifying the bill for payment. In the event an unauthorized purchase is detected, the approving official is supposed to notify the agency program coordinator and other appropriate personnel within the command in accordance with command procedures. After reviewing the monthly statement, the approving official is to certify the monthly invoice and send it to the Defense Finance and Accounting Service (DFAS) for payment. A purchase cardholder is a Navy employee who has been issued a purchase card. The purchase card bears the cardholder's name and the account number that has been assigned to the individual. The cardholder is expected to safeguard the purchase card as if it were cash. When a supervisor requests that a staff member receive a purchase card, the agency program coordinator is to first provide training on purchase card policies and procedures and then establish a credit limit and issue a purchase card to the staff member. Purchase cardholders are delegated limited contracting officer ordering responsibilities. As limited contracting officers, purchase cardholders do not negotiate or manage contracts.
Rather, cardholders use purchase cards to order goods and services for their units and their customers. Cardholders may pick up items ordered directly from the vendor or request that items be shipped directly to an end user (requester). Upon receipt of purchased items, the cardholder is to record the transaction in his or her purchase log and obtain documented independent confirmation from the end user, the supervisor, or another individual that the items have been received and accepted by the government. The cardholder is also to notify the property book officer of accountable items received so that these items can be recorded in the accountable property records. The purchase card payment process begins with receipt of the monthly purchase card billing statements. Section 2784 of title 10, United States Code, requires DOD to issue regulations that ensure that purchase cardholders and each official with authority to authorize expenditures charged to the purchase card reconcile charges with receipts and other supporting documentation before paying the monthly purchase card statement. NAVSUP Instruction 4200.94 states that upon receipt of the individual cardholder statement, the cardholder has 5 days to reconcile the transactions appearing on the statement by verifying their accuracy against the documentation supporting the transactions and to notify the approving official in writing of any discrepancies in the statement. In addition, under NAVSUP Instruction 4200.94, before the credit card bill is paid, the approving official is responsible for (1) ensuring that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate and (2) certifying the monthly summary statement for payment by DFAS in a timely manner. The instruction further states that within 5 days of receipt, the approving official must review and certify for payment the monthly billing statement, which is a summary invoice of all transactions of the cardholders under the approving official's purview. The approving official is instructed to presume that all transactions on the monthly statements are proper unless notified in writing by the purchase cardholder to the contrary. However, the presumption does not relieve the approving official from reviewing the statements for blatantly improper purchase card transactions and taking the appropriate action prior to certifying the invoice for payment. In addition, the approving official is responsible for forwarding disputed charge forms to Citibank for credit. Under the Navy's task order, Citibank allows the Navy up to 60 days after the statement date to dispute invalid transactions and request a credit. Upon receipt of the certified monthly purchase card summary statement, a DFAS vendor payment clerk is to (1) review the statement and supporting documents to confirm that the prompt-payment certification form has been properly completed and (2) subject it to automated and manual validations. DFAS effectively serves as a payment processing service and relies on the approving official's certification of the monthly bill as support to make the payment. The DFAS vendor payment system then batches all of the certified purchase card payments for that day and generates a tape for a single payment to Citibank by electronic funds transfer. Figure 5 also illustrates the current design of the purchase card payment process for the Navy command units we reviewed. We reviewed key purchase card controls for units of four Navy commands.
In addition, as you requested, we followed up on (1) the status of the recommendations we made in our November 30, 2001, report, (2) the status of the former commanding officer of SPAWAR Systems Center San Diego, (3) the status of two potential fraud cases that we reported on in the March 2002 testimony, and (4) any other fraud cases we identified as part of this audit. Our review of key purchase card controls for the Atlantic Fleet, Pacific Fleet, NAVSEA, and the Marine Corps covered the following:

- the overall management control environment, including (1) management's attitude in establishing the needed controls, (2) the numbers of cardholders and approving officials, (3) cardholder and approving official credit limits, (4) training for cardholders and approving officials, (5) monitoring and audit of purchase card activity, and (6) effectiveness of the purchase card infrastructure;
- tests of statistical samples of key controls over purchase card transactions made during the first 11 months of fiscal year 2001, including (1) screening for required vendors, (2) documentation of independent confirmation that items or services paid for with the purchase card were received, (3) proper certification of the monthly purchase card statement for payment, and (4) substantive tests of pilferable property items included in our sample transactions to verify whether they were recorded in an accountable record;
- analytical reviews of transactions entered into during the last month of fiscal year 2001;
- data mining of the population of fiscal year 2001 transactions to identify potentially fraudulent, improper, and abusive or questionable transactions;
- analysis and audit work related to invoices and other information obtained from Franklin Covey, from which, based on interviews with cardholders and our review of other transactions, we had reason to believe that the units at the four commands had made improper and abusive or questionable purchases during fiscal year 2001;
- analysis of the population of fiscal year 2001 purchase card transactions for the four command units to identify purchases that were split into two or more transactions to avoid the micropurchase threshold or other spending limits (a simplified sketch of this type of test follows below); and
- analysis of the population of Navy-wide fiscal year 2001 purchase card transactions to determine whether the Navy was effectively managing its purchases with frequently used vendors.

In addition, our Office of Special Investigations worked with DOD's criminal investigative agencies, Citibank, and credit card industry representatives to identify known and potentially fraudulent purchase card cases. Our Office of Special Investigations also investigated potentially fraudulent or abusive purchase card transactions that we identified while analyzing fiscal year 2001 purchase card transactions at the units in the four commands. Because the Navy does not have a database of purchase card fraud cases, we were unable to determine the extent of known fraud cases at either the case study locations or Navy-wide. We used as our primary criteria applicable laws and regulations; our Standards for Internal Control in the Federal Government; and our Guide for Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in the GAO internal control standards to the practices followed by management in the areas reviewed.
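The split-purchase analysis referenced above lends itself to a simple automated screen. The following Python sketch is purely illustrative and is not the tool used in the audit; the transaction field names, the grouping rule (same cardholder, vendor, and day), and the $2,500 threshold applied to every cardholder are assumptions made for the example. It flags groups of same-day charges that individually stay at or under the threshold but together exceed it; any flagged group would still require review of the underlying documentation before being considered a split purchase.

    from collections import defaultdict

    MICROPURCHASE_LIMIT = 2500.00  # assumed single-transaction threshold

    def flag_potential_splits(transactions):
        """Group same-day charges by cardholder and vendor, then flag groups
        whose combined value exceeds the threshold even though each charge
        individually stays at or under it (hypothetical field names)."""
        groups = defaultdict(list)
        for t in transactions:
            groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
        flagged = []
        for key, amounts in groups.items():
            if (len(amounts) > 1
                    and sum(amounts) > MICROPURCHASE_LIMIT
                    and all(a <= MICROPURCHASE_LIMIT for a in amounts)):
                flagged.append((key, amounts))
        return flagged

    # Hypothetical example: two same-day charges just under the limit.
    sample = [
        {"cardholder": "A", "vendor": "Vendor X", "date": "2001-05-14", "amount": 2400.00},
        {"cardholder": "A", "vendor": "Vendor X", "date": "2001-05-14", "amount": 1800.00},
    ]
    for key, amounts in flag_potential_splits(sample):
        print("Potential split purchase:", key, amounts)

A screen of this kind only narrows the field; charges split across several days or several cardholders would require looser grouping rules and correspondingly more manual follow-up.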
To test controls, we selected stratified random probability samples from the population of purchase card transactions by cardholders who had mailing addresses at the four locations we visited. All purchase card transactions subjected to sampling occurred during the first 11 months of fiscal year 2001. We performed analytical reviews on the transactions that occurred during the last month of fiscal year 2001. Specifically, we selected 150 transactions of Atlantic Fleet cardholders based in Norfolk from a population of 72,000 transactions totaling $43 million; 166 transactions of Pacific Fleet cardholders based in San Diego from a population of 46,000 transactions totaling $30 million; 158 transactions of Naval Sea Systems Command cardholders based in Norfolk from a population of 46,000 transactions totaling $41 million; and 150 transactions of Marine Corps cardholders based at Camp Lejeune from a population of 52,000 transactions totaling $32 million. Within each command, we stratified the population of transactions by the dollar value of the transaction and by whether the transaction was likely to be for a purchase of computer-related equipment. With this statistically valid probability sample, each transaction in the population had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Table 13 presents our test results on cardholder and approving official training and three key transaction-level controls, and shows the confidence intervals for the estimates for the population of purchase card transactions made by units at the four commands during the first 11 months of fiscal year 2001. Although we projected the results of those samples to the populations of transactions at the respective commands, the results cannot be projected to the population of Navy-wide transactions or commands. Because most of the sampled transactions did not contain pilferable property, we did not project the results of that attribute test to the populations of transactions tested.
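The weighting logic described above can be summarized in a few lines of code. The Python sketch below is illustrative only; the stratum definitions, population counts, sample sizes, and test results are hypothetical placeholders, not the audit's data, and the confidence-interval computations reported in table 13 are omitted.

    # Each sampled transaction in stratum h represents N_h / n_h population
    # transactions, so every transaction has a known, nonzero selection
    # probability (n_h / N_h). All figures below are hypothetical.
    population_counts = {"low_dollar": 40000, "low_dollar_computer": 4000,
                         "high_dollar": 6000, "high_dollar_computer": 2000}
    sample_counts = {"low_dollar": 60, "low_dollar_computer": 30,
                     "high_dollar": 40, "high_dollar_computer": 20}
    weights = {h: population_counts[h] / sample_counts[h] for h in population_counts}

    # Hypothetical test results: control failures observed in each sample.
    failures_in_sample = {"low_dollar": 12, "low_dollar_computer": 6,
                          "high_dollar": 10, "high_dollar_computer": 4}

    # Weighted estimate of total control failures in the full population.
    estimated_failures = sum(failures_in_sample[h] * weights[h] for h in weights)
    total_population = sum(population_counts.values())
    print(f"Estimated failures: {estimated_failures:,.0f} "
          f"({estimated_failures / total_population:.1%} of transactions)")

Stratifying by dollar value and by likely computer-related purchases, as described above, lets a small sample represent high-risk strata more heavily, while the weights keep the population-level estimates statistically valid.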
In addition to selecting statistical samples of Pacific Fleet, Atlantic Fleet, Naval Sea Systems Command, and Marine Corps transactions to test specific internal controls, we also made nonrepresentative selections of fiscal year 2001 transactions from these commands and the Navy as a whole. We conducted separate analyses of acquisitions that were potentially fraudulent, improper, and abusive or otherwise questionable. Our data mining for potentially fraudulent, improper, and abusive or questionable transactions was limited in scope. For this review, we scanned the population of transactions at the four commands visited and the overall Navy database for vendors that are likely to sell goods or services (1) that are on NAVSUP's list of prohibited items, (2) that are personal items, and (3) that are otherwise questionable. Our expectation was that transactions with certain vendors were more likely to be fraudulent, improper, and abusive or questionable. Because of the large number of transactions that met these criteria, we did not look at all potential abuses of the purchase card. Rather, we made nonrepresentative selections of transactions with the vendors that fit these criteria. For example, we reviewed, and in some cases made inquiries concerning, 443 transactions and other related transactions on the same monthly purchase card statement with vendors that sold such items as sporting goods, groceries, luggage, flowers, and clothing. While we identified some potentially fraudulent, improper, and abusive or questionable transactions, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, and abusive or questionable transactions. Our data mining also included nonrepresentative selections of acquisitions that these units made during fiscal year 2001 that may have been split into multiple transactions to circumvent either the micropurchase competition requirements or cardholder single-transaction thresholds. We briefed DOD managers (including officials in DOD's Purchase Card Joint Program Management Office) and Navy managers (including NAVSUP, Pacific Fleet, Atlantic Fleet, Naval Sea Systems Command, and Marine Corps officials) on the details of our review, including our objectives, scope, and methodology and our findings and conclusions. We received comments from the Director, DOD Purchase Card Joint Program Management Office, dated September 16, 2002, and have reprinted those comments in appendix VI. We also received comments from the Under Secretary of Defense (Comptroller), dated September 23, 2002, and have reprinted those comments in appendix VII. We conducted our audit work from November 2001 through July 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency. In our November 30, 2001, report, we made 29 recommendations to the Navy to address key findings related to the weak management control environment and internal control discussed in our July 30, 2001, testimony. In response to our November 2001 report, DOD concurred with 19 recommendations, partially concurred with 7 recommendations, and disagreed with 3 recommendations. On May 29, 2002, the Navy provided us with its assessment of its corrective actions to implement all 29 recommendations. The following chart summarizes (1) those recommendations, (2) the Navy's representations as to actions taken, and (3) our observations on the status of the recommendations. We noted that in many instances the Navy had taken positive steps to improve purchase card controls; however, given the significant control problems that exist, the Navy will need to diligently monitor the purchase card program to attain and maintain a high level of adherence to the policies and directives. In our March 13, 2002, testimony, we identified a number of cases of potentially fraudulent purchase card use that we referred to our Office of Special Investigations for further investigation. One case involved transactions totaling about $164,000 with a communication contractor. A second case involved payment for food and refreshments at a local hotel. The information provided below summarizes the status of those cases. In the March testimony, we reported that during fiscal year 2001, SPAWAR made 75 transactions totaling about $164,000 for what appeared to be advance payments for services to a telecommunications contractor. Our Office of Special Investigations subsequently learned that the transactions were payments made by SPAWAR based on cost estimates provided by the contractor.
In almost all of the 75 transactions, the amount paid by SPAWAR was more than the actual cost incurred by the contractor—even after the actual expenses were adjusted for estimated overhead and standard profit margin. Further, the work paid for by the purchase card was work that should have been paid for under an existing delivery order contract. According to contractor and SPAWAR personnel, the purchase card was used because it was a faster vehicle for getting work done. The SPAWAR official also told us that when there was no funding remaining on a particular delivery order contract line item, SPAWAR used the purchase card rather than modifying the contract. Our Office of Special Investigations determined that SPAWAR overpaid the contractor about $34,000 for the 75 transactions identified in the March testimony. A SPAWAR official told us that on September 10, 2002, it received a check from the contractor in the amount of $9,862. The payment represented a refund for work the contractor did not perform on four transactions. The SPAWAR official also told us that the contractor disagrees with our Office of Special Investigations' assessment that the contractor overcharged SPAWAR about $24,000 on the 71 other transactions we reviewed. The SPAWAR Inspector General and Staff Judge Advocate investigated the use of a Navy purchase card at a San Diego hotel for an off-site meeting in which SPAWAR used appropriated funds to pay for meals provided to Navy personnel. As we have previously reported, without statutory authority, appropriated funds may not be used to furnish meals or refreshments to employees within their normal duty stations. The SPAWAR Inspector General told us the investigation determined that a SPAWAR Deputy Program Manager, Assistant Program Manager, and cardholder used the purchase card to improperly purchase food. Further, the SPAWAR Inspector General found that both the Assistant Program Manager and the cardholder made false statements to GAO when asked about purchasing food. However, the investigation has not been completed, and we are not aware of any actions—administrative or disciplinary—that have been taken against the SPAWAR Deputy Program Manager, the Assistant Program Manager, or the cardholder for improper use of the purchase card. In our March 13, 2002, testimony on the purchase card controls at SPAWAR Systems Center and NPWC, we reported that the commanding officer of the Space and Naval Warfare Command, Systems Center San Diego, was relieved of his command in December 2001 for matters unrelated to the purchase card program. According to the SPAWAR Inspector General, on December 8, 2001, the admiral in charge of SPAWAR held a nonjudicial punishment hearing and found that the SPAWAR Systems Center commanding officer had violated two articles of the Uniform Code of Military Justice, including dereliction of duty and conduct unbecoming an officer. The admiral issued the commanding officer a punitive Letter of Reprimand, relieved him of his command at SPAWAR Systems Center San Diego, and endorsed the captain's request for retirement from the Navy. Subsequently, information came to our attention that the former commanding officer of SPAWAR Systems Center San Diego was still employed by SPAWAR at the same rank he held—captain—when the admiral determined he was derelict in his duties and acted in a manner unbecoming an officer. In June 2002, our Office of Special Investigations contacted a DOD official and inquired whether the former commanding officer was still on the Navy payroll as a captain.
On July 31, 2002, a senior Navy official informed us that the former commanding officer was still on the Navy payroll and employed by SPAWAR as a Navy captain, but his retirement would become effective August 1, 2002. Individuals who made key contributions to this testimony include Fannie Bivins, Bertram Berlin, Kriti Bhandari, Francine DelVecchio, K. Eric Essig, Michael Hansen, Kenneth Hill, Jeffrey Jacobson, Tram Le, John Ledford, Latrealle Lee, Stephen Lipscomb, Susan Mason, Melissa McDowell, Sidney Schwartz, Carolyn Voltz, and Robert Wagner.
The Department of Defense (DOD) is promoting departmentwide use of purchase cards for obtaining goods and services. It reported that for the year ended September 30, 2001, purchase cards were used by 230,000 cardholders to make 10.7 million transactions valued at more than $6.1 billion. The benefits of using purchase cards versus traditional contracting and payment processes are lower transaction processing costs and less red tape for both the government and the vendor community. Although GAO supports the purchase card program concept, it is important that agencies have adequate internal controls in place to protect the government from fraud, waste, and abuse. A weak overall control environment and breakdowns in key internal controls leave the Navy vulnerable to potentially fraudulent, improper, and abusive purchases. In response to GAO's previous findings, DOD and the Navy have begun improving the control environment over the purchase card program. However, further improvements are needed to achieve an effective control environment. GAO determined that the Navy did not provide cardholders, approving officials, and agency program coordinators with sufficient human capital resources--time and training--to effectively perform oversight and manage the program. The weaknesses in the Navy's purchase card control environment at the units audited led to a significant breakdown in key control activities in fiscal year 2001. GAO determined that (1) cardholders did not screen for the availability of goods from required sources, (2) cardholders did not document that someone independent of the cardholder received and accepted the goods and services, (3) many Navy units did not maintain accountability over pilferable property acquired with the purchase card, and (4) cardholders did not reconcile monthly purchase card statements to supporting documentation and approving officials did not review the cardholders' reconciled bills prior to payment certification. The weak control environment and breakdown in key internal controls contributed to potentially fraudulent, improper, and abusive or questionable transactions that went undetected at units in all three Navy commands and the Marine Corps base GAO audited. GAO's site-specific and Navy-wide data mining reviews of transactions identified other potentially fraudulent transactions, including the purchase of computers, cell phones, food, cameras, power tools, televisions, personal digital assistants, clothing, and stereos. GAO also identified abusive and questionable transactions at all three Navy commands and the Marine Corps base audited and in GAO's Navy-wide data mining. The purchase card transactions that GAO considered to be questionable generally did not include an explanation or advance authorization that would justify these purchases or permit a determination of whether the purchases were improper or abusive.
The Council on Foreign Relations study sets the stage for rethinking the federal role in helping communities prepare for homeland security. Although acknowledging that the nation's preparedness has improved, the Council's report highlights some of the significant gaps in preparedness, including shortfalls in personnel, equipment, communications, and other critical capabilities in local services. The Council's report attempts to fill a void by estimating unmet needs for emergency responders. The Council's 5-year estimate of approximately $98 billion across all levels of government was developed in concert with The Concord Coalition and the Center for Strategic and Budgetary Assessments. It was based on data made available by professional associations and others in the areas of fire service, urban search and rescue, hospital preparedness, public health, emergency 911 systems, interoperable communications, emergency operations centers, animal/agricultural emergency response, emergency medical services systems, emergency management planning and coordination, and emergency response regional exercises. However, the report clearly states that it does not include estimates for certain costs, such as overtime for training, or for estimated needs in several critical mission areas, such as the needs of police forces, because national police organizations were unable to provide the information. The total estimate is characterized in the report as being very preliminary and imprecise given the absence of comprehensive national preparedness standards. As the report itself acknowledges, the analysis is intended to foster national debate by focusing on the baseline of preparedness and the steps needed to promote higher levels of readiness. The report performs a service in beginning an important dialogue on defining standards to assess readiness, and it recommends the development of a better framework and procedures to develop more precise estimates of national requirements and needs. The report concludes that the basis for funding decisions would be improved by agreement on a more detailed and systematic methodology to determine national requirements grounded in national standards defining emergency preparedness. We at GAO have not evaluated the methodology used in the Council's report. However, we have issued a report evaluating needs assessments performed by other agencies in the area of public infrastructure. That report highlights best practices that may prove useful to the Department of Homeland Security or other public or private entities in analyzing homeland security preparedness needs in the future. The practices used by these agencies to estimate funding needs varied widely, but we were able to benchmark their assessments against best practices used by leading public and private organizations. These best practices also reflect requirements, aimed at improving capital decisionmaking practices, that the Congress and the Office of Management and Budget have placed on federal agencies. Among these best practices for infrastructure, there are several that might be considered useful and relevant when conducting homeland security capability assessments. For example, some agencies' assessments focus on resources needed to meet the underlying missions and performance goals. This type of results-oriented assessment is based on the actions needed to attain specific outcomes, rather than being simply a compilation of all unmet needs regardless of their contribution to underlying outcomes and goals.
Assessments might also consider alternative, more cost-effective approaches to meeting needs, such as reengineering existing processes and improving collaboration with other governments and the private sector. Best-practice agencies use cost-benefit analysis to include only those needs for which benefits exceed costs; in cases where benefits are difficult to quantify, assessments could include an analysis that compares alternatives and recommends the most cost-effective (least-cost) option for achieving the goal. Some agencies also rank projects based on established criteria such as cost-effectiveness, relative risk, and potential contribution to program goals. Finally, we found that best-practice agencies have a process to independently review the quality of data used to derive estimates. GAO's work over the years has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government and that crosscutting program efforts are not well coordinated. As far back as 1975, GAO reported that many of the fundamental problems in managing federal grants were the direct result of the proliferation of federal assistance programs and the fragmentation of responsibility among different federal departments and agencies. While we noted that the large number and variety of programs tended to ensure that a program is available to meet a defined need, we found that substantial problems occur when state and local governments attempt to identify, obtain, and use the fragmented grants-in-aid system to meet their needs. Such a proliferation of programs leads to administrative complexities that can confuse state and local grant recipients. Like GAO, Congress is aware of the challenges facing grantees in the world of federal grants management. In 1999, it passed the Federal Financial Assistance Management Improvement Act (P.L. 106-107), with the goals of improving the effectiveness and performance of federal financial assistance programs, simplifying federal financial assistance application and reporting requirements, and improving the delivery of services to the public. The 108th Congress faces the challenge of redesigning the nation's homeland security grant programs in light of the events of September 11, 2001, and the establishment of the Department of Homeland Security (DHS). In so doing, Congress must balance the needs of our state and local partners in their call for both additional resources and more flexibility with the nation's goals of attaining the highest levels of preparedness. At the same time, we need to design and build in appropriate accountability and targeting features to ensure that the funds provided have the best chance of enhancing preparedness. Funding increases for combating terrorism have been dramatic and reflect the high priority that the administration and Congress place on this mission. As the Council's report observes, continuing gaps in preparedness may prompt additional funds to be provided. The critical national goals underlying these funding increases bring a responsibility to ensure that this large investment of taxpayer dollars is wisely applied. We recently reported on some of the management challenges that could stem from increased funding and noted that these challenges—including grants management—could impede the implementation of national strategies if not effectively addressed. GAO has previously testified on the development of counter-terrorism programs for state and local governments that were similar and potentially duplicative.
Table 1 shows many of the different grant programs that can be used by first responders to address the nation's homeland security needs. To illustrate the level of fragmentation across homeland security programs, we have shown in table 1 the significant features of selected major assistance programs targeted to first responders. As the table shows, substantial differences exist in the types of recipients and the allocation methods for grants addressing similar purposes. For example, some grants go directly to local first responders such as firefighters, while at least one goes to state emergency management agencies and another directly to state fire marshals. The allocation methods differ as well—some are formula grants, while the others involve discretionary decisions by federal agency officials on a project basis. Grant requirements differ as well—DHS' Assistance to Firefighters Grant has a maintenance of effort (MOE) requirement, while the State Fire Training Systems Grant has no similar requirement. Table 2 shows that considerable potential overlap exists in the activities that these programs support—for example, funding for training is provided by most grants in the table, and several provide for all four types of needs. The fragmented delivery of federal assistance can complicate coordination and integration of services and planning at state and local levels. Homeland security is a complex mission requiring the coordinated participation of many federal, state, and local government entities as well as the private sector. As the national strategy issued by the administration last summer recognizes, preparing the nation to address the new threats from terrorism calls for partnerships of many disparate actors at many levels in our system. Within local areas, for example, the failure of local emergency communications systems to operate on an interoperable basis across neighboring jurisdictions reflects coordination problems within local regions. Local governments are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination on a regional basis. Our previous work suggests that the complex web of federal grants used to allocate federal aid to different players at the state and local level may continue to reinforce state and local fragmentation. Some have observed that federal grant restrictions constrain the flexibility state and local officials need to tailor multiple grants to address state and local needs and priorities. For example, some local officials have testified that rigid federal funding rules constrain their flexibility, so that funds cannot be used for activities that meet their needs. We have reported that overlap and fragmentation among homeland security assistance programs foster inefficiencies and concerns in first responder communities. State and local officials have repeatedly voiced frustration and confusion about the burdensome and inconsistent application processes among programs. We concluded that improved coordination at both the federal level and the state and local levels would be promoted by consolidating some of these first responder assistance programs. Using grants as a policy tool, the federal government can engage and involve other levels of government and the private sector in enhancing homeland security while still having a say in recipients' performance and accountability.
The structure and design of these grants will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Addressing the underlying fragmentation of grant programs remains a challenge for our federal system in the homeland security area. Several alternatives have been pursued in the past to overcome problems fostered by fragmentation in the federal aid structure. I will discuss three briefly here: block grants, performance partnerships, and streamlining planning and administrative requirements. Block grants are one way Congress has chosen to consolidate related programs. Block grants are currently used to deliver assistance in such areas as welfare reform, community development, social services, law enforcement, public health, and education. While such initiatives often involved the consolidation of categorical grants, block grants also typically devolve substantial authority for setting priorities to state or local governments. Under block grants, state and local officials bear the primary responsibility for monitoring and overseeing the planning, management, and implementation of activities financed with federal grant funds. Accordingly, block grant proposals generally call for Congress to make a fundamental decision about where power and authority to make decisions should rest in our federal system for a particular program area. While block grants devolve authority for decisions, they can be, and have been, designed to facilitate some accountability for national goals and objectives. Since federal funds are at stake, Congress typically wants to know how federal funds are spent and what state and local governments have accomplished. Indeed, the history of block grants suggests that the absence of national accountability and reporting for results can either undermine continued congressional support or prompt more prescriptive controls to ensure that national objectives are being achieved. Given the compelling national concerns and goals for homeland security, Congress may conclude that the traditional devolution of responsibility found in a pure block grant may not be the most appropriate approach. Congress might instead choose a hybrid approach, what we might call a "consolidated categorical" grant, which would consolidate a number of narrower categorical programs while retaining strong standards and accountability for discrete federal performance goals. State and local governments can be provided greater flexibility in using federal funds in exchange for more rigorous accountability for results. One example of this model involves what became known as "performance partnerships," exemplified by the initiative of the Environmental Protection Agency (EPA). Under this initiative, states may voluntarily enter Performance Partnership Agreements with EPA regional offices covering the major federal environmental grant programs. States can propose to use grants more flexibly by shifting federal funds across programs, but they are held accountable for discrete or negotiated measures of performance addressing EPA's national performance goals. This approach has allowed states to use federal funds more flexibly and support innovative projects while increasing the focus on results and effectiveness.
However, in 1999 we reported that the initiative had been hampered by an absence of baseline data against which environmental improvements could be measured and by the inherent difficulty in quantifying certain results and linking them to program activities. The challenge of developing performance partnerships for homeland security grants will be daunting because the administration has yet to develop clearly defined federal and national performance goals and measures. We have reported that the initiatives outlined in the National Strategy for Homeland Security often do not provide performance goals and measures to assess and improve preparedness at the federal or national levels. The strategy generally describes overarching objectives and priorities but not measurable outcomes. The absence of such measures and outcomes at the national level will undermine any effort to establish performance-based grant agreements with states. The Council on Foreign Relations report recommends establishing clearly defined national standards and guidelines in consultation with first responders and other state and local officials. Another alternative to overcome grant fragmentation is the simplification and streamlining of administrative and planning requirements. In June 2003, the Senate Governmental Affairs Committee approved a bill (S. 1245, The Homeland Security Grant Enhancement Act of 2003) intended to better coordinate and simplify homeland security grants. The bill would establish an interagency committee to coordinate and streamline homeland security grant programs by advising the Secretary of DHS on the multiple programs administered by federal agencies. The interagency committee would identify all redundant and duplicative requirements and report them to the appropriate committees of Congress and the agencies represented on the interagency committee. The bill also establishes a clearinghouse function within the Office for State and Local Government Coordination that would gather and disseminate information regarding successful state and local homeland security programs and practices. The bill seeks to streamline the application process for federal assistance and to rationalize and better coordinate the state and local planning requirements. The bill provides for a comprehensive state plan to address the broad range of emergency preparedness functions currently funded from separate programs with their own separate planning requirements. A statewide plan can be used as a tool to promote coordination among federal first responder programs that continue to exist as separate funding streams. One option could be to require recipients of federal grants for homeland security within each state to obtain review and comment by the central state homeland security agency to attest to consistency with the statewide plan. Whatever approach is chosen, it is important that grants be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as "supplantation," through a maintenance-of-effort requirement that recipients maintain their previous level of funding, and (3) strike a balance between accountability and flexibility. As Congress goes forward to consider how to design a grant system to promote a stronger federal, state, local, and regional partnership to improve homeland security, it faces some of the traditional dilemmas in federal grant design. One is targeting.
How do you concentrate funds in the places with the highest risks? A proclivity to spread money around, unfortunately, may provide less additional net protection while actually placing additional burdens on state and local governments. Given the significant needs and limited federal resources, it will be important to target funds to the areas of greatest need. The formula for the distribution of any new grant could be based on several considerations, including the relative threats and vulnerabilities faced by states and communities as well as the state or local government's capacity to respond to a disaster. The Council on Foreign Relations report recommends that Congress establish a system for allocating scarce resources based on addressing identified threats and vulnerabilities. The report goes on to say that the federal government should consider factors such as population and population density, vulnerability assessments, and the presence of critical infrastructure within each state as the basis for fund distribution. By comparing three of the grants listed in table 2, one can see differences in the way funds have been allocated thus far. For example, under the State Homeland Security Grant Program, allocations are determined by using a base amount of .75 percent of the total allocation to each state (including the District of Columbia and Puerto Rico) and .25 percent of the total to the territories. The balance of the funds goes to recipients on a population-share basis. In contrast, the Urban Area Security Initiative funds are distributed according to a formula, described by the Department of Homeland Security as a combination of weighted factors including current threat estimates, critical assets within the urban area, and population and population density—the results of which are ranked and used to calculate the proportional allocation of resources. For Byrne Grants, each participating state receives a base amount of $500,000 or .25 percent of the amount available for the program, whichever is greater, with the remaining funds allocated to each state based on the state's relative share of the total U.S. population.
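The arithmetic behind these allocation rules is simple enough to state directly. The Python sketch below implements an allocation in the style of the State Homeland Security Grant Program as described above; the recipient names and population figures are hypothetical placeholders, and the actual program applies the formula to the full set of states, the District of Columbia, Puerto Rico, and the territories.

    def shsgp_style_allocation(total, state_pops, territory_pops):
        """Base of 0.75% of the total to each state (incl. DC and Puerto Rico)
        and 0.25% to each territory; the balance is distributed to all
        recipients in proportion to population share."""
        base = {s: 0.0075 * total for s in state_pops}
        base.update({t: 0.0025 * total for t in territory_pops})
        remainder = total - sum(base.values())
        pops = {**state_pops, **territory_pops}
        total_pop = sum(pops.values())
        return {r: base[r] + remainder * pops[r] / total_pop for r in pops}

    # Hypothetical miniature example: three states and one territory.
    states = {"State A": 8_000_000, "State B": 2_000_000, "State C": 500_000}
    territories = {"Territory D": 150_000}
    for name, amount in shsgp_style_allocation(100_000_000, states, territories).items():
        print(f"{name}: ${amount:,.0f}")

    # By contrast, the Byrne Grant base would be max($500,000, 0.25% of the
    # total) per state, with the remainder again allocated by population share.

Even in this miniature example, the base amounts guarantee every recipient a floor regardless of population, which is one reason purely formula-driven programs tend to spread funds more widely than a threat- and vulnerability-based ranking such as the Urban Area Security Initiative's.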
A second dilemma in federal grant design involves preventing fiscal substitution, or supplantation. In earlier work, we found that substitution is to be expected in any grant and that, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. However, our work on the Temporary Assistance for Needy Families block grant found that a strong maintenance of effort provision can limit states' ability to supplant, since recipients can be penalized for not meeting a maintenance of effort requirement. It seems obvious to say that grant recipients should maintain the effort they were making prior to receiving the grant and use the grant to add to, rather than replace, their own contribution. However, since September 11, 2001, many local jurisdictions have taken the initiative to dramatically increase their own-source funding in an effort to enhance security. Should the federal grant system now penalize them by locking in their increased spending levels and at the same time reward state and local governments that have taken a "wait and see" attitude concerning enhancing security? This is one of the design dilemmas that Congress will need to address to ensure that scarce federal resources are in fact used to promote increased capability. A third challenge is sustainability. Local governments think of sustainability as keeping the federal spigot permanently turned on. They may argue that, without continued federal aid, the urgent needs they face will drive out the important needs of enhanced homeland security. However, from a broader, national perspective there is an expectation that the responsibility for sustaining homeland security would at least be shared by all levels of government, since state, local, and regional governments receive benefits from these grants in addition to the national benefit of improving homeland security. Several options can be considered to further shared fiscal responsibility. A state and local match could be considered, both to reflect the benefits received by state and local taxpayers from preparedness and to encourage the kind of discipline and responsibility that can be elicited when a government's own funds are at stake. An additional option—the "seed money" approach—could be to lower the federal match over time to encourage ownership, support, and long-term sustainability at the state and local level for funded activities. At their best, grants can stimulate state and local governments to enhance their preparedness to address the unique threats posed by terrorism. Ideally, grants should stimulate higher levels of preparedness and avoid simply subsidizing functions that are traditionally state or local responsibilities. The literature on intergovernmental management suggests that federal money can succeed in institutionalizing a commitment to aided goals and purposes over time within states and communities, as professional administrators and clients of these programs take root and gain influence within local political circles. Ultimately, the sustainability of government funding can be promoted by accountability provisions that provide clear and transparent information on results achieved from the intergovernmental partnership. At the federal level, experience with block grants shows that grant programs are sustainable if they are accompanied by sufficient performance and accountability information on national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be performance and results oriented to provide focus on national goals across state and local governments while providing for greater flexibility for those governments in deciding how best to meet those goals. Last summer, the Administration released a national strategy for homeland security that placed emphasis on security as a shared national responsibility involving close cooperation among all levels of government. We noted at the time that the national strategy's initiatives often did not provide a baseline set of performance goals and measures for homeland security. Then and now—over a year later—the nation does not have a comprehensive set of performance goals and measures against which to assess and upon which to improve prevention efforts, vulnerability reduction, and responsiveness to damage and recovery needs at all levels of government.
We still hold that, given the need for a highly integrated approach to the homeland security challenge, national performance goals and measures for strategy initiatives that involve both federal and nonfederal actors may best be developed in a collaborative way involving all levels of government and the private sector. At this point, there are few national or federal performance standards that can be defined, given the differences among states and the lack of understanding of what levels of preparedness are appropriate for a jurisdiction's risk factors. The Council on Foreign Relations recommended that national standards be established by federal agencies in such areas as training, communications, and response equipment, in consultation with intergovernmental partners. Communications is an example of an area for which standards have not yet been developed, but various emergency managers and other first responders have emphasized that standards are needed. State and local government officials often report that there are deficiencies in their communications capabilities, including the lack of interoperable systems. The national strategy recognizes that it is crucial for response personnel to have and use equipment, systems, and procedures that allow them to communicate. Therefore, the strategy calls for a national communication plan to establish protocols (who needs to talk to whom), processes, and national standards for technology acquisition. Just as the federal government needs to rationalize its grant system for first responders, state and local governments are also challenged to streamline and better coordinate their efforts. As pointed out in the recent report from the Century Foundation, the nation's homeland defense will ultimately be critically dependent on the ability of state and local governments to overcome barriers to coordination and integration. The scale of the homeland security threat spills over the conventional boundaries of political jurisdictions and agencies. Effective response calls on local governments to reach across boundaries to obtain support and cooperation throughout an entire region or state. Promoting partnerships among key players within each state, and even across states, is vital to addressing the challenge. State and local governments need to work together to reduce and eliminate barriers to achieving this coordination and regional integration. The federal government is, of course, a key player in promoting effective preparedness and can offer state and local governments assistance beyond grant funds in such areas as risk management and intelligence sharing. The Office for State and Local Government Coordination has been established within DHS to facilitate close coordination with state and local first responders, emergency services, and governments. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats in their communities. Local officials have emphasized the importance of regional coordination. Regional resources, such as equipment and expertise, are essential because of proximity, which allows for quick deployment, and experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality's supply of trained responders. Some cities have spread training and equipment to neighboring municipal areas so that their mutual aid partners can help.
We found in our work last year that, to facilitate emergency planning and coordination among cities in metropolitan areas, officials have joined together to create task forces, terrorism working groups, advisory committees, and mayors' caucuses. Cities and counties have used mutual aid agreements to share emergency resources in their metropolitan areas. These agreements may cover fire, police, emergency medical services, and hospitals, and they may be formal or informal. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help. Forging regional arrangements for coordination is not an easy process at the local level. The federal government may be able to provide incentives through the grant system to encourage regional planning and coordination for homeland security. Transportation planning offers one potential model for federal influence that could be considered. Under federal law, Metropolitan Planning Organizations are established to develop regionally based transportation plans from which, generally, projects that are to be federally funded must be selected. Improving the partnership among federal and nonfederal officials is vital to achieving important national goals. The task facing the nation is daunting, and federal grants will be a central vehicle to improve and sustain preparedness in communities throughout the nation. While funding increases for combating terrorism have been dramatic, the Council's report reflects concerns that many have about the adequacy of current grant programs to address homeland security needs. Ultimately, the "bottom line" question is: What impact will the grant system have in protecting the nation and its communities against terrorism? At this time, it is difficult to know, since we do not have clearly defined national standards or criteria defining existing or desired levels of preparedness across the country. Our grant structure is not well suited to provide assurance that scarce federal funds are in fact enhancing the nation's preparedness in the places most at risk. There is a fundamental need to rethink the structure and design of assistance programs, to streamline and simplify programs, improve targeting, and enhance accountability for results. Federal, state, and local governments alike have a stake in improving the grant system to reduce burden and tensions and promote the level of security that can only be achieved through effective partnerships. The sustainability of and continued support for homeland security initiatives will rest in no small part on our ability to demonstrate to the public that scarce public funds are in fact improving security in the most effective and efficient manner. This concludes my prepared statement. I would be pleased to answer any questions you or the members of the subcommittee may have at this time.
The challenges posed in strengthening homeland security exceed the capacity and authority of any one level of government. Protecting the nation calls for a truly integrated approach that brings together the resources of all levels of government. The Council on Foreign Relations study--Emergency Responders: Drastically Underfunded, Dangerously Unprepared--states that in the aftermath of the September 11 attacks, the United States must prepare based on the assumption that terrorists will strike again. Although it acknowledges that the nation's preparedness has improved, the Council's report highlights gaps in preparedness, including shortfalls in personnel, equipment, communications, and other critical capabilities. Given the many needs and high stakes, it is critical that the design of federal grants be geared to fund the highest-priority projects with the greatest potential impact for improving homeland security. This testimony discusses possible ways in which the grant system for first responders might be reformed. The federal grant system for first responders is highly fragmented, which can complicate coordination and integration of services and planning at state and local levels. In light of the events of September 11, 2001, and the establishment of the Department of Homeland Security, the 108th Congress faces the challenge of redesigning the homeland security grant system. In so doing, Congress must balance the needs of our state and local partners, who call for both additional resources and more flexibility, with the nation's goal of attaining the highest levels of preparedness. Given scarce federal resources, appropriate accountability and targeting features need to be designed into grants to ensure that the funds provided have the best chance of enhancing preparedness. Addressing the underlying fragmentation of grant programs remains a challenge for our federal system in the homeland security area. Several alternatives might be employed to overcome the problems fostered by fragmentation in the federal aid structure, including consolidating grant programs through block grants, establishing performance partnerships, and streamlining planning and administrative requirements. Grant programs might be consolidated using a block grant approach, in which state and local officials bear the primary responsibility for monitoring and overseeing the planning, management, and implementation of activities financed with federal grant funds. While block grants devolve authority for decisions, they can be designed to facilitate accountability for national goals and objectives. Congress could also choose a more hybrid approach that would consolidate a number of narrowly focused categorical programs while retaining strong standards and accountability for discrete federal performance goals. One example of this model involves establishing performance partnerships, exemplified by the Environmental Protection Agency initiative in which states may voluntarily enter into performance agreements with the agency's regional offices covering the major federal environmental grant programs. Another option would be to simplify and streamline planning and administrative requirements for the grant programs. Whatever approach is chosen, it is important that grants be designed to target funds to states and localities with the greatest need, discourage the replacement of state and local funds with federal funds, and strike the appropriate balance between accountability and flexibility.
The Army National Guard of the United States and the Air National Guard of the United States are two components of the armed forces Selected Reserve. The National Guard Bureau is the federal entity responsible for the administration of both the Army National Guard and the Air National Guard. The Army National Guard, which is authorized 350,000 soldiers, makes up more than one-half of the Army’s ground combat forces and one-third of its support forces (e.g., military police, transportation units). Army National Guard units are located at more than 3,000 armories and bases in all 50 states and 4 U.S. territories. Traditionally, the majority of Guard members are employed on a part-time basis, typically training 1 weekend per month and 2 weeks per year. However, after September 11, 2001, the President authorized reservists to be activated for up to 2 years. As of July 2005, more than 70,000 Army National Guard personnel were activated under this authority to support ongoing operations. The Guard also employs some full-time personnel who assist unit commanders in administrative, training, and maintenance tasks. Army National Guard personnel may be ordered to perform duty under three general statutory frameworks: Title 10 or Title 32 of the United States Code or, pursuant to state law, in a state active duty status. In a Title 10 status, Army National Guard personnel are federally funded and under federal command and control. Personnel may enter Title 10 status by being ordered to active duty, either voluntarily or, under appropriate circumstances, involuntarily (i.e., mobilization). Personnel in Title 32 status are federally funded but under state control. Title 32 is the status in which National Guard personnel typically perform training for their federal mission. Personnel performing state active duty are state-funded and under state command and control. Under state law, the governor may order National Guard personnel to perform state active duty to respond to emergencies and civil disturbances and for other reasons authorized by state law. While the Army National Guard performs both federal and state missions, the Guard is organized, trained, and equipped for its federal missions, and these take priority over state missions. The Global War on Terrorism, a federal mission, is a comprehensive effort to defeat terrorism and protect and defend the homeland and includes military operations such as Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom. As we reported in our November 2004 report on the National Guard, the Army National Guard’s involvement in federal operations has increased substantially since the September 11 terrorist attacks, and Army National Guard members have participated in overseas warfighting operations in Afghanistan and Iraq, peacekeeping operations in Bosnia and Kosovo, and homeland missions, such as guarding Air Force bases. Figure 1 shows that while the number of activated Army National Guard personnel has declined since its peak in December 2004 and January 2005, the Guard continues to provide a substantial number of personnel to support current operations. As of July 2005, about 35,500 of the 113,000 soldiers serving in Operation Iraqi Freedom, or nearly one-third, were Army National Guard members. In a June 30, 2005, testimony before the Senate Armed Services Committee, the Army’s Chief of Staff said that the Army National Guard’s participation in overseas operations is expected to decrease somewhat in the near future.
Although the Army National Guard is expected to continue its participation in ongoing operations, decisions as to the level of participation have not been made. The Department of the Army is responsible for equipping the Army National Guard. DOD policy requires that equipment be provided to units according to their planned wartime mission, regardless of their component. However, based on the Army’s funding priorities, the most modern equipment is usually provided to units that would deploy first. Later-deploying units, such as most Army National Guard units, are equipped with older items from the Army’s inventory as active forces receive newer and more modern equipment. Army National Guard units are responsible for conducting some maintenance of their equipment. While deploying Army National Guard units have had priority for getting the equipment they needed, readying these forces has degraded the equipment inventory of the Guard’s nondeployed units, and equipment shortages threaten the Guard’s ability to prepare forces for future deployments. Among nondeployed National Guard units, the amount of essential warfighting equipment on hand has continued to decrease since we last reported on the Army National Guard in 2004. Equipment shortages have developed because most Army National Guard units are still structured with less equipment than they need to deploy. To ready deploying units for overseas missions, the Guard has had to transfer large numbers of equipment items from nondeployed units, a practice that has left nondeployed units with increasing equipment shortages and made it difficult to prepare units for future missions and maintain readiness for unplanned contingencies. Moreover, the equipment requirements for deploying Army National Guard units have evolved as the nature of current operations has changed. This has meant that, in some cases, the Army National Guard has had little time to identify sources of equipment and transfer needed items to deploying units. The Army is adapting some of its processes to help units address the evolving equipment requirements. Most Army National Guard units mobilized for recent overseas operations had equipment shortages that had to be filled so that the units could meet the combatant commander’s equipment requirements for their mission. These shortages exist because the Army, following DOD planning guidance, has historically equipped all Army units, including the Army National Guard, according to a tiered resourcing strategy. Under tiered resourcing, units expected to deploy overseas early in a conflict receive first priority for equipment, and most Army National Guard units were expected to deploy after the active component units to serve as follow-on forces. The Army therefore accepted some operational risk by providing lower-priority Army National Guard units with less equipment than they would need for their mission, under the assumption that there would be time to provide additional equipment to these units before they deployed. For example, Army National Guard enhanced separate brigades are generally supplied with about 75 percent of the equipment they require for their warfighting missions, and divisional units, which comprise the majority of the Guard’s combat forces, are supplied with about 65 percent.
In addition to being given less equipment, most Army National Guard units did not have priority for the newest, most modern equipment, so much of the Guard’s equipment is older than the active Army’s and is not always compatible with more modern items. However, for recent operations, combatant commanders have required Army National Guard units to deploy with 90 to 100 percent of the equipment they are expected to need and with equipment that is compatible with that of active Army units. As an increasing number of Army National Guard forces have been needed to support current operations, the Army National Guard has supplied the equipment its deploying units need to meet combatant commander requirements by transferring equipment from within the Army National Guard. The Army National Guard first tries to identify the needed equipment within the same state as the deploying unit. If the equipment cannot be found within the state, the National Guard Bureau requests the equipment from Army National Guard units across the United States. If the equipment is not available in the Army National Guard, the Army National Guard notifies the Army, and the Army takes over the task of providing the equipment to the mobilized unit. For example, although the 30th Brigade Combat Team needed about 8,810 night vision goggles to deploy, it had only about 40 percent of its requirement on hand when it was alerted to prepare to deploy, so the Army National Guard had to identify and transfer about 5,272 pairs of goggles to fully equip the unit. In another case, the Army tasked the National Guard to convert 40 nonmilitary police units, including field artillery companies, to security units capable of performing selected military police missions in Iraq during 2004 and 2005. While a military police company typically has 47 humvees in its inventory, field artillery companies have only about 3 humvees that are suitable for this new mission. Therefore, the converted units had to obtain armored humvees from other units already in Iraq because the Army National Guard had depleted its inventory of armored humvees. As current operations have continued, the pool of equipment from which the Army National Guard can draw has been reduced because so many items have been transferred to deploying units or left overseas. Shortages of some equipment items have forced the Army National Guard to take measures to provide training equipment for deploying units that have further exacerbated existing shortages in nondeployed units. For example, because the Army National Guard’s supply of armored humvees was depleted, the Army directed the Army National Guard to transfer more than 500 humvees from nondeployed Guard units to create training sets for units to use when preparing for deployment. Significant numbers of equipment transfers have persisted as operations overseas have continued. We previously reported that, as of June 2004, the Army National Guard had transferred more than 35,000 pieces of equipment to ready units for recent operations. By July 2005, the number of equipment items transferred among Army National Guard units had grown to more than 101,000. As a result of these transfers, the equipment readiness of nondeployed Army National Guard units has declined. As figure 2 shows, the percentage of nondeployed units that reported having the minimum amount of equipment they would need to deploy dropped from 87 percent in October 2002 to 59 percent in May 2005.
However, this estimate includes units that have older, less modern equipment, referred to as substitute equipment. While these substitute items are useful for training purposes, commanders may not allow them in the theater of operations because they may not be compatible with the equipment other units are using and cannot be sustained logistically in theater. In addition, this estimate includes units whose equipment is undergoing maintenance after returning from deployment or was left overseas, so these items are not readily available for deployment. The National Guard Bureau estimates that when substitute items, equipment undergoing maintenance, and equipment left overseas for follow-on forces are subtracted, its nondeployed units had available only about 34 percent of essential warfighting equipment as of July 2005. Transfers of equipment to deploying units have depleted nondeployed units’ inventories of many key items. Table 1 shows selected items needed for current mobilization for which inventory levels in nondeployed Guard units have fallen below 20 percent of authorized levels. As of July 2005, the Army National Guard reported that equipment transfers had reduced its inventory of more than 220 items to less than 5 percent of the required amount, or to a quantity of fewer than 5 items. Among these 220 high-demand items are generators, trucks, and radios. While the Army can supply deploying forces with additional equipment after they are mobilized, nondeployed units will be challenged to maintain readiness for future missions because they do not have the equipment to train with or to use for other contingencies. The effect of equipment shortages on nondeployed units’ ability to perform homeland defense missions is not known because, as we reported in 2004, DOD has not developed requirements or preparedness standards and measures for the homeland missions in which the Army National Guard is expected to participate. However, as previously reported, some of these items, such as humvees, night vision goggles, and chemical protective suits, are useful for the Guard’s domestic missions, such as responding to potential terrorist threats. As current military operations have evolved, equipment requirements for the Global War on Terrorism have continued to change. This has challenged Guard units preparing to deploy because equipment requirements are not defined and communicated to them until close to their deployment dates. Equipment that was not considered essential for some units’ expected missions has become important for ongoing operations, and units have been required to have equipment that is newer than or different from that on which they have trained. For example, the 30th Brigade Combat Team from North Carolina, which deployed in the spring of 2004, and the 48th Brigade Combat Team from Georgia, which deployed in 2005, were directed to deploy as motorized brigade combat teams with humvees instead of the heavy-tracked equipment, such as Bradley fighting vehicles and tanks, with which they had trained for their expected missions. Overall, the combatant commander required the 30th Brigade to deploy to Operation Iraqi Freedom with more than 35 types of items that were previously not authorized for the unit, including different radios and weapons.
Due to changing conditions in theater and a desire to tailor a unit’s equipment as closely as possible to its expected mission, the Army has continued to modify equipment requirements after units are alerted. As a result, requirements have not been communicated to some Army National Guard units in time for the units to be equipped as efficiently as possible for current operations or to be given ample time for training. In some instances, Army National Guard units have not known exactly what equipment they would require to deploy, and what they could expect to receive in theater, until close to their deployment dates, which has made it more difficult for Army National Guard officials to gather the equipment deploying units need to fill shortages. For example, the 48th Brigade Combat Team, which was preparing for deployment in May 2005, had still not received a complete list of the equipment it would need at the time of our visit in April 2005. Because officials did not know exactly what they would need to take with them overseas, the brigade packed 180 different vehicles for shipment to theater. When officials learned that this equipment was already available in theater, the vehicles had to be shipped back to the brigade’s mobilization station at Fort Stewart, Georgia. In some cases, delays caused by changing equipment requirements reduced the amount of time units had to train with their new equipment. For example, the 30th Brigade did not have a chemical agent identification set to train with until its final exercise before deploying, and it did not have access to a Blue Force Tracker, a digital communications system that allows commanders to track friendly forces across the battlefield in real time, until the unit was in theater. In some cases, the 30th Brigade did not receive items until they could be transferred from nondeployed units or provided in theater. For example, the unit received the 4,000 ceramic body armor inserts needed to protect soldiers from small arms fire upon arrival in Kuwait. According to Army officials, in such instances units may undergo training upon arrival in the theater of operations to acquaint them with new equipment. However, we did not evaluate the adequacy of the training units received in the theater of operations. To address critical equipment shortages and the evolving equipment requirements of current operations, the Army has adapted its equipping process in two ways. First, rather than having units bring all their equipment to the theater of operations and take it back to their home stations when they return, the Army now requires units, in both the active and reserve components, to leave certain essential equipment that is in short supply in theater for follow-on units to use. This is intended to reduce the amount of equipment that has to be transported from the United States to theater, to better enable units to meet their deployment dates, and to maintain stocks of essential equipment in theater where it is most needed. While this equipping approach has helped meet current operational needs, it has continued the cycle of reducing the pool of equipment available to nondeployed forces for unplanned contingencies and for training. Second, the Army has instituted a process, known as a predeployment site survey, that allows large units preparing to deploy to send a team to the mission area to determine equipment needs.
The team generates a list of equipment, known as an operational needs statement, that the unit will need in theater but was not previously authorized and will need to obtain before deployment. Once the Army has approved the items, the unit can obtain them through transfers from other units or through procurement. Over the course of current operations, the Army has improved the operational needs statement process by pre-approving packages of equipment that are in high demand for current operations so that deploying units do not have to request these items separately. More than 160 items are now pre-approved, such as interceptor body armor; the Javelin, a medium antitank weapon system; kits to add armor to humvees; and night vision goggles. The effect of these improvements is evident in our case study units: in 2003, the 30th Brigade Combat Team prepared about 35 lists of additional equipment it would need to deploy in January 2004, but by the time the 48th Brigade was preparing for deployment in 2005, changes to the process meant the unit prepared only one operational needs statement. In addition, an existing Army program, the Rapid Fielding Initiative, has provided individual equipment to soldiers, including those in the Army National Guard, more quickly than the standard acquisition process by fielding commercial off-the-shelf technology. Through this initiative, the Army provides 49 items, such as body armor, helmets, hydration systems, goggles, kneepads, and elbow pads, to units preparing to deploy at their home stations and in theater. Filling shortages in deploying units has left nondeployed forces with worsening equipment shortages and has hampered their ability to train for future missions. Growing shortages make it unclear whether the Guard will be able to maintain acceptable levels of equipment readiness for missions overseas or at home. The Army National Guard estimates that, since 2003, it has left more than 64,000 equipment items valued at over $1.2 billion overseas to support continuing operations. However, the Army lacks a full accounting of this equipment and has not prepared plans to replace it as required under DOD policy. As a result, the Guard is challenged in its ability to prepare and train for future missions. The policy reflected in DOD Directive 1225.6, Equipping the Reserve Forces, April 7, 2005, requires a replacement plan for reserve component equipment transferred to the active component for more than 90 days. According to Army officials, the Army did not initially track the Guard’s equipment or prepare replacement plans in the early phases of the war because the practice was intended to be a short-term measure and there were other priorities. In addition, the Army did not have a centralized process for developing plans to replace the equipment Army National Guard units left overseas, and transfers of equipment between units were documented only at the unit level in unit property records. However, as operations have continued, the amount of Guard equipment retained in theater has increased, which has further exacerbated the shortages in nondeployed Army National Guard units. For example, when the North Carolina 30th Brigade Combat Team returned from its deployment to Iraq in 2005, it left behind 229 humvees, about 73 percent of its predeployment inventory of those vehicles, for other units to use. Similarly, according to Army National Guard officials, three Illinois Army National Guard units were required to leave almost all of their humvees, about 130, in Iraq when they returned from deployment.
As a result, the units could not conduct training to maintain the proficiency they acquired while overseas or to train new recruits. In all, the National Guard reports that 14 military police companies left over 600 humvees and other armored trucks, which are expected to remain in theater for the duration of operations. While the Army has now instituted processes to account for certain high-demand equipment items that are being left in theater for the duration of the conflict, and expects replacement plans for this equipment to be developed by August 2005, it does not appear that these replacement plans will account for all items transferred to the active component because the Army has not been tracking all Guard equipment left in theater in a centralized manner. In June 2004, six months after the first Army National Guard units left equipment overseas when they returned from deployment, the Army tasked the Army Materiel Command with overseeing equipment retained in theater. According to Army and National Guard officials, however, the Army Materiel Command developed plans to track only certain high-demand equipment items that are in short supply, such as armored humvees, and other items designated to remain in theater for the duration of the conflict. Guard units have also left behind equipment that was not designated to stay for the duration of the conflict but may remain in theater for up to 3 years, such as cargo trucks, rough terrain forklifts, and palletized load trucks, which the Army Materiel Command does not plan to track. Of the more than 64,000 equipment items that the Army National Guard estimates Guard units have left behind, the National Guard Bureau estimates that, as of July 2005, the Army Materiel Command was tracking only about 45 percent. Given the lack of tracking of all Guard equipment left in theater, it is not clear how the Army will develop replacement plans for these items as required by DOD policy. In May 2005, the Assistant Secretary of Defense for Reserve Affairs requested that the Army submit a replacement plan for all Army National Guard equipment retained in theater by June 17, 2005. The Assistant Secretary noted that while the exact amount of equipment transferred between the reserve and active components is unknown, the overall magnitude of these transfers has been significant and is an area of concern. The Assistant Secretary of Defense for Reserve Affairs subsequently extended the date the replacement plans were due to August 15, 2005. According to Army officials, the equipment tracked by individual units may eventually be returned to the Guard. However, Army and Army National Guard officials said that even if the equipment is eventually returned, its condition is likely to be poor given its heavy use during current operations, and some of it will likely need to be replaced. The National Guard estimates it will cost at least $1.2 billion to replace the equipment it has left in Iraq if that equipment is not returned or is not usable. Until the Army develops plans to replace the equipment, including identifying timetables and funding sources, the National Guard will continue to face critical equipment shortages, which reduce its readiness for future missions. Army National Guard units are scheduled to convert to new designs within the Army’s modular force by 2008, but they are expected to convert with the equipment they have on hand and will lack some equipment for these designs until at least 2011.
However, the Army is modifying the designs it tested and found to be as effective as current brigades, to include the equipment it can reasonably expect to have based on current funding plans. As a result, Army National Guard units will continue to lack equipment items, will have to use less modern equipment to fill gaps until at least 2011, and will not be equipped comparably with their active duty counterparts. While the Army estimated in June 2005 that it would cost about $15.6 billion to convert most of the Guard’s units, this estimate did not include all expected costs, and the Army was unable to provide detailed information to support the estimate. Further, it has not developed detailed equipping plans that specify the Guard’s equipment requirements as it progresses through the new rotation cycle used to provide ready forces for ongoing operations. The Army is quickly implementing its initiatives to transform its forces into modular units and a rotational cycle of deployment without detailed plans and cost estimates because it views these initiatives as critical to sustaining current operations. In the short term, units nearing deployment will continue to receive priority for equipment, which may delay when units receive the equipment needed for modular conversions. In 2004 and 2005, the Army published and subsequently updated the Army Campaign Plan to establish the broad goals, assumptions, and time frames for converting to the modular force and implementing the rotational force model. However, the plan does not include detailed equipping plans, cost estimates, or the resources needed for implementing the modular and rotational deployment initiatives. Our analysis of best practices in strategic planning has shown that detailed plans, which describe how objectives will be achieved and identify resources, facilitate success and help avoid unintended consequences, such as differing assumptions among key leaders in DOD and Congress about priorities or program performance. Until equipping requirements for implementing the modular designs and the rotational model are specified, costs are better defined, and funding is identified, the Guard faces risks as it prepares to implement the Army’s restructuring while supporting the high pace of operations at home and overseas. The Army has recognized that it needs to become more flexible and capable of achieving a wide range of missions. To this end, in 2004, the Army began to reorganize its forces from a structure organized around divisions to one based on standardized, modular brigades that can be tailored to meet the specific needs of the combatant commander. The Army is in the process of developing and approving detailed designs, including equipment requirements, for active and reserve combat units, support units, and warfighting headquarters so that the first Guard units can begin their scheduled conversions in September 2005. Among the goals of the new structure are maximizing the flexibility and responsiveness of the force by standardizing designs and equipment requirements for both active and reserve units and maintaining reserve units at a higher level of readiness than in the past. However, under current plans, Guard units will continue to be equipped with items that may be older than their active counterparts’ and less capable than the new modular unit designs require.
The Army’s initial estimate for converting Guard units to modular designs is about $15.6 billion through 2011, but this estimate is incomplete because it does not include the costs of converting all units to the new structure or the full costs of equipping them to the design the Army tested and determined was as effective as current brigades. Moreover, the Army has not developed plans to equip Guard units to the tested modular unit design and instead plans to equip them to a less modern design. Without a detailed equipping plan that identifies funding priorities over time, the Army National Guard is likely to continue to face challenges in its ability to train and maintain ready forces in the future. The Army expects that the new modular brigades, which will include about 3,000 to 4,000 personnel, will be as capable as the current brigades of between 3,000 and 5,000 personnel through the use of enhanced military intelligence capability, the introduction of key technology enablers, such as weapons and communications systems, and the inclusion of support capabilities in the brigade itself instead of at a higher echelon of command. The Army tested the new modular brigade designs and found that they were as effective as current brigades. However, the Army has modified the tested designs based on the equipment it can reasonably expect to provide to units undergoing conversion, considering its current inventory of equipment, planned procurement pipelines, and other factors, such as expected funding. At the time of this report, the Army had not tested the modified designs to determine whether they are as capable as the current brigades or the tested design. The Army plans to equip modular Guard units to the modified design by 2011. In the meantime, modular Guard units are expected to continue the practice of using approved substitute equipment and will initially lack some of the key enablers, such as communications systems, that are the basis for the improved effectiveness of modular units. As of June 2005, the Army had approved modified designs for the 25 Army National Guard brigade combat teams and 25 support brigades scheduled to convert to the modular structure between 2005 and 2007 and for all eight warfighting headquarters converting between 2005 and 2008. Under current plans, all Army National Guard units will be converted to the modular organizational structure by 2008, with the exception of three support brigades, which will be converted in 2011. The Army expects to complete modular designs for the remaining 9 brigade combat teams and 15 support brigades by September 2005. The Army had originally planned to convert Guard units on a slower schedule, by 2010, but at the request of the Army National Guard, it accelerated the plan so that Guard units would share the new standardized organizational designs with the active component at least 2 years earlier, avoid training soldiers for the previous skill mix, and better facilitate recruiting and retention efforts. However, our work indicates that accelerated modular conversions will exacerbate near-term equipment shortfalls. The Army’s ability to equip Guard units to the modified design in the short term faces significant shortfalls for three key reasons. First, according to current plans, the units are expected to convert to their new designs with the equipment they have on hand.
However, because of existing shortages and the large number of equipment items that deployed units left in Iraq or that need repair or replacement due to heavy use, units will not have the equipment needed to reach even the modified design. For example, converted Guard units expect initially to be without some equipment items, such as unmanned aerial vehicles, single-channel ground and airborne radio systems, and Javelin antitank missiles, that provide the basis for the improved capability of the new brigades. Second, the Army has not planned funding to provide equipment to the additional Guard units converting to the modular structure on the accelerated schedule. Although most Guard units are scheduled to be reorganized by 2008, they are expected to receive equipment for their new designs on a slower schedule, and in some cases they are not expected to receive their equipment until 2 to 3 years after they reorganize. The lack of detailed plans for equipping Army National Guard units makes it difficult to determine how the Army intends to transition Guard units effectively from the old to the new organizational structure. Finally, the Army’s cost estimates for converting Guard units to the modular structure are incomplete and likely to grow. The Army’s current cost estimate for converting all its active and reserve units to the modular force is $48 billion, a 71 percent increase from its initial rough order of magnitude estimate of $28 billion made in 2004. Of the $48 billion, the Army estimated in June 2005 that Army National Guard modular conversions would cost about $15.6 billion. This estimate included costs to convert all eight of the Guard’s warfighting headquarters and 33 of the Guard’s 34 combat units between 2005 and 2011. It also includes procurement of some high-demand equipment, such as tactical unmanned aerial vehicles, humvees, and antitank guided-missile systems. During our work, we obtained summary information on the types of costs and key assumptions reflected in the Army’s estimate; however, we were unable to fully evaluate the estimate because the Army did not have detailed supporting information. Our work highlighted several limitations of the Army’s cost estimate for Army National Guard modular force conversions. First, the estimate was based on a less modern design than both the modified design that the Army plans to use in the near term and the tested design it intends to evolve to over time. The estimate assumes that Guard units will continue to use substitute equipment items that may be older and less capable than those of active units and does not include costs for all the technology enablers that are expected to provide additional capability for modular units. As a result, the estimate does not include costs for all the equipment Guard units would require to reach the capabilities of the tested modular brigade design. Second, the estimate does not include costs for 10 of the Guard’s support units, nor does it include military construction costs associated with the Guard’s 40 support units. According to the Army National Guard, military construction costs for converted support units are expected to approach the $1.4 billion in military construction costs already included for the Guard’s warfighting headquarters and combat units. Furthermore, the current cost estimate assumes that Guard equipment inventories will be at prewar levels and available for modular conversions.
However, this may not be a reasonable assumption because, as discussed previously, Army National Guard units have left large amounts of equipment overseas, some of which will be retained indefinitely, and the Army has not provided plans for its replacement. Further, the Army has currently identified funding sources for only about 25 percent ($3.9 billion) of the current estimate: $3.1 billion programmed in the fiscal year 2006-2011 future years defense program and $0.8 billion expected from fiscal year 2005 supplemental funding. Approval for funding the remaining $11.7 billion is pending within DOD, and equipping priorities and the amount designated for equipment have not been decided. In the long term, according to the Army, the intent is to equip all active and reserve component units to the tested design over time. However, under current plans it will take until at least 2011 for Army National Guard units to receive the equipment they will need for the modified designs, which are still less modern than the design the Army tested and found as effective as current brigades, and the pace of operations may further delay equipping Guard units. Moreover, the Army does not have detailed plans or cost estimates that identify the funding required to equip Guard units to the tested design. Without detailed plans for when Guard units will get the equipment they need for the tested design, it is unclear when the Army National Guard will achieve the enhanced capabilities the Army needs to support ongoing operations. Further, without more complete equipment requirements and cost estimates, DOD and Congress will not have all the information they need to evaluate funding requests for the Army National Guard’s transition to the modular force. The Army’s initiative to transform into a rotational force, which is intended to provide units with a predictable cycle of increasing readiness for potential mobilization once every 6 years, involves a major change in the way the Army planned to use its reserve forces and has implications for the amount and types of equipment that Army National Guard units will need over time. Historically, Army National Guard units have been provided only a portion of the equipment they needed to train for their wartime missions because they were generally expected to deploy after active units. However, current military operations have called for the Army National Guard to supply forces to meet a continuing demand for fully equipped units, a demand the Army National Guard met through transfers of equipment to deploying units, which undermined the readiness of nondeployed units. Under the rotational force concept, the Army would provide increasing amounts of equipment to units as they move through training phases and near readiness for potential deployment, so they would be ready to respond quickly with fully equipped forces if needed. However, the Army has not yet finalized equipping requirements for Army National Guard units as they progress through the rotational cycle. In addition, it is not clear how the equipment needed to support units in the new rotational cycle will affect the types and quantities of items available for modular conversions and the pace of the Army National Guard’s transformation.
Without firm decisions as to the requirements for both the new modular structure and the rotational force model, and a plan that integrates those requirements, the Army and Army National Guard are not in the best position to develop complete cost estimates or to determine whether the modular and rotational initiatives are working together to reach the goal of improving Army National Guard readiness. While the Army has developed a general proposal to equip units according to the readiness requirements of each phase of the rotational force model, it has not yet detailed the types and quantities of items required in each phase. Under this proposal, the Army National Guard will have three types of equipment sets: a baseline set, a training set, and a deployment set. The baseline set would vary by unit type and assigned mission, and the equipment it includes could be significantly reduced from the amount called for in the unit design, but plans call for it to provide at least the equipment Guard units need for domestic missions. Training sets would include more of the equipment units will need to be ready for deployment, but units would share the equipment, which would be located at training sites throughout the country, so it would not be readily available for units’ state or homeland missions. The deployment set would include all equipment needed for deployment, including theater-specific equipment, items provided through operational needs statements, and equipment from Army prepositioned stock. At the time of this report, the Army was still developing the proposals for what would be included in the three equipment sets and planned to publish the final requirements in December 2005. Army resourcing policy gives higher priority to units engaged in operations or preparing to deploy than to those undergoing modular conversions. As a result, the requirements of ongoing operations will continue to drain the Army National Guard’s equipment resources and affect the pace at which equipment will be available for nondeployed units to transform to their new designs. At present, it is not clear how the equipment requirements associated with supporting deployments under the new rotational readiness cycle will affect the types and quantities of equipment available to convert the Army National Guard to a modular force. Until the near-term requirements for the rotational force and the long-term requirements for a modular force are fully defined, the Army and Army National Guard will not be in a position to prioritize funding to achieve readiness goals in the near and long term. Further, although Army leaders have made it a priority to ensure that Army National Guard units have the equipment they need to continue to perform their domestic missions, it is not possible to assess whether units will have the equipment they need until unit designs and training set configurations are finalized and homeland defense equipment requirements are known. Evolving equipment requirements for the Global War on Terrorism have challenged the Army National Guard in equipping its units for deployment while trying to maintain the readiness of its nondeployed force for training and future missions.
While strategies such as transferring needed equipment from nondeploying units to deploying units, completing operational needs statements, and leaving equipment overseas when Guard units return home have helped to equip deploying units, these strategies may not be sustainable in the long term, especially as the Guard’s equipment inventories continue to diminish. In the meantime, as the Army National Guard’s equipment stocks are depleted, the risks to its ability to perform future overseas and domestic missions increase. The Army’s lack of accountability over the Guard’s equipment stocks retained in theater has created a situation in which deploying Guard units face considerable uncertainty about what equipment they need to bring overseas and what equipment they will have for training when they return from deployment. DOD Directive 1225.6 requires a plan to replace reserve component equipment that is transferred to the active component, but the Army has not prepared these plans. Without a replacement plan, the Army National Guard faces depleted stocks of some key equipment items needed to maintain readiness and is unable to plan for how it will equip the force for future missions. Supporting ongoing operations will continue to strain Army National Guard equipment inventories, which will likely delay the pace of its transformation to a modular force. Further, current modular conversion plans will not equip Guard units even to the less modern modified design until at least 2011, and there are no plans to equip the Guard to the design the Army found as capable as current brigades. As a result, Guard units will continue to face equipment shortages and will have to use older equipment than their active counterparts. If units are not comparably equipped, the Army National Guard will have to continue its current practice of transferring equipment to fill the shortfalls in deploying units, thereby undermining the readiness of nondeployed forces. With lower readiness of Guard forces, the nation faces increased risk to future overseas operations, unplanned contingencies, and the homeland missions the Guard may be called upon to support. We recommend that the Secretary of Defense direct the Secretary of the Army to develop and submit to Congress a plan and funding strategy that addresses the equipment needs of the Army National Guard for the Global War on Terrorism and addresses how the Army will transition from short-term equipping measures to long-term equipping solutions. This plan should address the measures the Army will take to ensure it complies with existing DOD directives to safeguard reserve component equipment readiness and should provide a plan to replace depleted stocks resulting from equipment transferred to the active Army, so that the Guard can plan for equipping the force for future missions. We further recommend that the Secretary of Defense direct the Secretary of the Army to develop and submit to Congress a plan for the effective integration of the Army National Guard into its rotational force model and modular force initiatives.
This plan should include: (1) the specific equipment requirements, costs, timelines, and funding strategy for converting Army National Guard units to the modular force, and the extent to which Guard units will have equipment types and levels comparable to those of active modular units; (2) an analysis of the equipment the Army National Guard’s units will need for their missions in each phase of the rotation cycle; and (3) how the Army will manage implementation risks to modular forces if full funding is not provided on the expected timeline. The Assistant Secretary of Defense for Reserve Affairs provided written comments on a draft of this report. The department agreed with our recommendations and cited actions it is taking to implement them. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated as appropriate. DOD agreed with our recommendation to develop and submit to Congress a plan and funding strategy that addresses the equipment needs of the Army National Guard for the Global War on Terrorism, specifically addressing how the Army will transition from its short-term equipping measures to long-term equipping solutions. In its comments, DOD said that the Army needs to determine how Army National Guard forces will be equipped to meet state disaster response and potential homeland defense requirements as well as federal missions and to include these requirements in its resource priorities. DOD also said that the Army is working to implement stricter accountability over equipment currently left in theater and to comply with DOD guidelines, which require replacement plans for these items. DOD also agreed with our recommendation to develop and submit to Congress a plan that details the effective integration of the Army National Guard into the Army’s rotational force model and modular force initiatives. DOD said that the Army plans to develop resourcing alternatives to mitigate potential risks should full funding for transformation initiatives not be realized. DOD also agreed that readiness goals for the Army National Guard in the 6-year rotational model need to be established and that the Army’s equipping strategy for the Army National Guard must include the resources required for the Guard to be prepared to carry out both its federal and state missions. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretary of Defense; the Secretary of the Army; the Chief, National Guard Bureau; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4402. Major contributors to this report are listed in appendix III. To conduct our work for this engagement, we analyzed data, reviewed documentation, and interviewed officials from the Army National Guard, the National Guard Bureau, the Department of the Army, and the Office of the Assistant Secretary of Defense for Reserve Affairs. We supplemented this information with visits to the U.S. Army Forces Command, the Coalition Forces Land Component Command, and the First Army of the United States.
We also developed case studies of two units: the 30th Brigade Combat Team, located in North Carolina, and the 48th Brigade Combat Team, in Georgia. These states were chosen to provide representative examples of how Army National Guard units were prepared for deployment to Operation Iraqi Freedom in support of the Global War on Terrorism. The 30th Brigade Combat Team was one of the first National Guard units to deploy for Operation Iraqi Freedom and had just returned from deployment when we visited in March 2005. The 48th Brigade Combat Team was preparing for deployment to Operation Iraqi Freedom at the time of our visit in April 2005. In both states we met with unit logistics staff who had visibility over how the unit prepared for deployment. To examine the extent to which Army National Guard units have the equipment needed for the Global War on Terrorism, we obtained and analyzed data from the National Guard Bureau and our two case study states on critical shortages and on the types and quantities of equipment transferred from nondeployed units to deploying units. Additionally, we supplemented these data with interviews, briefings, and documentation from officials at the National Guard Bureau, the Department of the Army, the Office of the Assistant Secretary of Defense for Reserve Affairs, the U.S. Army Forces Command, the Coalition Forces Land Component Command, and the First Army of the United States. We did not examine whether shortages of particular items were the result of industrial base issues. To understand the processes the Army adapted to equip units as equipment requirements evolved for the Global War on Terrorism, we interviewed officials from, and analyzed data provided by, the 30th Brigade Combat Team in North Carolina, the 48th Brigade Combat Team in Georgia, the National Guard Bureau, the Department of the Army, the U.S. Army Forces Command, the Coalition Forces Land Component Command, and the First Army of the United States. To assess the Army National Guard equipment retained in theater, we analyzed Army National Guard data and the Guard’s estimate of the cost to replace the equipment if it is not returned. Additionally, we interviewed officials and reviewed documentation and data from the Army National Guard, the Department of the Army, the Office of the Assistant Secretary of Defense for Reserve Affairs, the U.S. Army Forces Command, and the Coalition Forces Land Component Command about the lack of reliable data and whether any plans exist to replace the Guard’s equipment. We supplemented data on how much of the Army National Guard’s equipment has been left in theater with briefings, and we reviewed internal Army messages regarding the accountability and visibility of this equipment. To evaluate how the Army National Guard has been integrated into the Army’s plans for a modular structure and force generation model, we interviewed officials at the Army National Guard, the Department of the Army, and the U.S. Army Forces Command. We reviewed documents such as the Army Campaign Plan, the Army Transformation Roadmap, the Army’s force generation model, and numerous briefings on the Army’s plans for a modular force and the new force generation model. Additionally, we interviewed Guard officials from both of our case study states about the units’ plans to convert to the modular force given Army time frames and cost estimates.
To assess the reliability of the data used during the course of this engagement, we interviewed officials responsible for the data about how they ensured its accuracy, and we reviewed their data collection methods, standard operating procedures, and other internal control measures. In addition, we reviewed the available data for inconsistencies and, when applicable, performed computer testing to assess data reliability. We determined that the data were sufficiently reliable to address each of our objectives. We conducted our review between December 2004 and August 2005 in accordance with generally accepted government auditing standards. In addition to the person named above, Margaret Morgan, Assistant Director; Frank Cristinzio; Alissa Czyz; Curtis Groves; Nicole Harms; Tina Morgan Kirschbaum; Kim Mayo; Kenneth Patton; Jay Smale; and Suzanne Wren also made major contributions to this report.
Recent military operations have required the Army to rely extensively on Army National Guard forces, which currently comprise over 30 percent of the ground forces in Iraq. Heavy deployments of Army National Guard forces and their equipment, much of which has been left overseas for follow-on forces, have raised questions about whether the Army National Guard has the types and quantities of equipment it will need to continue supporting ongoing operations and future missions. GAO was asked to assess the extent to which (1) the Army National Guard has the equipment needed to support ongoing operations and (2) the Army can account for Army National Guard equipment left overseas. GAO also assessed the Army's plans, cost estimates, and funding strategy for equipping Guard units under its modular and rotational force initiatives. While deploying Army National Guard units have had priority for getting the equipment they needed, readying these forces has degraded the equipment inventory of the Guard's nondeployed units and threatens the Guard's ability to prepare forces for future missions at home and overseas. Nondeployed Guard units now face significant equipment shortfalls because (1) they have been equipped at less than wartime levels on the assumption that they could obtain additional resources prior to deployment and (2) current operations have created an unanticipated high demand for certain items, such as armored vehicles. As of July 2005, the Army National Guard had transferred more than 101,000 pieces of equipment from its nondeployed units to fully equip its deploying units. As of May 2005, such transfers had depleted the Guard's inventory of more than 220 high-demand equipment items, such as night vision equipment, trucks, and radios. Further, as equipment requirements for overseas operations continue to evolve, the Army has been unable to identify and communicate what items deploying units need until close to their scheduled deployments, which challenges the Guard to transfer needed equipment quickly. To meet the demand for certain types of equipment for continuing operations, the Army has required Army National Guard units to leave behind many items for use by follow-on forces, but the Army can account for only about 45 percent of these items and has not developed a plan to replace them, as DOD policy requires. DOD has directed the Army to track the equipment Guard units left overseas and to develop replacement plans, but these plans have not yet been completed. The Army Guard estimates that since 2003 it has left more than 64,000 items, valued at more than $1.2 billion, overseas to support operations. Without a completed and implemented plan to replace all Guard equipment left overseas, Army Guard units will likely face growing equipment shortages and challenges in regaining readiness for future missions, and DOD and Congress will not have assurance that the Army has an effective strategy for addressing the Guard's equipping needs. Although Army National Guard units are scheduled to convert to new designs within the Army's modular force by 2008, they are not expected to be equipped for these designs until at least 2011. The Army has not developed detailed equipping plans that specify the Guard's equipment requirements to transform to a modular force while supporting ongoing operations.
As of June 2005, the Army estimated that it would cost about $15.6 billion to convert most of the Guard's units, but this estimate did not include all expected costs and the Army was unable to provide detailed information to support the estimate. In the short term, units nearing deployment will continue to receive priority for equipment, which may affect the availability of equipment needed for modular conversions. Until the Army fully identifies the Guard's equipment requirements and costs for both the near and long term, DOD and Congress will not be in a sound position to weigh the affordability and effectiveness of the Army's plans.
Since DHS began operations in March 2003, it has developed and implemented key policies, programs, and activities for carrying out its homeland security missions and functions, creating and strengthening a foundation for achieving its potential as it continues to mature. However, the department’s efforts have been hindered by challenges faced in leading and coordinating the homeland security enterprise; implementing and integrating its management functions for results; and strategically managing risk and assessing, and adjusting as necessary, its homeland security efforts. DHS has made progress in these three areas but needs to take additional action, moving forward, to help it achieve its full potential. DHS has made important progress in implementing and strengthening its mission functions over the past 8 years, including implementing key homeland security operations and achieving important goals and milestones in many areas. The department’s accomplishments include developing strategic and operational plans across its range of missions; hiring, deploying, and training workforces; establishing new, or expanding existing, offices and programs; and developing and issuing policies, procedures, and regulations to govern its homeland security operations. For example:

• DHS issued the QHSR, which provides a strategic framework for homeland security, and the National Response Framework, which outlines guiding principles for disaster response.

• DHS successfully hired, trained, and deployed workforces, such as a federal screening workforce that assumed security screening responsibilities at airports nationwide, and the department has about 20,000 agents to patrol U.S. land borders.

• DHS created new programs and offices, or expanded existing ones, to implement key homeland security responsibilities, such as establishing the United States Computer Emergency Readiness Team to, among other things, coordinate the nation’s efforts to prepare for, prevent, and respond to cyber threats to systems and communications networks. DHS also expanded programs for identifying and removing aliens subject to removal from the United States and for preventing unauthorized aliens from entering the country.

• DHS issued policies and procedures addressing, among other things, screening passengers at airport checkpoints, inspecting travelers seeking entry into the United States, and assessing immigration benefit applications and processes for detecting possible fraud.

These elements and others are important accomplishments and have been critical in positioning and equipping the department to fulfill its homeland security missions and functions. However, more work remains for DHS to address gaps and weaknesses in its current operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts to achieve its full potential. For example, we have reported that many DHS programs and investments have experienced cost overruns, schedule delays, and performance problems, including DHS’s recently cancelled technology program for securing U.S. borders, known as the Secure Border Initiative Network, and some technologies for screening passengers at airport checkpoints.
Further, with respect to the cargo advanced automated radiography system to detect certain nuclear materials in vehicles and containers at ports, DHS pursued the acquisition and deployment of the system without fully understanding that it would not fit within existing inspection lanes at ports of entry. DHS subsequently canceled the program. DHS also has not yet fully implemented its roles and responsibilities for developing and implementing key homeland security programs and initiatives. For example, DHS has not yet developed a set of target capabilities for disaster preparedness or established metrics for assessing those capabilities to provide a framework for evaluating preparedness, as required by the Post-Katrina Emergency Management Reform Act. Our work has shown that DHS should take additional action to improve the efficiency and effectiveness of a number of its programs and activities by, for example, improving program management and oversight, and better assessing homeland security requirements, needs, costs, and benefits, such as those for key acquisition and technology programs. Table 1 provides examples of key progress and work remaining in DHS’s functional mission areas, with an emphasis on work we completed since 2008. Impacting the department’s ability to efficiently and effectively satisfy its missions are: (1) the need to integrate and strengthen its management functions; (2) the need for increased utilization of performance assessments; (3) the need for an enhanced use of risk information to inform planning, programming, and investment decision-making; (4) limitations in effective sharing and use of terrorism-related information; (5) partnerships that are not sustained or fully leveraged; and (6) limitations in developing and deploying technologies to meet mission needs. DHS made progress in addressing these areas, but more work is needed, going forward, to further mitigate these challenges and their impact on DHS’s mission implementation. For instance, DHS strengthened its performance measures in recent years and linked its measures to the QHSR’s missions and goals. However, DHS and its components have not yet developed measures for assessing the effectiveness of key homeland security programs, such as programs for securing the border and preparing the nation for emergency incidents. For example, with regard to checkpoints that DHS operates on U.S. roads to screen vehicles for unauthorized aliens and contraband, DHS established three performance measures to report the results of checkpoint operations. However, the measures did not indicate whether checkpoints were operating efficiently and effectively, and data reporting and collection challenges hindered the use of results to inform Congress and the public on checkpoint performance. Moreover, DHS has not yet established performance measures to assess the effectiveness of its programs for investigating alien smuggling operations and foreign nationals who overstay their authorized periods of admission to the United States, making it difficult to determine progress made in these areas and to evaluate possible improvements. Further, DHS and its component agencies developed strategies and tools for conducting risk assessments. For example, DHS has conducted risk assessments of various surface transportation modes, such as freight rail, passenger rail, and pipelines. However, the department needs to strengthen its use of risk information to inform its planning and investment decision-making.
For example, DHS could better use risk information to plan and prioritize security measures and investments within and across its mission areas, as the department cannot secure the nation against every conceivable threat. In addition, DHS took action to develop and deploy new technologies to help meet its homeland security missions. However, in a number of instances DHS pursued acquisitions without ensuring that the technologies met defined requirements, conducting and documenting appropriate testing and evaluation, and performing cost-benefit analyses, resulting in important technology programs not meeting performance expectations. For example, in 2006, we recommended that DHS’s decision to deploy next-generation radiation-detection equipment, or advanced spectroscopic portals, used to detect smuggled nuclear or radiological materials, be based on an analysis of both the benefits and costs and a determination of whether any additional detection capability provided by the portals was worth their additional cost. DHS subsequently issued a cost-benefit analysis, but we reported that this analysis did not provide a sound analytical basis for DHS’s decision to deploy the portals. In June 2009, we also reported that an updated cost-benefit analysis might show that DHS’s plan to replace existing equipment with advanced spectroscopic portals was not justified, particularly given the marginal improvement in detection of certain nuclear materials required of advanced spectroscopic portals and the potential to improve the current-generation portal monitors’ sensitivity to nuclear materials, most likely at a lower cost. In July 2011, DHS announced that it would end the advanced spectroscopic portal project as originally conceived given the challenges the program faced. As we have previously reported, while it is important that DHS continue to work to strengthen each of its functional areas, it is equally important that these areas be addressed from a comprehensive, departmentwide perspective to help mitigate longstanding issues that have impacted the department’s progress. Our work at DHS has identified several key themes—leading and coordinating the homeland security enterprise, implementing and integrating management functions for results, and strategically managing risks and assessing homeland security efforts—that have impacted the department’s progress since it began operations. These themes provide insights that can inform DHS’s efforts, moving forward, as it works to implement its missions within a dynamic and evolving homeland security environment. DHS made progress and has had successes in all of these areas, but our work found that these themes have been at the foundation of DHS’s implementation challenges, and need to be addressed from a departmentwide perspective to position DHS for the future and enable it to satisfy the expectations set for it by the Congress, the administration, and the country. Leading and coordinating the homeland security enterprise. While DHS is one of a number of entities with a role in securing the homeland, it has significant leadership and coordination responsibilities for managing efforts across the homeland security enterprise. To satisfy these responsibilities, it is critically important that DHS develop, maintain, and leverage effective partnerships with its stakeholders, while at the same time addressing DHS-specific responsibilities in satisfying its missions.
Before DHS began operations, we reported that the quality and continuity of the new department’s leadership would be critical to building and sustaining the long-term effectiveness of DHS and achieving homeland security goals and objectives. We further reported that to secure the nation, DHS must form effective and sustained partnerships between components and also with a range of other entities, including federal agencies, state and local governments, the private and nonprofit sectors, and international partners. DHS has made important strides in providing leadership and coordinating efforts. For example, it has improved coordination and clarified roles with state and local governments for emergency management. DHS also strengthened its partnerships and collaboration with foreign governments to coordinate and standardize security practices for aviation security. However, DHS needs to take additional action to forge effective partnerships and strengthen the sharing and utilization of information, which has affected its ability to effectively satisfy its missions. For example, we reported that the expectations of private sector stakeholders have not been met by DHS and its federal partners in areas related to sharing information about cyber-based threats to critical infrastructure. Without improvements in meeting private and public sector expectations for sharing cyber threat information, private-public partnerships will remain less than optimal, and there is a risk that owners of critical infrastructure will not have the information and mechanisms needed to thwart sophisticated cyber attacks that could have catastrophic effects on our nation’s cyber-reliant critical infrastructure. Moreover, we reported that DHS needs to continue to streamline its mechanisms for sharing information with public transit agencies to reduce the volume of similar information these agencies receive from DHS, making it easier for them to discern relevant information and take appropriate actions to enhance security. In 2005, we designated information sharing for homeland security as high risk because the federal government faced serious challenges in analyzing information and sharing it among partners in a timely, accurate, and useful way. Gaps in sharing, such as agencies’ failure to link information about the individual who attempted to conduct the December 25, 2009, airline bombing, prevented the individual from being included on the federal government’s consolidated terrorist watchlist, a tool used by DHS to screen for persons who pose a security risk. The federal government and DHS have made progress, but more work remains for DHS to streamline its information sharing mechanisms and better meet partners’ needs. Moving forward, it will be important that DHS continue to enhance its focus and efforts to strengthen and leverage the broader homeland security enterprise, and build off the important progress that it has made thus far. In addressing ever-changing and complex threats, and with the vast array of partners with which DHS must coordinate, continued leadership and stewardship will be critical in achieving this end. Implementing and integrating management functions for results. Following its establishment, the department focused its efforts primarily on implementing its various missions to meet pressing homeland security needs and threats, and less on creating and integrating a fully and effectively functioning department from 22 disparate agencies. 
This initial focus on mission implementation was understandable given the critical homeland security needs facing the nation after the department’s establishment, and the enormous challenge posed by creating, integrating, and transforming a department as large and complex as DHS. As the department has matured, it has put in place management policies and processes and made a range of other enhancements to its management functions—acquisition, information technology, financial, and human capital management. However, DHS has not always effectively executed or integrated these functions. In 2003, we designated the transformation and integration of DHS as high risk because DHS had to transform 22 agencies into one department, and failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. Eight years later, DHS remains on our high-risk list. DHS has demonstrated strong leadership commitment to addressing its management challenges and has begun to implement a strategy to do so. Further, DHS developed various management policies, directives, and governance structures, such as acquisition and information technology management policies and controls, to provide enhanced guidance on investment decision making. DHS also reduced its financial management material weaknesses in internal control over financial reporting and developed strategies to strengthen human capital management, such as its Workforce Strategy for Fiscal Years 2011-2016. However, DHS needs to continue to demonstrate sustainable progress in addressing its challenges, as these issues have contributed to schedule delays, cost increases, and performance problems in major programs aimed at delivering important mission capabilities. For example, in September 2010, we reported that the Science and Technology Directorate’s master plans for conducting operational testing of container security technologies did not reflect all of the operational scenarios that U.S. Customs and Border Protection was considering for implementation. In addition, when it developed the US-VISIT program, DHS did not sufficiently define what capabilities and benefits would be delivered, by when, and at what cost, and the department has not yet determined how to deploy a biometric exit capability under the program. Moreover, DHS does not yet have enough skilled personnel to carry out activities in various areas, such as acquisition management, and has not yet implemented an integrated financial management system, impacting its ability to have ready access to reliable, useful, and timely information for informed decision making. Moving forward, addressing these management challenges will be critical for DHS’s success, as will be the integration of these functions across the department to achieve efficiencies and effectiveness. Strategically managing risks and assessing homeland security efforts. Forming a new department while working to implement statutorily mandated and department-initiated programs and responding to evolving threats was, and is, a significant challenge facing DHS. Key threats, such as attempted attacks against the aviation sector, have impacted and altered DHS’s approaches and investments, such as changes DHS made to its processes and technology investments for screening passengers and baggage at airports. It is understandable that these threats had to be addressed immediately as they arose.
However, limited strategic and program planning by DHS and limited assessment to inform approaches and investment decisions have contributed to programs not meeting strategic needs or not doing so in an efficient manner. For example, as we reported in July 2011, the Coast Guard’s planned acquisitions through its Deepwater Program, which began before DHS’s creation and includes efforts to build or modernize ships and aircraft and supporting capabilities that are critical to meeting the Coast Guard’s core missions in the future, are unachievable due to cost growth, schedule delays, and affordability issues. In addition, because FEMA has not yet developed a set of target disaster preparedness capabilities and a systematic means of assessing those capabilities, as required by the Post-Katrina Emergency Management Reform Act and Presidential Policy Directive 8, it cannot effectively evaluate and identify key capability gaps and target limited resources to fill those gaps. Further, DHS has made important progress in analyzing risk across sectors, but it has more work to do in using this information to inform planning and resource allocation decisions. Risk management has been widely supported by Congress and DHS as a management approach for homeland security, enhancing the department’s ability to make informed decisions and prioritize resource investments. Since DHS does not have unlimited resources and cannot protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. Moreover, we have reported on the need for enhanced performance assessment, that is, evaluating existing programs and operations to determine whether they are operating as intended or are in need of change, across DHS’s missions. Information on the performance of programs is critical for helping the department, Congress, and other stakeholders more systematically assess strengths and weaknesses and inform decision making. In recent years, DHS has placed an increased emphasis on strengthening its mechanisms for assessing the performance and effectiveness of its homeland security programs. For example, DHS established new performance measures, and modified existing ones, to better assess many of its programs and efforts. However, our work has found that DHS continues to miss opportunities to optimize performance across its missions because of a lack of reliable performance information or assessment of existing information; evaluation among feasible alternatives; and, as appropriate, adjustment of programs or operations that are not meeting mission needs. For example, DHS’s program for research, development, and deployment of passenger checkpoint screening technologies lacked a risk-based plan and performance measures to assess the extent to which checkpoint screening technologies were achieving the program’s security goals, and thereby reducing or mitigating the risk of terrorist attacks. As a result, DHS had limited assurance that its strategy targeted the most critical risks and that it was investing in the most cost-effective new technologies or other protective measures. As the department further matures and seeks to optimize its operations, DHS will need to look beyond immediate requirements; assess programs’ sustainability across the long term, particularly in light of constrained budgets; and evaluate tradeoffs within and among programs across the homeland security enterprise.
Doing so should better equip DHS to adapt and respond to new threats in a sustainable manner as it works to address existing ones. Given DHS’s role and leadership responsibilities in securing the homeland, it is critical that the department’s programs and activities are operating as efficiently and effectively as possible, are sustainable, and continue to mature, evolve and adapt to address pressing security needs. DHS has made significant progress throughout its missions since its creation, but more work is needed to further transform the department into a more integrated and effective organization. DHS has also made important progress in strengthening partnerships with stakeholders, improving its management processes and sharing of information, and enhancing its risk management and performance measurement efforts. These accomplishments are especially noteworthy given that the department has had to work to transform itself into a fully functioning cabinet department while implementing its missions—a difficult undertaking for any organization and one that can take years to achieve even under less daunting circumstances. Impacting the department’s efforts have been a variety of factors and events, such as attempted terrorist attacks and natural disasters, as well as new responsibilities and authorities provided by Congress and the administration. These events collectively have forced DHS to continually reassess its priorities and reallocate resources as needed, and have impacted its continued integration and transformation. Given the nature of DHS’s mission, the need to remain nimble and adaptable to respond to evolving threats, as well as to work to anticipate new ones, will not change and may become even more complex and challenging as domestic and world events unfold, particularly in light of reduced budgets and constrained resources. To better position itself to address these challenges, our work has shown that DHS should place an increased emphasis and take additional action in supporting and leveraging the homeland security enterprise, managing its operations to achieve needed results, and strategically planning for the future while assessing and adjusting, as needed, what exists today. Addressing these issues will be critically important for the department to strengthen its homeland security programs and operations. Eight years after its establishment and 10 years after the September 11, 2001, terrorist attacks, DHS has indeed made significant strides in protecting the nation, but has yet to reach its full potential. Chairman Lieberman, Ranking Member Collins, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Rebecca Gambler, Assistant Director; Melissa Bogar; Susan Czachor; Sarah Kaczmarek; Tracey King; Taylor Matheson; Jessica Orr; and Meghan Squires. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The terrorist attacks of September 11, 2001, led to profound changes in government agendas, policies, and structures to confront homeland security threats facing the nation. Most notably, the Department of Homeland Security (DHS) began operations in 2003 with key missions that included preventing terrorist attacks from occurring in the United States, reducing the country's vulnerability to terrorism, and minimizing the damage from any attacks that may occur. DHS is now the third-largest federal department, with more than 200,000 employees and an annual budget of more than $50 billion. Since 2003, GAO has issued over 1,000 products on DHS's operations in such areas as border and transportation security and emergency management, among others. As requested, this testimony addresses DHS's progress and challenges in implementing its homeland security missions since it began operations, and issues affecting implementation efforts. This testimony is based on a report GAO is issuing today, which assesses DHS's progress in implementing its homeland security functions and work remaining. Since it began operations in 2003, DHS has implemented key homeland security operations and achieved important goals and milestones in many areas to create and strengthen a foundation to reach its potential. As it continues to mature, however, more work remains for DHS to address gaps and weaknesses in its current operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts to achieve its full potential. DHS's accomplishments include developing strategic and operational plans; deploying workforces; and establishing new, or expanding existing, offices and programs. For example, DHS (1) issued plans to guide its efforts, such as the Quadrennial Homeland Security Review, which provides a framework for homeland security, and the National Response Framework, which outlines disaster response guiding principles; (2) successfully hired, trained, and deployed workforces, such as a federal screening workforce to assume security screening responsibilities at airports nationwide; and (3) created new programs and offices to implement its homeland security responsibilities, such as establishing the U.S. Computer Emergency Readiness Team to help coordinate efforts to address cybersecurity threats. Such accomplishments are noteworthy given that DHS has had to work to transform itself into a fully functioning department while implementing its missions—a difficult undertaking that can take years to achieve. While DHS has made progress, its transformation remains high risk due to its management challenges. Examples of progress made and work remaining include:

• Border security. DHS implemented the U.S. Visitor and Immigrant Status Indicator Technology program to verify the identities of foreign visitors entering and exiting the country by processing biometric and biographic information. However, DHS has not yet determined how to implement a biometric exit capability and has taken action to address only a small portion of the estimated overstay population in the United States (individuals who legally entered the country but then overstayed their authorized periods of admission).

• Aviation security. DHS developed and implemented Secure Flight, a program for screening airline passengers against terrorist watchlist records. DHS also developed new programs and technologies to screen passengers, checked baggage, and air cargo. However, DHS does not yet have a plan for deploying checked baggage screening technologies to meet recently enhanced explosive detection requirements, a mechanism to verify the accuracy of data to help ensure that air cargo screening is being conducted at reported levels, or approved technology to screen cargo once it is loaded onto a pallet or container.

• Emergency preparedness and response. DHS issued the National Preparedness Guidelines, which describe a national framework for capabilities-based preparedness, and a Target Capabilities List to provide a national-level generic model of capabilities defining all-hazards preparedness. DHS is also finalizing a National Disaster Recovery Framework. However, DHS needs to strengthen its efforts to assess capabilities for all-hazards preparedness and develop a long-term recovery structure to better align timing and involvement with state and local governments' capacity.

• Chemical, biological, radiological, and nuclear (CBRN) threats. DHS assessed risks posed by CBRN threats and deployed capabilities to detect CBRN threats. However, DHS should work to improve its coordination of CBRN risk assessments and identify monitoring mechanisms for determining progress made in implementing the global nuclear detection strategy.

GAO's work identified three themes at the foundation of DHS's challenges: leading and coordinating the homeland security enterprise; implementing and integrating management functions for results; and strategically managing risks and assessing homeland security efforts. This testimony contains no new recommendations.
The federal government makes loans to students through private- and public-sector lenders in the FFELP or directly to students through FDLP. These two programs are among the largest of the federal government’s credit programs. At the end of 2004, there were about $245 billion in outstanding FFELP loans, about 20 percent of total federal guaranteed loans outstanding, and $107 billion in outstanding FDLP loans, about 43 percent of total federal direct loans outstanding. Students and parents are able to borrow the same types of loans through FFELP and FDLP, which include the following:

• Subsidized and Unsubsidized Stafford Loans—variable rate loans available to students. The federal government pays the interest on behalf of subsidized loan borrowers while the student is in school and during a brief grace period when the student first leaves school.

• PLUS Loans—variable rate loans made to parents on behalf of students. The borrower pays all interest costs.

• Consolidation Loans—loans that allow borrowers to combine multiple federal student loans into a single loan. The interest rate is fixed, based on the weighted average of the interest rates in effect on the loans being consolidated.

Under either loan program, borrowers are able to repay loans earlier than required, with no penalty. The programs have several repayment options available to borrowers. For Stafford and PLUS loans, the standard repayment in both loan programs is a fixed amount per month for up to 10 years. Borrowers have other repayment options that allow them to extend repayment for up to 30 years, gradually increase the monthly payment, or base monthly payments on their adjusted gross income. The criteria for some of the alternative repayment options differ between FFELP and FDLP. For consolidation loans, the repayment terms depend on the loan amount. Moreover, borrowers who graduate, leave school, or drop below half-time enrollment are given a 6-month grace period before they must begin to repay their Stafford or consolidation loans. All borrowers may postpone repayment through deferment or forbearance if they meet certain criteria and the loan is not in default. Deferment is allowed for borrowers who remain enrolled at least half-time in a postsecondary school or a graduate program, or who have experienced economic hardship. For borrowers who are temporarily unable to meet repayment obligations but are not eligible for deferment, lenders may grant a temporary and limited period in which these borrowers do not need to repay their student loans, called forbearance. The FCRA guidance issued by OMB and accounting standards provide the framework for the process Education uses to calculate subsidy costs for student loans. Subsidy costs are calculated by estimating the federal government’s future cash flows for loans made or guaranteed in a particular fiscal year, called a loan cohort. In estimating cash flows for a loan cohort, Education must make assumptions about loan characteristics and future borrower behavior, such as:

• the type and dollar amount of loans obligated or guaranteed, and

• how many borrowers will pay early, pay late, or default on their loans, and at what point in time.

Moreover, the model used to estimate future cash flows includes assumptions about future interest rates. OMB provides Education with interest rate assumptions that are used for the discount rate, borrower interest rate, and lender yields. Education aggregates cash flows by loan cohort, loan type, and risk category, which reflects differences in the likelihood of default. A simplified sketch of this present-value calculation appears below.
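The following is a minimal sketch of the cohort-level arithmetic, assuming hypothetical cash flows and a hypothetical 5 percent discount rate; Education’s actual model is far more detailed, separating flows by loan type and risk category and drawing its interest rate assumptions from OMB.

```python
def npv(cash_flows, discount_rate):
    """Net present value of annual cash flows, where cash_flows[t] is
    the government's net flow in year t (outflows negative, inflows
    positive) and t = 0 is the year of disbursement."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical direct-loan cohort: $100 disbursed in year 0, followed
# by principal and interest repayments that have been reduced to
# reflect assumed defaults, prepayments, and deferments.
cohort_cash_flows = [-100.0, 0.0, 15.0, 25.0, 25.0, 25.0, 20.0]

# By the sign convention used here, a positive subsidy cost is a net
# cost to the government and a negative subsidy cost is a net gain.
subsidy_cost = -npv(cohort_cash_flows, discount_rate=0.05)
print(f"Estimated subsidy cost per $100 disbursed: ${subsidy_cost:.2f}")
```

A negative result in this sketch would correspond to the projected net gains discussed below for some loan cohorts.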
Education has five risk categories, which include, in order from highest to lowest risk of default: (1) students at proprietary schools, (2) students at 2-year colleges, (3) freshmen and sophomores at 4-year colleges, (4) juniors and seniors at 4-year colleges, and (5) students at graduate schools. Although the method for calculating the subsidy cost is the same for both FFELP and FDLP, the federal government’s role in each loan program differs significantly, which, in turn, affects the type and timing of cash flows in each program. In FFELP, private lenders, such as banks, fund the loans, and the federal government guarantees lenders a statutorily specified minimum yield that is tied to, and varies with, market financial instruments. When the interest rate paid by borrowers is below that yield, the federal government gives lenders subsidy payments, called SAP. Moreover, the federal government, through state-designated guaranty agencies, guarantees repayment of loans if borrowers default. Guaranty agencies provide insurance to lenders for 98 percent of the unpaid principal of defaulted loans. The federal government, in turn, pays guaranty agencies 95 percent of their default claims. Guaranty agencies also perform various administrative functions in the FFELP. As shown in figure 1, under FFELP, cash inflows to the federal government include fees and other payments from lenders, and outflows from the federal government include SAP and default payments. FFELP cash flows are spread out over the life of the loan. Under FDLP, the U.S. Treasury funds the loans, which are originated through participating schools and contractors. Education’s Office of Federal Student Aid is responsible for delivering funds to schools participating in FDLP, monitoring its contracts, and providing technical assistance to schools. Education contracts with private-sector companies to perform various administrative activities in FDLP, such as originating and servicing loans, and collecting defaulted loans. As shown in figure 2, FDLP cash inflows to the federal government are repayments of principal and interest payments, and outflows include loan disbursements to borrowers. Because the federal government funds the loans, cash outflows occur in the early years as loan disbursements are made. Cash inflows, in the form of principal repayments and interest payments, occur in later years as borrowers enter repayment. Principal repayments may be less than disbursements, reflecting defaults, loan discharges, and loan forgiveness. Annually, agencies are generally required to update or “reestimate” loan costs to account for differences in estimated loan performance, such as differences between assumed and actual default rates, the actual program costs recorded in the accounting records, and new forecasts of future economic conditions, such as interest rates. Reestimates include all aspects of the original cost estimate, including prepayments, defaults, delinquencies, recoveries, and interest. Reestimates of the credit subsidy allow agency management to compare the original budget estimates with actual program results to identify variances from the original estimate, assess the quality of the original estimate, and adjust future program estimates as appropriate. Both FFELP and FDLP reestimated subsidy costs have differed from original estimates for loans made in fiscal years 1994 through 2004, highlighting the challenges in estimating the costs of federal student loans.
FFELP reestimated subsidy costs were similar to or lower than original estimates for loans made in fiscal years 1994 to 2002, but higher than originally estimated for loans made in fiscal years 2003 and 2004. In comparison, FDLP reestimated subsidy costs were generally similar to or higher than original estimates for loans made in fiscal years 1994 through 2004. Across all types of loans, FDLP subsidy costs per $100 of loans disbursed were, for almost all loan cohorts, lower than those of FFELP. Reestimated subsidy costs for FFELP loans disbursed between fiscal years 1994 and 2002 were, in general, close to or lower than original estimates, while reestimated subsidy costs for loans disbursed in 2003 and 2004 were higher than originally expected, as shown in figure 3. From fiscal years 1994 to 1999, reestimated subsidy costs for FFELP were typically close to original estimates, while loans disbursed from fiscal year 2000 to fiscal year 2002 had reestimated subsidy costs that were lower than original estimates, ranging from $1.5 to $2.2 billion lower. Reestimated subsidy costs for loans disbursed in fiscal years 2003 and 2004 were $2.7 billion and $3.6 billion higher than original estimates, respectively. Differences between reestimated and original subsidy cost estimates for the 2003 and 2004 loan cohorts were in part due to significant differences between expected and actual loan volume. For example, Education originally estimated that about $40 billion in FFELP loans would be disbursed in 2003, but $69 billion was actually disbursed that year. The large difference was primarily due to a significantly higher volume of FFELP consolidation loans than originally estimated and the relatively high subsidy costs per $100 of these loans compared to consolidation loans made in previous years. After controlling for loan volume, FFELP reestimated subsidy costs per $100 disbursed were generally close to or lower than original subsidy cost estimates across loan types. As shown in table 1, for FFELP Stafford unsubsidized and PLUS loans, reestimated subsidy costs per $100 disbursed were lower than originally estimated for all loan cohorts except fiscal year 1999. For subsidized Stafford loans, about two-thirds of the loan cohorts had lower reestimated subsidy costs per $100 disbursed. Slightly over half of all consolidation loan cohorts had lower reestimated subsidy costs per $100 disbursed than originally estimated. Reestimated subsidy costs for FDLP loans were in general similar to or higher than original estimates for loans disbursed between fiscal years 1994 and 2004. For FDLP loans disbursed between fiscal years 1994 and 1999, total reestimated subsidy costs were in general close to original estimates, but one loan cohort had higher reestimated subsidy costs and another had much lower reestimated subsidy costs than originally expected, as shown in figure 4. In comparison, reestimated subsidy costs for FDLP loans disbursed between fiscal years 2000 and 2004 were higher than original estimates. In some cases, original estimates projected a net gain for the government, but subsequent reestimates project a smaller gain or even a net cost for the government. For example, original subsidy cost estimates for the fiscal year 2000 loan cohort projected a net gain of $930 million for the government, while reestimated subsidy costs project a net cost of $1.1 billion. Such swings in estimated subsidy costs illustrate that originally anticipated federal revenues may not, in fact, ultimately materialize.
Differences between total reestimated and original subsidy cost estimates were not driven by differences between original and actual loan volume, but rather by changes in the subsidy rates—that is, subsidy costs per $100 disbursed. FDLP reestimated subsidy costs per $100 disbursed were usually close to or higher than original subsidy cost estimates across loan types. For example, as shown in table 2, reestimated subsidy costs per $100 disbursed for FDLP Stafford unsubsidized and PLUS loans were, for almost all loan cohorts, higher than original estimates. For Stafford subsidized and consolidation loans, slightly over half of the loan cohorts had reestimated subsidy costs that were higher than originally estimated. For most Stafford unsubsidized and PLUS loan cohorts, and slightly over half of consolidation loan cohorts, reestimated subsidy costs per $100 disbursed were higher than the original estimate, but still project a net gain for the federal government. For example, Stafford unsubsidized loans disbursed in fiscal year 1998 were originally estimated to have a net gain of $6.93 for every $100 in loans disbursed. Reestimated subsidy costs show that the projected net gain for these same loans is estimated to be $5.13 per $100 disbursed. Some loan cohorts that were originally projected to produce a net gain for the federal government now have reestimated subsidy costs showing a net cost to the government. For example, PLUS loans disbursed in fiscal year 2000 that were originally projected to have a net gain of $13.41 per $100 disbursed were subsequently reestimated to have a net cost of $2.21 per $100 disbursed. For all loans disbursed between fiscal years 1994 and 2004, FDLP reestimated subsidy costs were lower than FFELP reestimated subsidy costs in aggregate and after controlling for loan volume. Reestimated total subsidy costs for FDLP loans were $2.5 billion compared to $36.6 billion for FFELP loans, as shown in table 3 below. After controlling for loan volume and comparing reestimated subsidy costs across the four types of loans—Stafford subsidized and unsubsidized, PLUS, and consolidation—FDLP reestimated subsidy costs per $100 disbursed were in general lower than FFELP reestimated subsidy costs per $100 disbursed. (See app. I for comparisons of reestimated subsidy costs of FDLP and FFELP loans, by loan type.) The difference between the reestimated subsidy cost for FDLP and FFELP varied significantly and depended on the type of loan and the year that the loan was disbursed. For example, reestimated subsidy costs per $100 disbursed for FDLP subsidized Stafford loans disbursed in fiscal year 2003 were $11.66 lower than for FFELP subsidized Stafford loans, while the difference for the same loans disbursed in 2000 was $1.35 per $100 disbursed. The difference in subsidy cost estimates between FFELP and FDLP stemmed primarily from differences in the structure of the programs rather than from the characteristics of the borrowers. According to Education officials, estimates of long-term costs associated with subsidizing borrowers’ interest; canceling repayment of loans due to death, disability, and bankruptcy; and defaulted loans are roughly equivalent in both programs. However, under FFELP there are larger cash outflows in the form of SAP to lenders than cash inflows of lender fees, while in FDLP there are large cash inflow projections, net of interest payments to Treasury, in the form of borrower interest payments and no SAP or guaranty fees.
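The per-$100 normalization used throughout these comparisons, which removes the effect of differences in loan volume, can be written as follows (the notation is ours, introduced for illustration):

```latex
\text{subsidy rate} \;=\; 100 \times
\frac{\mathrm{NPV}(\text{estimated cash outflows}) - \mathrm{NPV}(\text{estimated cash inflows})}
     {\text{total loan dollars disbursed}}
```

A positive rate indicates a projected net cost to the government and a negative rate a projected net gain; a reestimate revises the cash flow assumptions in the numerator while the cohort’s actual disbursements remain in the denominator.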
Differences between original and reestimated subsidy cost estimates per $100 disbursed can be explained, in part, by lower than expected market interest rates, greater than anticipated loan consolidation, and more data on student loans being incorporated into the cash flow model. Differences between actual and expected interest rates and rates of consolidation affected reestimated subsidy costs for each loan program in a different way. For example, lower than expected interest rates over the last several years have resulted in lower reestimated subsidy costs for FFELP and higher reestimated subsidy costs for FDLP. Larger than expected volumes of consolidation loans, which stemmed in part from low interest rates, contributed to lower FFELP reestimated subsidy costs for the underlying loan cohorts and higher FDLP reestimated subsidy costs for the underlying loan cohorts. Furthermore, the availability of additional data for both FFELP and FDLP loans has enabled Education to refine its cash flow model, which has also contributed to differences between reestimated and original subsidy costs. Interest rates fell to lower than expected levels in 2001 and persisted at those levels through 2004, which affected subsidy cost estimates in both FFELP and FDLP because estimates, especially for the FDLP, are highly sensitive to changes between projected and actual interest rates. Cost estimates for the loan programs are sensitive to such changes because borrower interest rates in both FFELP and FDLP, and the lender yield in the FFELP, are variable rates. As a result, differences between projected and actual interest rates can have a significant impact on estimates of cash flows in both loan programs. OMB’s interest rate projections made prior to 2001, as well as those by other government agencies and the private sector, were considerably higher than actual interest rates for 2001 and beyond. For example, as shown in table 4, actual interest rates from 2001 to 2003 were substantially lower than OMB’s forecasts of interest rates used in the budget for fiscal year 1999 and fluctuated slightly from year to year. To the degree that such fluctuations were unanticipated, they contributed to volatility in subsidy cost reestimates from year to year. For FFELP, lower than expected interest rates have resulted in lower than expected SAP to lenders, which, in turn, resulted in lower reestimated subsidy costs. As interest rates decreased, the difference, or spread, between the 3-month commercial paper (CP) rate and the 91-day Treasury bill rate narrowed. For example, as can be seen in figure 5, the average rates on the 91-day T-bill and the 3-month CP were 5.82 percent and 6.33 percent, respectively, in 2000, a difference of 0.51 percentage points. In 2004, however, the difference between the two rates was 0.15 percentage points. The spread between commercial paper and Treasury bill rates serves as the primary basis for SAP payments to lenders, and, as the spread narrowed, Education paid lower SAP, thus lowering reestimated subsidy costs. The climate of declining interest rates not only narrowed the spread between the T-bill rate and the CP rate and reduced SAP payments, it also eliminated SAP payments for some loans because interest rates paid by borrowers were higher than the guaranteed lender yield. Whether SAP is paid on a loan can change during a year because borrower interest rates are adjusted annually, based on the final auction of T-bills before June 1 of each year, while lender yields are adjusted each quarter.
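A stylized sketch of these SAP mechanics follows, assuming hypothetical rates and a hypothetical fixed spread over the CP rate; the actual statutory lender-yield formulas vary by loan type and origination date and are not reproduced here.

```python
def quarterly_sap_rate(cp_rate, borrower_rate, lender_spread):
    """Special allowance payment (SAP) rate for one quarter.

    The guaranteed lender yield is modeled as the 3-month commercial
    paper (CP) rate plus a fixed spread; SAP makes up the difference
    when that yield exceeds the rate the borrower is paying. When the
    borrower rate is at or above the guaranteed yield, no SAP is owed.
    """
    lender_yield = cp_rate + lender_spread
    return max(0.0, lender_yield - borrower_rate)

# The borrower rate is reset once a year, while the CP-based lender
# yield moves quarterly. In a declining-rate environment the CP rate
# can fall far enough that the guaranteed yield drops below the
# borrower rate, eliminating SAP for that quarter (rates in percent,
# all values hypothetical).
borrower_rate = 4.0                      # set each June, fixed for the year
for cp_rate in (3.5, 2.9, 2.2, 1.8):     # four quarterly CP observations
    sap = quarterly_sap_rate(cp_rate, borrower_rate, lender_spread=1.7)
    print(f"CP {cp_rate:.1f}% -> SAP {sap:.2f} percentage points")
```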
Thus, in a climate of declining interest rates, SAP on certain loans was eliminated because the 3-month CP rate—on which the lender yield is based—fell, for a particular quarter, below the annually adjusted borrower rate. SAP was zero in 50 percent of the quarters for Stafford loans issued from January 1, 2000, through July 1, 2005. This is illustrated in figure 6, where one can also see that the more recent climate of rising interest rates could lead to increased SAP. In contrast, lower than expected interest rates contributed to higher reestimated FDLP subsidy costs. Under FDLP, the government had originally anticipated larger interest payments from borrowers as they repaid their loans because original subsidy cost estimates were based on forecasts that did not anticipate the significant decline in interest rates. Lower than expected interest rates thus resulted in lower than expected cash inflows to the government and higher FDLP subsidy cost reestimates. For example, using the numbers in table 4, one can see that original subsidy cost estimates made for the 1999 loan cohort assumed that interest rates on the 91-day Treasury bill would be four times as high as they actually were when some students would be entering repayment on loans they obtained in 1999. Moreover, original estimates were based on the assumption that the interest rate paid by borrowers on those loans would be higher than the interest rate Education pays to Treasury for borrowing the funds to make the loans. As can be seen in figure 7, the borrower interest rate fell below the discount rate (the rate paid to Treasury) in 2001. Again, such a climate of lower than anticipated interest rates led to higher reestimates of subsidy costs. As interest rates rise, the interest paid by borrowers will increase—possibly to rates higher than the discount rate. Lower than expected interest rates also affected the actual rate used to discount cash flows for FFELP and FDLP subsidy cost estimates. When subsidy cost estimates are first prepared for the budget, agencies use an estimated discount rate. Education sets the actual discount rate when a loan cohort is fully disbursed. Because subsidy cost estimates are prepared before loans are disbursed, it is expected that differences between the estimated and actual discount rates will contribute to differences between reestimated and original subsidy cost estimates. For example, the actual discount rate for loans disbursed in fiscal year 2002 was lower than originally estimated, which lowered reestimated subsidy costs slightly in both FFELP and FDLP. Higher than expected consolidation volume, which stemmed in part from low interest rates, also affected reestimated subsidy costs. As we have previously reported, the number of borrowers consolidating their loans has increased substantially over the last several years. Consolidation activity has been higher than expected in both loan programs since fiscal year 1999. When borrowers consolidated their student loans and locked in recent low interest rates, they effectively paid off the underlying loans—Stafford subsidized and unsubsidized and PLUS—ahead of schedule and started a new consolidation loan. With the new consolidation loans, borrowers began new repayment periods that could extend up to 30 years from when the consolidation loans were made. Because Education calculates subsidy costs for consolidation loans separately, it must adjust original estimates of the underlying loans to reflect unanticipated prepayments.
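As described earlier, the fixed rate on a consolidation loan is based on the weighted average of the rates on the underlying loans. The sketch below computes that average using hypothetical balances and rates; statutory details such as rounding of the result are omitted.

```python
def consolidation_rate(loans):
    """Fixed rate on a consolidation loan: the average of the
    underlying loans' interest rates, weighted by outstanding balance.
    Statutory rounding of the result is omitted in this sketch.
    """
    total_balance = sum(balance for balance, _ in loans)
    return sum(balance * rate for balance, rate in loans) / total_balance

# Hypothetical underlying loans: (outstanding balance, interest rate %).
loans = [(10_000, 3.4), (5_000, 4.3), (2_500, 5.0)]
print(f"Consolidation rate: {consolidation_rate(loans):.2f}%")  # 3.89%
```

Because the borrower locks in this fixed rate while the guaranteed lender yield continues to float, a consolidation made when rates are low can commit the government to SAP for up to 30 years if market rates later rise, which is the dynamic behind the high subsidy rates of the 2003 and 2004 consolidation cohorts discussed below.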
Education considers the consolidation a new loan in the year that the loan was disbursed. Figures 8 and 9 provide a simplified example of consolidation from both the borrower’s and Education’s perspectives. Consolidation activity has been particularly high for FFELP loans, increasing from about $7 billion in fiscal year 2000 to $37 billion in fiscal year 2004. Education had not anticipated such an increase in consolidation loans, which contributed to lower reestimated subsidy costs for the underlying loan cohorts. Under FFELP, consolidation loans shortened the length of time Education anticipated paying SAP to lenders and eliminated default risk on the underlying loans, thus lowering reestimated subsidy costs. Estimated subsidy costs for recent consolidation cohorts, which reflect costs associated with default risk and SAP to lenders, are quite large in comparison with previous consolidation loan cohorts. For example, reestimated subsidy costs per $100 disbursed were $11.21 for consolidation loans made in 2003 and $15.98 for those made in 2004, compared with $3.11 for consolidation loans made in 2002. The increase occurred in part because borrowers locked in low fixed interest rates on their consolidation loans while the minimum yield guaranteed to lenders is projected to be much higher than the fixed interest rate paid by borrowers, thus requiring the government to pay higher SAP than it would have on the 2002 loans. Consolidation activity in FDLP also increased—from $5 billion in fiscal year 2000 to $8 billion in fiscal year 2004. As borrowers consolidated their loans, they repaid the underlying loans, which shortened the length of time Education had expected to receive interest payments on those loans. According to Education, it had calculated that the interest payments from borrowers would contribute positively to Education’s cash flows because the interest rates borrowers were expected to pay to Education were higher than the rate Education paid to borrow the funds. However, greater than expected prepayment due to consolidation decreased the anticipated interest payments on the underlying loans, which in turn contributed to higher reestimated subsidy costs for the underlying loan cohorts. Moreover, as we reported in August 2004, large amounts of FDLP loans—about $7.5 billion between 1998 and 2002—were consolidated into FFELP. As a result, Education will not receive any of the future projected interest payments on those loans that are now FFELP loans, which also contributed to higher reestimated FDLP subsidy costs. Additionally, for the FDLP loans consolidated into FFELP, the government may need to pay SAP that it otherwise would not have had to pay. More data for both FFELP and FDLP loans have become available, allowing Education to make refinements to its cash flow model, partly as a result of changes Education made to address recommendations from our prior reports and from Education’s auditors. The addition of data about borrower behavior to the cash flow model has also contributed to the differences between reestimated and original subsidy costs. For example, Education officials reported that in recent years, data on FFELP and FDLP borrowers’ use of deferment options, which allow them to delay making payments on a loan when they return to school or experience economic hardship, have become available.
With these data, Education is able to explicitly include in its model the number of students using deferment options and project the effect on cash flows in both FFELP and FDLP, rather than implicitly including deferments in its model through adjustments to the length of time a loan was expected to be in repayment. According to Education officials, more FFELP borrowers than predicted have used deferment options, and when these data were incorporated into FFELP’s cash flow model, they contributed to an increase in reestimated FFELP subsidy costs of $5 billion in fiscal year 2003. Education reported that deferment data will be added to the FDLP cash flow model and will be reflected in reestimated subsidy costs in the fiscal year 2007 Budget of the United States Government. Education also noted that more data have become available in FDLP because the program has been in existence for 10 years, and in FFELP because of improvements made by guaranty agencies. Previously, Education had based its FDLP cash flow assumptions on FFELP data, but Education now has data on when borrowers default or enter repayment based on FDLP borrowers. According to Education, actual defaults in FDLP have not been much different from the assumptions made using FFELP data because defaults are best predicted by the borrower and the type of school attended rather than by the loan program from which the student borrowed. According to Education officials, guaranty agencies—which are responsible for reporting the status of a loan (i.e., in repayment, deferred, defaulted, or in school)—have made changes in their data systems and the quality checks on the data. As a result, Education has been better able to estimate default rates, subsequent collections, and their effect on cash flows in FFELP. In particular, Education noted that there have been improvements in the data it uses in estimating collections of defaulted loans in both FFELP and FDLP, which showed higher than originally estimated collections and contributed to lower reestimated subsidy costs. Additional federal costs and revenues associated with the student loan programs, such as federal administrative expenses, some costs of risk associated with lending money over time, and federal tax revenues generated by both student loan programs, are not included in subsidy cost estimates. These are important factors to consider when determining the costs of the student loan programs; however, they are difficult to measure. Under current law, federal administrative expenses are excluded from subsidy cost estimates. In addition, subsidy cost estimates do not explicitly include all risk that the government incurs by lending money over time. Moreover, both loan programs generate federal tax revenues that are not included in subsidy cost calculations. Under FCRA, federal administrative expenses are excluded from subsidy cost estimates. Federal administrative expenses for the student loan programs have been accounted for in Education’s budget on a cash basis—showing how much money is allocated for administering all federal student aid programs in one fiscal year. The federal government is primarily responsible for administering the FDLP and, for the most part, Education has contracted with private-sector companies to perform administrative tasks, such as originating and servicing loans. In the FFELP, lenders and guaranty agencies perform administrative functions.
In addition to the SAP paid to lenders to guarantee a minimum yield, which includes coverage of the administrative expenses incurred, Education pays guaranty agencies account maintenance fees for their administrative costs. In fiscal year 2006, Education requested $939 million for administrative expenses for all federal student loan and grant aid programs. Of this amount, $238 million was for FFELP administrative expenses and $388 million was for FDLP administrative expenses. When FCRA was first passed, there were concerns about whether agencies could change existing accounting systems to estimate long-term administrative expenses for a loan program. Over the last few years, Education’s Office of Federal Student Aid has been developing a system that allocates its administrative expenses to each student aid program in a particular fiscal year so that management would have information that could be used for decision-making purposes. While developing the system, Education officials reported that some administrative expenses are clearly linked to either FFELP or FDLP—such as payments to originate or service FDLP loans and to service defaulted FFELP loans. However, other administrative expenses are incurred by both loan programs, such as information systems used to process financial aid applications, thus requiring Education to develop a systematic way to allocate such expenses to FFELP or FDLP. In the fiscal year 2006 budget, Education included, as supplementary information, modified cost estimates that included estimated administrative expenses. As shown in table 5, if administrative expenses are included, subsidy cost estimates for loans disbursed in fiscal year 2006 would increase by $1.45 per $100 disbursed in FDLP and by $0.69 per $100 disbursed in FFELP. To produce cost estimates that included administrative expenses, Education not only needed to know how much of an expense was allocated to FDLP or FFELP, but also had to project how such costs might change in the future and whether an expense was paid now or later. For example, servicing costs for an FDLP loan while the borrower is in school are paid in the first years after a loan is disbursed and are lower than the servicing costs incurred once a borrower is in repayment, which are typically paid several years later. According to Education, determining the timing of the expense was important because expenses in later years were discounted and, therefore, cost less in present value terms than those paid in the first year. Moreover, Education officials acknowledged that there are limitations with these estimates because they assumed that administration of student aid programs would remain the same in the future. They reported that there is the possibility that administration processes and functions will change based on legislative or technological changes, but it was not possible to develop assumptions that could be used in estimating the effects of any such changes.
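The discounting point can be made concrete with a two-line example: a dollar of servicing paid shortly after disbursement carries nearly its full face value in present value terms, while a dollar paid years later, once the borrower is in repayment, counts for substantially less. The discount rate and years below are illustrative assumptions only.

```python
# Illustrative only: timing determines how much an administrative expense
# adds to a present-value cost estimate. Rate and years are hypothetical.

def present_value(amount, year, rate=0.05):
    """Discount a single payment back to the year of disbursement."""
    return amount / (1 + rate) ** year

print(f"$1.00 of in-school servicing paid in year 1: "
      f"${present_value(1.00, year=1):.2f} in present value")
print(f"$1.00 of repayment servicing paid in year 8: "
      f"${present_value(1.00, year=8):.2f} in present value")
```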
While current subsidy cost estimates account for some risks—uncertainties regarding future cash flows—they do not include all risks incurred when lending money over time. Among the risks borne by any lender are credit risk—the possibility that the loan will not be fully repaid—and interest rate risk—unanticipated fluctuations in the interest rate due to changes in the economy that cause changes in the present value of the loans’ cash flows. Some studies have commented that by not incorporating all risks in subsidy cost estimates, the government does not present an accurate picture of the costs of its credit programs, including both FFELP and FDLP. Risk can be reflected in subsidy cost estimates in different ways. For example, one way is to incorporate it in estimates of cash flows, and another way is to adjust the discount rate to reflect the risk. Currently, Education incorporates some risks into its FFELP and FDLP subsidy cost estimate model by explicitly adjusting cash flow estimates. For example, credit risk is explicitly incorporated into Education’s subsidy cost model. Cash flow estimates are adjusted to reflect the likelihood that borrowers will default on their loans based primarily on the type of school a borrower attends (e.g., 2-year college, graduate school, etc.). Interest rate risk, however, is not explicitly incorporated into Education’s model. Interest rate fluctuations can affect estimates of SAP and borrower interest payments as well as borrower behavior with respect to loan prepayment and consolidation. Although Education uses estimated prepayment rates in adjusting estimated FFELP and FDLP cash flows, these estimates are based on historical averages rather than an econometric forecast of how interest rates might fluctuate in the future and, thereby, influence borrowers’ decisions to prepay or consolidate their loans. Relying on historical averages—especially averages that do not span a variety of interest rate environments with stable loan terms and borrower characteristics—may fail to capture the tendency for prepayments to increase or decrease at times when it is advantageous for borrowers. CBO and others have suggested that, rather than adjusting cash flows, the discount rate could be changed to incorporate certain types of risk, such as interest rate risk, in estimating subsidy costs of federal credit programs. Currently, subsidy cost estimates calculate the net present value of the loans using the “risk-free” discount rate determined by OMB in accordance with FCRA, which reflects the government’s cost of borrowing funds. The rate is known as risk-free because an investor buying a U.S. Treasury instrument knows with certainty what cash flows will be received and when they will be received, and there is assumed to be no probability of default on the investment. This risk-free discount rate tends to be relatively low compared to interest rates used to discount cash flows in private industry, where interest rates reflect the market’s valuation of transactions and incorporate considerations of various types of risk. In a 2004 report, CBO proposed, among other methods, using a risk-adjusted discount rate, rather than the risk-free rate, to estimate subsidy costs of federal credit programs. In the case of federal student loans, one way to calculate a risk-adjusted discount rate would be to evaluate the secondary market for student loans, where student loans are often sold to banks or other investors. However, there are limitations to this approach given numerous differences in private-sector versus public-sector assessments of risk. Notwithstanding this, the market price of the student loans would reflect the market’s valuation of the loans, because the expected cash flows would have been discounted using a higher discount rate that incorporates risks—such as interest rate risk—that are not included in Education’s subsidy cost model.
The present value (price) of loans being sold on the secondary market would tend to be lower than the government’s valuation of similar loans, i.e., loans with similar default risk, loan amount, time to repayment, and other factors. This difference in loan valuation could be helpful in determining a risk-adjusted discount rate to use in calculating the cost to the government, although determining an appropriate rate would be challenging. Incorporating interest rate risk would affect subsidy cost estimates for both credit programs, FFELP and FDLP. Modeling interest rate risk more systematically through the cash flow estimates would affect prepayment and interest payment projections under FDLP, as well as SAP projections and prepayment activities under FFELP. The extent to which subsidy cost estimates would change for FFELP and FDLP would depend on the interest rate scenarios forecasted and the subsequent effect on cash flows in each program. However, using a risk-adjusted discount rate would have a greater impact on the subsidy cost estimates of FDLP relative to FFELP. This difference would result, in part, from differences in the amount and timing of cash flows: FDLP has large cash outlays early in a loan’s life and large cash inflows later, when loans are in repayment. Thus, these late cash inflows would be discounted at a higher rate and would have a smaller present value than under the current discounting methodology. FFELP, on the other hand, generates some cash inflows to the government early, while cash outflows occur later as loans default or when SAP payments, if any, are made.
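The asymmetry described above can be seen in a small numerical sketch. The cash flow profiles and both discount rates below are stylized assumptions, not Education's or CBO's figures; the point is only that raising the discount rate penalizes a profile with late inflows (FDLP-like) while slightly helping one with late outflows (FFELP-like).

```python
# Illustrative only: effect of a higher, risk-adjusted discount rate on two
# stylized cash flow profiles. All flows and rates are hypothetical.

def npv(flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

# FDLP-like: $100 out the door in year 0, repayments flow back in years 1-8.
fdlp = [100.0] + [-16.0] * 8
# FFELP-like: fee income early; SAP and default claims paid out later.
ffelp = [-3.0, -1.0, 0.0, 1.5, 1.5, 2.0, 2.0, 2.0, 2.0]

for name, flows in (("FDLP-like ", fdlp), ("FFELP-like", ffelp)):
    low, high = npv(flows, 0.05), npv(flows, 0.08)
    print(f"{name}: cost {low:+6.2f} at 5% vs {high:+6.2f} at 8% "
          f"(change {high - low:+.2f} per $100)")
```

Under these assumptions, the FDLP-like profile swings from a projected gain to a cost, while the FFELP-like cost changes only modestly, consistent with the comparison in the text.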
Both FFELP and FDLP generate federal tax revenues that are reflected in the revenue portion of the budget but are not included in subsidy cost calculations. Federal tax revenues are generated by a variety of sources, including private-sector lenders that account for a majority of the lenders that make or hold FFELP loans. Many of these lenders participate actively in the multibillion-dollar financial services industry of taxable and tax-exempt bonds, asset-backed securities, and other debt instruments and pay federal taxes on the income earned from these sources as well as from their student loan business. In addition, other private-sector companies that work with FFELP lenders and investors buying student loan bonds and securities also generate federal tax revenues from the income earned from their participation in FFELP. Moreover, to service and collect defaulted FFELP loans, Education contracts with private-sector companies that are another source of federal tax revenue. Although FDLP is financed and primarily administered by the federal government, Education contracts with private-sector companies for many key administrative tasks, such as servicing loans while borrowers are in school, repayment, or default. In fiscal year 2004, Education reported that it paid $321 million to private-sector contractors to service student loans and perform other administrative tasks in the FDLP. These private-sector contractors earn income from their participation in FDLP on which they may pay federal taxes. Another source of tax revenue is income tax paid by U.S. investors that hold Treasury securities used to finance FDLP loans. Estimating the dollar amount of federal tax revenues generated by private-sector entities and investors in FFELP and FDLP would be challenging. For example, many lenders are large publicly traded financial services companies with student loans being one portion of their business, making it difficult to identify the tax revenue generated from their student loan business. Moreover, to make an estimate of tax revenues would require knowledge of each lender’s profits from its student loan business and applicable tax rates. Significant reestimates of subsidy costs over the past 10 years illustrate the challenges of estimating the lifetime costs of loans. As we have shown, subsidy cost estimates and reestimates are sensitive to the assumptions used in estimating these costs. The historically low interest rates that persisted over the last several years were below levels previously forecasted. Because cost estimates for FFELP and especially for FDLP loans are sensitive to changes between projected and actual interest rates, subsidy cost reestimates varied from original estimates. To the extent that current assumptions correctly predict future loan performance and interest rates, subsidy costs per $100 of FFELP loans made from fiscal years 1994 to 2004 will, in many cases, be lower than originally anticipated. On the other hand, over the same time period, subsidy costs per $100 of FDLP loans will in many cases be higher than originally anticipated. FDLP subsidy costs per $100 of loans disbursed have, in general, remained lower than those of FFELP. Nonetheless, if current assumptions correctly predict future loan performance and economic conditions, the originally estimated gain to the government from FDLP loans made in fiscal years 1994 to 2004 will not materialize, and instead these loans will result in a net cost to the government. In reality, however, subsidy cost estimates of FFELP and FDLP loans made in fiscal years 1994 to 2004 will continue to change as future reestimates incorporate actual experience and new interest rate forecasts. Similarly, initial subsidy cost estimates for loans made in the future will also change over the life of these loans and at times be lower or higher than initially estimated, depending on the extent to which loan performance and interest rates differ from assumptions used to develop initial estimates. Actual subsidy costs for a cohort of student loans will remain unknown until all payments that will be made on such loans have been collected. Although subsidy cost estimates will change from year to year, estimates developed in accordance with FCRA more fully and accurately present the expected long-term costs of federal student loans than did the prior method of calculating costs based on single-year cash flows to and from the government. As a result of FCRA, the budget is a more useful tool for allocating resources among the myriad competing demands for federal dollars than it once was. Subsidy cost estimates, for example, provide policymakers the means to more accurately evaluate the long-term budgetary implications of potential legislative, regulatory, and administrative reforms. At the same time, it is important for policymakers to understand how credit reform subsidy cost estimates are developed and to recognize that such estimates will change in the future. Decisions made in the short term on the basis of these estimates can have long-term repercussions for the fiscal condition of the nation. While subsidy cost estimates include many of the federal costs associated with FFELP and FDLP loans, they do not capture all federal costs and revenues associated with the loan programs.
Consideration of all federal costs and revenues of the loan programs would be an important component of a broader assessment of the costs and benefits of the two programs. Because federal administrative expenses—in accordance with FCRA—are excluded from subsidy cost estimates, for example, these estimates can underestimate the total lifetime costs of FFELP and FDLP loans. Other costs and revenues are also not considered in subsidy cost estimates, including interest rate risk inherent in lending programs and federal tax revenues generated by private-sector activity in both FFELP and FDLP. Calculations of total federal costs would be enhanced were these additional costs and revenues considered, though doing so may require complex methodologies or data that are not currently readily available. We provided Education with a copy of our draft report for review and comment. Education reviewed the report and had no comments. Education noted that because the report did not include recommendations for the Department, it was not providing a formal response to be included in the report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time we will send copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to the report are listed in appendix II.

Appendix I: Comparison of Fiscal Year 2006 FDLP and FFELP Reestimated Subsidy Costs per $100 Disbursed, by Loan Type and Cohort

[Table: subsidy cost per $100 disbursed, in nominal dollars, by loan type and cohort.]

The following individuals made important contributions to the report: Jeff Appel, Assistant Director; Andrea Sykes, Analyst-in-Charge; Nagla’a El-Hodiri; Jeffrey W. Weinstein; Christine Bonham; Marcia Carlsen; Austin Kelly; Mitch Rachlis; and Lauren Kennedy.
In fiscal year 2004, the federal government made or guaranteed about $84 billion in loans for postsecondary education through two loan programs—the Federal Family Education Loan Program (FFELP) and the Federal Direct Loan Program (FDLP). Under FFELP, private lenders fund the loans and the government guarantees them a minimum yield and repayment if borrowers default. When the interest rate paid by borrowers is lower than the guaranteed minimum yield, the government pays lenders special allowance payments (SAP). Under FDLP, the U.S. Treasury funds the loans, which are originated through participating schools. Under the Federal Credit Reform Act (FCRA) of 1990, the government calculates, for purposes of the budget, the net cost of extending or guaranteeing credit over the life of a loan, called a subsidy cost. Agencies generally update, or reestimate, subsidy costs annually to include actual program results and adjust future program estimates. GAO examined (1) whether reestimated subsidy costs have differed from original estimates for FFELP and FDLP loans disbursed in fiscal years 1994 through 2004; (2) what factors explain changes between reestimated and original subsidy rates, that is, subsidy cost estimates per $100 disbursed; and (3) which federal costs and revenues associated with the student loan programs are not included in subsidy cost estimates. Both FFELP and FDLP subsidy cost reestimates have differed from original estimates for loans made in fiscal years 1994 through 2004, reflecting the challenges inherent in estimating the actual costs of loans made under each of these federal loan programs. Reestimated subsidy costs for FFELP loans were close to or lower than original estimates for loans made in fiscal years 1994 to 2002, but higher than originally estimated for loans made in fiscal years 2003 and 2004. FDLP reestimated subsidy costs were generally similar to or higher than originally estimated for loans made in fiscal years 1994 through 2004. Differences between original and reestimated subsidy cost estimates per $100 disbursed were, in part, due to market interest rates that were lower than originally forecasted, greater than anticipated loan consolidation, and the availability of additional data on student loans. Each of these factors has affected reestimated subsidy costs for each loan program in a different way. For example, interest rates fell to lower than expected levels in 2001, and this condition persisted through 2004. For FFELP, lower than expected interest rates made the difference between the borrower interest rate and lender yield smaller than expected, resulting in lower SAP paid to lenders, which in turn resulted in lower reestimated subsidy costs. For FDLP, lower than expected interest rates contributed to higher reestimated subsidy costs because the government received smaller interest payments from borrowers than originally anticipated and, in some cases, the rate paid by student borrowers fell below the government's fixed borrowing rate. Certain federal costs and revenues associated with the student loan programs, such as federal administrative expenses, some costs of risk associated with lending money over time, and federal tax revenues generated by both student loan programs, are not included in subsidy cost estimates. For example, under current law, federal administrative expenses are excluded from subsidy cost estimates.
Moreover, both loan programs generate federal tax revenues from private-sector companies and investors that are reflected in the revenue portion of the budget but are not included in subsidy cost calculations. Estimating the amount of federal tax revenues generated by the loan programs would be difficult and was beyond the scope of our review. Education reviewed a draft copy of this report and did not have any comments.
The college textbook market is complex, with a number of parties involved in the development and distribution of course materials. First, publishers develop and produce textbooks and accompanying materials for faculty and students. Publishers then market their materials to faculty, school administrators, and sometimes academic departments that make decisions about what course materials to assign to students. Publishers employ sales representatives who often speak with instructors in person to discuss product options. They also provide instructors with free sample materials for their consideration. Publishers produce a variety of products and services for faculty to choose from in selecting course materials. In addition to traditional textbooks, faculty can work with publishers to create customized course materials by adding or deleting information from a single textbook or multiple sources. Faculty may also select supplemental materials, such as workbooks, lab activities, and study guides. Supplemental materials and textbooks may be sold together in one package, referred to as a bundle, and may also be available for sale separately. In addition to print versions, course materials are often available as digital products that can be accessed on computers or e-readers. Publishers have also developed online interactive systems that integrate multimedia instructional material with supplemental materials like homework or personalized quizzes and tutorials. Faculty or other designated parties select course materials and submit orders for them according to the process and time frames established by their school. Upon receiving information about selected materials and enrollment numbers from the school, campus bookstores determine how many to order and stock. These school-affiliated bookstores may include independent booksellers and large retail chains. They generally sell both new and used books, and some offer digital products and textbook rental programs. Many campus bookstores also have websites through which students can purchase or rent textbooks online. In addition to campus bookstores, students may be able to obtain course materials from a variety of sources. For example, students may purchase or rent course materials from online retailers, publishers, or through peer-to-peer exchanges, among other outlets. They may also borrow materials from libraries or peers. An emerging source of course materials is the open source model, in which textbooks and other materials are published under a license that allows faculty to personally adapt materials and students to access them for free or for a nominal cost. Table 1 below summarizes common options available for obtaining course materials. After completing a course, students may be able to offset their costs by selling back their course materials to the campus bookstore, online retailers, or wholesale companies. Bookstores generally buy as many used textbooks from their students as possible, but there are limits to what students can sell back. For example, electronic products are generally not eligible for resale because they are accessible through one-time access codes or downloading. Opportunities to sell back customized course materials may also be limited given their uniqueness to a particular course on a particular campus.
In 2005, based on data from the Bureau of Labor Statistics, we reported that new college textbook prices had risen at twice the rate of annual inflation over the course of nearly two decades, increasing at an average of 6 percent per year and following close behind increases in tuition and fees. More recent data show that textbook prices continued to rise from 2002 to 2012 at an average of 6 percent per year, while tuition and fees increased at an average of 7 percent and overall prices increased at an average of 2 percent per year. As reflected in figure 1 below, new textbook prices increased by a total of 82 percent over this time period, while tuition and fees increased by 89 percent and overall consumer prices grew by 28 percent. While the Bureau of Labor Statistics publishes data annually on college textbook pricing, there are no comparable, nationally representative data sources that estimate student spending. Given the range of options for format and delivery of course materials, students are not limited to purchasing new, print books. Students may lower their costs by purchasing used or digital textbooks, renting materials, or taking advantage of other affordable options. However, the price of new, print books often drives the prices of other items. Specifically, as we reported in 2005, used textbook prices are directly linked to new textbook prices in that retailers typically offer used books for about 75 percent of the new, print price. Prices for other formats, such as rentals and digital books, are similarly offered at a discount based on the new, print price. Thus, while students may be able to find lower-priced options, increasing prices for new, print books will likely lead to similar price increases for other related course materials.
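As a back-of-the-envelope check on these figures, annual increases compound: an average rise of roughly 6 percent per year sustained over the decade is consistent with a cumulative increase of about 82 percent. The sketch below uses an assumed compound-equivalent rate of 6.2 percent; the underlying BLS index values are not reproduced here.

```python
# Rough consistency check: ~6 percent average annual textbook price growth
# compounds to roughly the reported 82 percent rise over 2002-2012.
annual_increase = 0.062   # assumed compound-equivalent annual rate
years = 10                # 2002 through 2012

cumulative = (1 + annual_increase) ** years - 1
print(f"{annual_increase:.1%} per year for {years} years -> "
      f"{cumulative:.0%} cumulative increase")   # about 82%
```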
In part to ensure that students have access to information about selected course materials, Congress included several provisions related to textbook information in the Higher Education Opportunity Act (HEOA), as described below. When publishers provide faculty or others responsible for selecting course materials at schools with information about textbooks or supplemental material, they are required to provide the price at which they would make the textbook or supplemental material available to the school’s bookstore (often referred to as the net price) and, if available, to the public (often referred to as the retail price or the list price). They are also required to provide a description of substantial content revisions between the current and prior edition, the copyright dates of the three prior editions, if any, and any alternate formats available, along with their net prices and retail prices, if available. 20 U.S.C. § 1015b(c). While publishers may suggest retail prices, retailers determine the final price at which to sell the materials. Schools are generally required to disclose information on textbooks to students and campus bookstores. Specifically, to the maximum extent practicable, in their online course schedules, schools are to provide the International Standard Book Number (ISBN) and retail price for required and recommended material for each course listed in the schedule used for preregistration and registration purposes, with some exceptions. If the ISBN is not available, schools are to provide the author, title, publisher, and copyright date in the schedule. If disclosing the required information is not practicable, the school is to list it as “to be determined” (TBD) on the course schedule. In addition to making these disclosures on the course schedule, schools are to provide their affiliated bookstores with the same information, along with enrollment data for each course as soon as practicable upon the request of the bookstore. Beyond these requirements, HEOA encourages schools to provide information to students about institutional textbook rental programs, buyback programs, and any other cost-saving strategies. Figure 2 illustrates the types of information communicated throughout the process of selecting and ordering course materials. In addition to the disclosure requirements, HEOA requires publishers to make college textbooks and materials sold together in a bundle available for sale individually, a practice referred to as unbundling (see fig. 3). While each component must be available individually, publishers may continue to sell course materials in bundles. This requirement does not apply in the case of integrated textbooks. These are books with related materials that are either governed by a third-party contract that prohibits their separation, such as those produced by another company, or are so interrelated that the components would be unusable on their own. For example, a computer software textbook may include multimedia content that is essential to understanding the book. HEOA prohibits Education from issuing regulations related to the textbook information section of the law. While Education therefore has a limited role in this area, it provided some early nonregulatory guidance to help publishers and schools understand the textbook provisions. In addition, Education encourages students to consider textbook costs in preparing for college and collects student complaints, including those related to textbooks. The eight publishers included in this study have disclosed textbook information including retail prices, available alternative formats, and descriptions of substantial content revisions between editions. Seven publishers also provided net prices and six provided information on prior copyright dates to faculty. Two smaller publishers told us they did not have a practice of disclosing prior copyright dates, and one said net pricing was not part of its business practices. Publishers included in this study have communicated this information to faculty online and in other marketing materials, as well as in the course materials themselves. In most cases, publishers’ textbook information was available to students and the public, in addition to faculty. For example, all eight publishers chose to disclose retail prices and format options in publicly accessible areas of their websites. Representatives from one of these publishers stated the company’s intent in making the information publicly available was to increase transparency. Another publisher chose to disclose net pricing information to faculty on a restricted-access website—a decision representatives said was meant to avoid public confusion about the difference between net and retail prices. Instead of using a website, representatives from another publisher said they primarily use marketing materials distributed directly to faculty to disclose net pricing and format information. In addition, publishers provided some required information in the course materials themselves. For example, five of eight publishers disclosed prior copyright dates on the inside pages of textbooks. See figure 4 for the types of information publishers disclosed and the methods they utilized for doing so.
Representatives of the majority of publishers included in our study said disclosing required textbook information involved process changes that took initial investments of time and financial resources. Some publishers told us they made changes to their technology and production systems, as well as their marketing practices. For example, representatives from two publishers said they had to change their internal databases to include all the textbook information specified in HEOA. Four publishers also told us they conducted extensive training with staff about HEOA implementation. Despite the changes publishers made to disclose required textbook information, four publishers did not view the costs directly associated with implementing HEOA as substantial. All publishers included in this study have made bundled materials available for sale individually, which is a requirement of HEOA. In some cases, publishers said they began phasing out bundles before HEOA’s passage in 2008, which we also noted in our 2005 report. For example, one publisher told us it made the decision to make its bundled products available for sale individually prior to HEOA’s enactment because it wanted to be more transparent about its product offerings. Another publisher said that most of its bundled materials were available for sale individually by 2004, and in response to HEOA, it only had to change its external website to allow faculty to view the individual components. In contrast, another publisher noted that in response to HEOA, it had to begin setting prices for all bundled components and change its process for storing them. In order to make faculty aware that bundled materials are available individually, publishers use the same methods they use for disclosing information about other course materials. For example, five publishers display options for supplemental materials or bundles on their public websites. These pages include lists of each component, sometimes with individual prices or ISBNs. Offering individual components for sale gives students more options when selecting their course materials, but several bookstores, faculty, students, and national associations told us that some market conditions may limit the potential for savings or student choice. For example:

As we reported in 2005, buying individual components may not be cheaper for students, as bundled products may be offered at a discount compared to the price of buying each component separately. Some publishers said that the discounts available on bundles are a selling point with faculty. Several faculty, students, and bookstore representatives we spoke with for this study told us that the price of individual components, particularly electronic course materials, is often higher than it would have been in a bundle. This pricing structure may limit students’ ability to reduce their costs by purchasing less expensive used books and choosing which supplements to purchase.

Students may have limited options for obtaining unbundled components. When faculty select bundles, campus bookstores may choose not to stock all the individual components, according to representatives from two campus bookstores and a national campus retailer. In addition, publishers may not make some separate components available for sale through campus bookstores, according to representatives from an industry trade group and a national campus retailer.
In such cases, students wishing to obtain individual components would need to seek other outlets, such as publisher websites or online retailers. As previously discussed, the HEOA requirement to make materials available for sale individually does not apply in the case of integrated textbooks, which either are governed by a third-party contract that prohibits separation or would be rendered useless if separated. For example, one publisher reported that it is not authorized to separately sell a listening component that accompanies a music book under the terms of the contract it has with the company that produced the music. While representatives of one industry trade group expressed concern that the exemption for integrated textbooks offered a way around the requirement to ensure all products in a bundle are available for sale individually, the five large publishers we spoke with said they offer very few of these materials. For example, one publisher estimated that integrated textbooks make up 2 percent of its inventory, while representatives of another said they offer one integrated textbook. Representatives of faculty groups, bookstores, and publishers we interviewed said the availability of information and unbundled materials has had little effect on college textbook selection decisions. Faculty told us they typically prioritize selecting the most appropriate materials for their courses over pricing and format considerations. One faculty group we spoke with explained that the quality and relevance of the materials are the key factors in finding the best match for the course. Another group said they need to determine whether the material is at a level suitable for the students likely to enroll and comprehensive enough for the content they plan to cover. Only after they have identified the most appropriate course materials will faculty consider pricing and format options, according to stakeholders. For example, a representative of a national campus retailer said faculty ask about cost-saving options like digital formats and textbook rentals after they have identified the best materials to help their students master the necessary concepts. Changes in technology and available options in the college textbook market—factors unrelated to HEOA—have also shaped faculty decisions about course materials. For example, representatives from a publisher and a national campus retailer noted there is growing interest in digital assessment tools that allow faculty to track student progress in an effort to improve student outcomes. In 2005, we reported that publishers were developing these types of digital products to help enhance faculty productivity and teaching. Currently, publishers are expanding these offerings with interactive products like online interactive systems, which may include some combination of instructional material, adaptive homework questions, exams, worksheets, or tutoring programs in one system. Representatives of two campus bookstores and a faculty group told us that online interactive systems are becoming more popular at their schools. Although HEOA requires publishers to make textbooks and supplemental materials available for sale individually, faculty can still select bundled products for their courses. Some publishers and bookstores told us there was initial confusion about whether the law constrained faculty choice in selecting bundles, and they employed various communication efforts to help clarify the issue.
Our review of a nationally representative sample of school websites shows that bundles continue to be assigned for some courses. More specifically, in cases where schools provided textbook information online, we found that an estimated 58 percent included required materials that appeared to be bundles for at least one of the three courses we reviewed. Although faculty decisions about textbook selections have not changed much in response to publisher practices, representatives of faculty groups told us they are more aware of affordability than they used to be. For example, faculty we spoke with at a public school expressed a strong interest in finding appropriate textbook options that also save students money because they often serve low-income students who work part-time. They added that a new textbook could cost over $200, which is more than the course fee of $136. These faculty also said that as a result of HEOA, more faculty are providing information about textbook selections in their syllabi as early as possible, and are putting books on reserve in the library. In addition, several faculty from different schools reported using the same textbook for multiple semesters, which allows students to buy a used version at a reduced price. At one public school, faculty told us their school’s guidelines encourage them to consider both the price of course materials and the use of the same edition of a textbook for as long as possible. With regard to other cost-saving solutions, a few faculty told us they have developed their own course materials. For example, one professor said a lab manual for his course could cost $50-$60 when developed by a publisher; instead, he created his own, which costs students about $10 in printing fees. A music professor said that with the advent of free online videos, he can teach his class almost entirely by assigning links to websites. Based on our review of a nationally representative sample of school websites, most schools provided students and college bookstores with the textbook information specified in the HEOA provisions. We estimate that 81 percent of schools, serving an estimated 97 percent of college students nationwide, made textbook information—such as the ISBN or retail price—available online in time for the fall 2012 term. In addition, an estimated 93 percent of these schools made the information publicly available without the use of a password, allowing both current and prospective students to view it. Schools are structured and operate in various ways, and HEOA allows some flexibility in whether and how they disclose textbook information. An estimated 19 percent of schools did not provide textbook information online. When we contacted all such schools in our sample to inquire why they did not provide that information, representatives from 62 percent said they included the cost of textbooks in tuition and fees or assessed students a separate fee for textbooks. Other reasons cited included not posting a course schedule online and supplying required materials through the school’s library (see fig. 5). The extent to which schools provided textbook information online varied by school sector and level (see fig. 6). The vast majority of public and private nonprofit schools provided information compared to about half of private for-profit schools, while 4-year schools provided information more often than 2-year schools.
In turn, we found the practice of including the cost of textbooks in tuition and fees or assessing students a separate fee for textbooks occurred at private for-profit schools more often than at public or private nonprofit schools, and at 2-year schools more often than at 4-year schools. Representatives we spoke with at one such private for-profit, 2-year school that offers medical, business, and technical programs told us the school has a centralized curriculum and standardized textbooks for each program across its 10 campuses, allowing it to purchase textbooks and materials in bulk at lower prices and pass the savings on to its students. This is a common practice among schools with specialized or technical programs of study, according to a higher education association we interviewed. An estimated 80 percent of schools that provided textbook information online to students and bookstores included the primary elements specified in HEOA for all textbooks listed for the three courses we reviewed. In doing so, almost all of these schools provided the ISBN or alternative information outlined by HEOA (title, author, copyright date, and publisher) and some retail pricing information (i.e., new, used, rental, or other price) for all courses that required textbooks. A small share of schools indicated textbook information was “to be determined” for one or more of their courses. Of the remaining 20 percent of schools that provided textbook information online, most included the primary elements specified in HEOA for all textbooks listed for one or two courses we reviewed, and almost all included some information that could assist students in shopping for their textbooks, such as the titles or authors. Schools also provided additional information—such as information on textbook rental programs, used books, and digital products—to students, as encouraged by the HEOA provisions. Of the estimated 81 percent of schools that provided textbook information online for the fall 2012 term: an estimated 67 percent had an institutional textbook rental program; an estimated 73 percent provided some used textbook pricing information for at least one course; and an estimated 40 percent provided other pricing information—almost always for digital products—for at least one course. Representatives of all campus bookstores and national campus retailers we spoke with said they have ramped up their rental programs in the last few years, and cited discounts for renting textbooks ranging from 14 to 60 percent off the new retail price. They also reported their stores carry used and digital books to provide students with additional options. One campus bookstore reported that its students saved more than $150 per semester, on average, after it introduced textbook rental and digital book programs. Representatives of schools, bookstores, and higher education associations we spoke with said the costs to schools and bookstores of implementing the HEOA provisions were manageable. Administrators and campus bookstore representatives of six schools, as well as a national campus retailer, said they invested some time or money to implement the HEOA provisions—for example, by linking up school and bookstore databases and convening internal meetings—but that their costs were not substantial. Representatives of another national campus retailer we spoke with said their costs were more considerable because they developed software to help their schools comply.
We also heard from representatives of two higher education associations, which represent over a thousand schools nationwide, that implementation of the HEOA textbook information provisions had gone smoothly and that they had not heard any complaints from their member schools. Schools varied in their approach to implementing the HEOA provisions. Some schools that provided textbook information online relied on their campus bookstores to provide it, according to school administrators and national campus retailers. For example, administrators we spoke with at four schools said they provided textbook information to students by adding links from their schools’ online course schedules to their bookstores’ websites, which in some cases involved linking school and bookstore databases so that students could go directly from the course schedule to the bookstore’s list of textbook information for each course. Administrators at two of these schools said this approach was simpler and more cost-effective than other options they considered. Another school went a step further and provided textbook information directly on the school’s course schedule. Specifically, administrators said they used a web-based tool from the company that managed their bookstore to display textbook information directly on the school’s online course schedule, at no additional cost to the school. A student at this school told us it was helpful to have course and textbook information available in one place. However, administrators at another school that considered this approach, but ultimately included a link to the bookstore website instead, said it would have required substantial resources and maintenance to implement. Besides providing textbook information to students, schools also submitted the number of students enrolled in each course to their campus bookstores, as outlined by HEOA. Administrators at four schools we spoke with said they provide this information electronically by transferring it from their course registration database to the campus bookstore’s database. Some bookstores moved up their deadlines for faculty to submit their textbook choices as a result of implementing the HEOA provisions. Specifically, representatives from an industry trade group representing school bookstores said some of their members moved their fall deadlines up by as much as 2 months to provide textbook information to students at the time of preregistration. Representatives of two campus bookstores and both national campus retailers we spoke with reported that getting faculty to submit their textbook choices by the deadlines had already been a challenge prior to HEOA. However, faculty we spoke with at one school, as well as representatives from two campus bookstores and an industry trade group, acknowledged there may be legitimate reasons why textbook choices are not submitted by the deadline, such as a given course having no instructor assigned. Students, faculty, school administrators, and most other stakeholders we spoke with said students have benefited from having timely and reliable textbook information. As faculty submit more complete and timely textbook information, schools are better able to provide that information to students, according to faculty at one school and representatives from a campus bookstore and an industry trade group. Representatives of student organizations at three schools said they now have sufficient information and time to shop for their course materials before each academic term.
Students told us they use ISBNs found on their campus bookstore’s website to actively research textbook prices at both the campus bookstore and other retailers, and that they make better spending decisions as a result. Students at one school said researching their options to find the highest quality materials for the lowest cost is easy and worth the little time it takes. For example, one student estimated she saves about $150 per semester by being able to compare prices among multiple retailers. Students at one of the three schools said their campus bookstore includes competitors’ prices on its website, allowing students to see which retailer offers the lowest price for a given textbook. While students said price is the biggest factor affecting their decisions about obtaining textbooks, they also consider how they want to use the books and how long they want to keep them. For example, students at three schools mentioned that some students prefer to highlight or take notes in print versions of books rather than mark up digital books. Students also said they are able to obtain ISBNs and other information in enough time to acquire their required textbooks by the first day of class. Students further benefit from increased information on the options for obtaining textbooks, such as institutional rental programs, according to representatives from one bookstore and an industry trade group we interviewed. In addition to talking to students and other stakeholders, we reviewed 153 entries from Education’s Program Compliance Complaints Tracking System from July 2010 through April 2012 that mentioned the word “book.” While we found several entries related to the timing of financial aid for purchasing books, our review yielded no complaints related to the HEOA textbook provisions. As the cost of attending college continues to rise, students and their families need clear and early information about the cost of textbooks. In implementing HEOA’s textbook provisions, publishers and schools have provided them with increased access to such information. Greater transparency of information alone, however, does not make textbooks less expensive, as the affordability of course materials results from the complex market forces that drive prices. Moreover, the textbook market is different from other commodity markets; although students are the end consumers, faculty are responsible for selecting which textbooks students will need, thereby limiting students’ ability to contain costs. Nevertheless, the proliferation of new products, formats, and delivery channels has left students with many options for obtaining course materials. For example, students can now choose whether to realize savings upfront by selecting digital or rental options, or on the back end by reselling books in the used book market. In light of the increased complexity of students’ options, the information schools have provided in implementing HEOA has proven useful to students in making informed decisions. As a result, even though students cannot directly control the cost of their textbooks, they can better manage these costs by comparison shopping and making deliberate decisions about what to purchase, and from where. We provided a draft of the report to the Department of Education for review and comment. Education provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To examine publishers’ efforts and faculty decision-making, we interviewed representatives from eight publishers that develop materials for the college textbook market and reviewed supporting documentation from each of them, including documents from their websites and ones obtained at the interviews. To obtain a range of perspectives, we selected five publishers that represented over 85 percent of new U.S. higher education textbook sales, according to information they provided us, as well as three smaller publishers. The views of publishers we spoke with cannot be generalized to all publishers. While we reviewed documentation of publisher efforts to provide information to faculty and make bundled materials available for sale individually, we did not evaluate whether these practices, as supported by documentation or described to us in interviews, were in compliance with the law. We also interviewed others with relevant expertise, including faculty groups at three schools, two national campus retailers that operate hundreds of bookstores nationwide, a textbook rental company, a company that provides price comparison software for campus bookstores, and professional organizations that represent publishers, bookstores, faculty, students, and schools. To determine the extent to which postsecondary schools have provided students and college bookstores access to textbook information, we reviewed websites of a nationally representative, stratified random sample of 150 schools to determine the extent to which they disclosed textbook information in their fall 2012 course schedules. The sample was drawn from Education’s 2010-2011 Integrated Postsecondary Education Data System (IPEDS), which contains data on over 7,200 institutions eligible for federal student aid programs authorized under Title IV of the Higher Education Act of 1965, as amended. We assessed the reliability of the IPEDS data by reviewing Education’s quality control procedures and testing the data electronically. We determined the data were sufficiently reliable for our purposes. Our sampling frame consisted of all public, private nonprofit, and private for-profit 2-year and 4-year degree-granting postsecondary schools that participated in Title IV federal student aid programs, had undergraduate programs, were not U.S. military academies, and had at least 100 students, yielding a universe of 4,293 schools. We stratified the sampling frame into six strata by sector (public, private nonprofit, and private for-profit) and level (2-year and 4-year). This sample of schools allowed us to make national estimates about the availability of textbook information, as well as estimates by sector and level. The percentage estimates reported from this review have margins of error at the 95 percent confidence level of 9 percentage points or less, unless otherwise noted.
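As a simplified illustration of the margin-of-error arithmetic for a sample of 150 schools drawn from a frame of 4,293, the sketch below computes the worst-case 95 percent confidence half-width for an estimated proportion, using a simple-random-sample formula with a finite population correction. GAO's actual estimates reflect the stratified design and weighting, which this approximation ignores.

```python
# Simplified margin-of-error check for an estimated proportion. Treats the
# sample as simple random; the actual stratified design is not modeled here.
import math

def margin_of_error(p, n, N, z=1.96):
    """95% margin of error for a proportion, with finite population correction."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

n, N = 150, 4293
print(f"worst-case margin of error (p = 0.5): "
      f"+/-{margin_of_error(0.5, n, N) * 100:.1f} percentage points")
# about +/-7.9 points, in line with the reported bound of 9 points or less
```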
In order to review comparable information across the sampled schools, we developed a standardized web-based data collection instrument and pre-tested it in July 2012. Using the finalized data collection instrument, we examined available textbook information for an introductory business, psychology, and biology course at each school in our sample. We judgmentally selected these subjects because they are among the top 10 undergraduate majors for degree-completing students, according to IPEDS data. In addition, these courses likely affect a larger number of students than upper division courses. We reviewed school websites from July through September 2012, as students would be obtaining textbooks for their fall 2012 term courses. In cases where we were unable to find or access textbook information online for a particular school, we contacted the school to determine how to access it and followed up with representatives as appropriate to determine the reasons why information was not available online. We recorded and verified the information we obtained in the web-based data collection instrument to facilitate the analysis across schools. We collected complete information for all 150 schools in our sample, resulting in a response rate of 100 percent. Based on the results of our review of school websites, we conducted follow-up interviews regarding the implementation of the HEOA textbook provisions and the associated costs and benefits with various representatives from eight schools in our sample, including seven groups of administrators, four campus bookstores, three faculty groups, and three student government groups. (See table 2 for a list of schools.) To select this subset of eight schools, we took into account characteristics such as sector, level, and geographic location. To provide background information on changes in the prices of textbooks over time, we reviewed public data from 2002-2012 from the Consumer Price Index (CPI) published by the Department of Labor’s Bureau of Labor Statistics (BLS). The CPI data reflect changes over time in the prices paid by urban consumers for different goods and services. Specifically, we reviewed CPI data for three expenditure categories: college textbooks, college tuition and fees, and overall prices. The college textbooks data are limited to the price of new print and digital books. In addition, the time frames for collecting price information in the field vary, so different textbooks may be priced at different times. We assessed the reliability of these data by reviewing documentation of the CPI data and interviewing BLS officials about the methodology used to calculate the data. We determined that these data were sufficiently reliable for the purposes of our study. We provided BLS with an opportunity to review our use of the CPI data. In addition, we reviewed complaints submitted to Education to determine whether any were related to the information provided by schools about required textbooks. Specifically, Education provided GAO with an excerpt from its Program Compliance Complaints Tracking System containing any complaints in which the word “book” appeared. There were 153 such complaints from July 2010, when the HEOA textbook provisions went into effect, through April 2012. We took steps to verify the reliability of the data. We interviewed Education officials about the methodology used to calculate the data and determined they were sufficiently reliable for our purposes. We also reviewed relevant studies and federal laws. In addition to the contact named above, Debra Prescott (Assistant Director), Lara Laufer, Jeffrey G.
Miller, Amy Moran Lowe, Michelle Sager, Najeema Washington, and Rebecca Woiwode made significant contributions to this report. Also contributing to this report were David Barish, James Bennett, Deborah Bland, David Chrisinger, Daniel Concepcion, Jennifer Cook, Rachel Frisk, Alex Galuten, Bill Keller, Ying Long, Jean McSween, Karen O’Conor, and Betty Ward-Zukerman.
The rising costs of postsecondary education present challenges to maintaining college affordability. Textbooks are an important factor students need to consider when calculating the overall cost of attending college. In an effort to ensure that faculty and students have sufficient information about textbooks, Congress included requirements in the Higher Education Opportunity Act (HEOA) concerning publisher and school disclosures, as well as publisher provision of individual course materials. HEOA directed GAO to examine the implementation of the new textbook provisions. This report addresses (1) the efforts publishers have made to provide textbook information to faculty and make bundled materials available for sale individually, and how these practices have informed faculty selection of course materials; and (2) the extent to which postsecondary schools have provided students and college bookstores access to textbook information, and what the resulting costs and benefits have been. To conduct this study, GAO interviewed eight publishers representing over 85 percent of new U.S. higher education textbook sales, administrators at seven schools, four campus bookstores, two national campus retailers, faculty and student groups at three schools, and others with relevant expertise. GAO also reviewed websites of a nationally representative sample of schools, complaint data from Education, and relevant federal laws. GAO makes no recommendations in this report. The Department of Education provided technical comments, which were incorporated as appropriate. Publishers included in GAO's study have disclosed textbook information required by HEOA, such as pricing and format options, and made components of bundled materials available individually, but stakeholders GAO interviewed said these practices have had little effect on faculty decisions. While most publishers in GAO's study provided all relevant textbook information, two smaller publishers did not provide copyright dates of prior editions, and one did not provide certain pricing information. Publishers communicated information to faculty online and in other marketing materials, and in most cases the information was available to students and the public. In addition, publishers said they began making bundled materials available for sale individually before HEOA was passed. Faculty GAO interviewed said they typically prioritize selecting the most appropriate materials for their courses over pricing and format considerations, although they said they are more aware of affordability issues than they used to be. Changes in the availability of options in the college textbook market that are not related to HEOA, such as the increase in digital products, have also shaped faculty decisions about course materials. Based on GAO's review of a nationally representative sample of schools, an estimated 81 percent provided fall 2012 textbook information online, and stakeholders GAO interviewed said implementation costs were manageable and students have benefited from increased transparency. HEOA allows schools some flexibility in whether and how they disclose information, and an estimated 19 percent of schools did not provide textbook information online for various reasons, such as including textbook costs in tuition and fees or not posting a course schedule online. Representatives of most schools and bookstores, as well as others GAO interviewed, said implementation costs were not substantial.
In addition, there was general consensus among students and others GAO interviewed that students have benefited from timely and dependable textbook information. Specifically, representatives of student organizations said they had sufficient information and time to comparison shop for their course materials before each academic term.
Employers have been the driving force behind the growing move to compare health care providers and plans on the basis of their performance. These employers have worked both individually and collaboratively with providers, health plans, and government to produce information that will allow them to assess the quality of the care they purchase. Health plans have been publishing reports comparing their performance to their peers or to a national standard. State governments have published comparative information, often focused on specific procedures performed in hospitals. Although the federal government was responsible for the first widespread public disclosure of hospital performance data in 1987, it discontinued this practice in 1993. As a payer of health care services on behalf of Medicare and Medicaid beneficiaries, the Health Care Financing Administration (HCFA) lags behind others in making performance data public. Report cards can include a variety of performance indicators, whether structural, process, or outcome based. Structural indicators measure the resources and organizational arrangements in place to deliver care, such as the ratio of nurses to inpatient beds. Process indicators measure the physician and other provider activities carried out to deliver the care, such as the rates of childhood immunization. Outcome indicators measure the results of the physician and other provider activities, such as mortality, morbidity, and customer satisfaction. In 1989, a group of employers initiated one of the most significant efforts to identify uniform and standardized performance indicators. This effort resulted in the creation of a performance measurement system known as the Health Plan Employer Data and Information Set (HEDIS). Several business coalitions and health care organizations used the first HEDIS measures in 1991. The nonprofit National Committee for Quality Assurance (NCQA) has led the effort to revise the measures, issuing HEDIS 2.0 in 1993 and HEDIS 2.5 in 1995. Current HEDIS measures focus on process indicators. (See table 1 for a list of some key HEDIS measures.) Using HEDIS as a base, some employers have begun to distribute to their employees educational materials that include outcome measures. For example, the California Public Employees’ Retirement System (CalPERS) recently distributed to its employees a performance report about the health plans it offers. Although it had furnished some comparative information to its employees in previous years, the information generally featured costs and benefits. CalPERS’ May 1995 Health Plan Quality/Performance Report is its first effort at distributing comprehensive information that includes both specific quality performance indicators and member satisfaction survey results. The quality performance data are based on HEDIS indicators measuring health maintenance organizations’ (HMO) success with providing childhood immunizations, cholesterol screening, prenatal care, cervical and breast cancer screening results, and diabetic eye exams. Employee survey results include employee satisfaction with physician care, hospital care, and the overall plan, and the results of a question asking whether members would recommend the plan to a fellow employee or friend. Some employers are using third-party health care accrediting organizations to measure health plan performance using structural indicators.
These employers are requiring the health plans they contract with to be accredited by organizations such as NCQA and the Joint Commission on Accreditation of Healthcare Organizations. Furthermore, some accrediting agencies publicize their accreditation decisions, which allows employers and individual consumers to consider accreditation status in their health care purchasing decisions. For example, a consortium of employers has elected to exclude a Florida HMO from new business with its employer-sponsored health plans because of the HMO’s failure to obtain accreditation. Health plans have published comparative information intended to assist individual consumers in their health care choices and health care providers in their quality improvements. For example, in 1993, Kaiser Permanente Northern California Region released a report on 102 performance measures divided into the following categories: childhood health, maternal care, cardiovascular disease, cancer, common surgical procedures, other adult health, and mental health/substance abuse. (See fig. 1.) Although Kaiser was one of the first health plans to publish this kind of information, an increasing number of health plans are now providing similar information. Health plans have been exploring new ways to make information readily available and understandable to individual consumers. For example, on September 15, 1995, HealthPartners, Inc., will initiate a consumer-oriented program using touch-screen computers. Initially, at least 50 computers will be installed permanently at 50 employer sites, and at least 100 computers will be rotated among other employers. This will allow employees to obtain details about any one of the plans’ primary care sites, such as its physicians’ credentials, on-site services offered, and specialists to whom its physicians refer. Because health plan members are expected to enroll in a specific care delivery system—a set of primary care sites with affiliated specialists—HealthPartners will furnish data about each care system to help plan members make a decision about which one to join. Currently these data include preventive screening rates and patient satisfaction measures. HealthPartners anticipates expanding the availability of touch-screen computers to more public spaces, such as shopping malls, after physician concerns about data confidentiality and other matters are resolved. The states have also been active in providing information about provider performance to the public. Forty states have mandated the collection, analysis, and public distribution of health care data, such as hospital use, charges or cost of care, effectiveness of health care, and performance of hospitals. For example, since 1992 Pennsylvania has released four report cards on the hospitals and physicians in the state performing coronary artery bypass graft surgery (CABG). Providing both costs and mortality rates, the reports are publicized through the local media and are available free to consumers. (See fig. 2.) In 1987, HCFA first released hospital mortality information publicly, but did so only in response to a request under the Freedom of Information Act (5 U.S.C. 552). The published information, collected as part of HCFA’s oversight efforts, included the observed and expected mortality rates for Medicare beneficiaries in each hospital that performed CABG surgery. HCFA published the information annually until 1993, when the HCFA Administrator discontinued the reports.
He cited problems with the reliability of HCFA’s methods to adjust the data to account for the influence of patient characteristics on the outcomes. HCFA has not published any other information about the performance of Medicare providers. HCFA’s responsibility to Medicare beneficiaries in the selection and oversight of Medicare contract HMOs is similar to that of employers to their employees in selecting health plans. However, HCFA does not routinely provide beneficiaries the results of its monitoring reviews or other performance-related information such as HMO disenrollment rates. In August 1995, we recommended that HCFA publish (1) comparative performance data it collects on HMOs such as complaint rates, disenrollment rates, and rates and outcomes of appeals and (2) the results of its investigations or any findings of noncompliance by HMOs. Our recommendation that HCFA publish performance data was consistent with the views of experts we interviewed about the federal government’s role in ensuring that Medicare beneficiaries receive quality care. These experts cited the need for gathering health plan information such as (1) performance measures, (2) patient satisfaction, and (3) assurances that basic organizational standards have been met. Furthermore, they believed that when the information is obtained, it should be shared with beneficiaries to assist them in their health care purchasing decisions. Although HCFA has not been publishing data on Medicare providers, it is collaborating with others to publish performance information about Medicaid providers. HCFA has been participating with NCQA and the American Public Welfare Association on behalf of the State Medicaid Agencies Directors Group to tailor HEDIS to the particular needs of state Medicaid agencies, health plans that serve Medicaid recipients, and the recipients themselves. In July 1995 the work group released the first draft of Medicaid HEDIS and is expected to release a final version of the document in Fall 1995 after considering comments received. Like HEDIS, many of the most recent initiatives to provide data involve a partnership between private and public players. For example, a more recent public/private initiative that includes some of the major employers involved in developing HEDIS is the Foundation for Accountability (FAcct), created in June 1995. At a meeting of the Jackson Hole Group, some of the nation’s largest employers and HCFA, together representing more than 80 million people, or almost a third of the U.S. population, agreed to combine their expertise and purchasing power. This action grew out of employer frustration with current performance data that focus on plan and provider structure and process rather than outcomes of care. FAcct intends to recommend measures of health care quality that can be easily understood by the general public so that people can make informed decisions when choosing a health plan. FAcct also hopes to encourage the common adoption of these standards to establish uniformity and minimize health plan reporting burdens as well as develop a means of educating diverse audiences about the significance and applications of health plan accountability. Experts have noted that studies performed to determine how consumers make decisions when no comparative information on quality has been available may not be helpful in determining what information consumers would actually use. 
Adding to the conclusions of numerous researchers that individual consumers give more weight to information from acquaintances than to expert opinion, researchers at Brandeis University reported in 1994 that Massachusetts state employees they surveyed valued information about quality but did not value report card information. From this apparent contradiction, the researchers concluded that survey respondents view quality as something other than what is described in report cards. In 1995, NCQA reported that almost all consumers participating in focus groups NCQA sponsored stated that they would use better evaluative information if it were available to them. In addition, when NCQA provided participants with sample report cards, NCQA noted that in every group, participants were able to critically evaluate the information, raising the same questions about the validity of the data that experts debate. In 1994, we reported that while performance measures or report cards could be a useful tool to educate consumers about the health care that plans provide, the report cards being developed may not reflect the needs of some users. Employers have been the primary users of information comparing quality of care; little is known about the extent to which this information is meeting individual consumers’ needs. The sections that follow discuss in more detail the results of our efforts to determine, from the consumers’ perspectives, the extent to which they use quality of care information in making health care choices and the types of information consumers find useful in arriving at decisions. Many of the employers and individual consumers of health care we talked with are increasingly using information that compares the quality of care furnished by health care providers or health plans to make purchasing decisions and to encourage providers and plans to improve the quality of their care. However, some of those we interviewed told us they are not using the information because they are unaware that it exists, they have not been able to find it in some markets, they believe the available information does not meet their needs, or they lack the resources or time to find and use the information. Further, they stated that the information would be more useful if their concerns about the reliability and validity of the information were addressed. As one employer told us, “we’d like to get some kind of value-based decision for purchasing health care. The pure pricing arrangements, the deals . . . have not really been a complete answer for us. Those arrangements don’t address quality, and we’re coming to believe that that’s got to be the cornerstone of your health care plan.” Another employer said, “I think that they [HMOs] should be in the business of comparing hospitals, picking out the high-quality, cost-effective providers; that’s what I’m paying them to do. I just want to make sure they’re doing it, and feel comfortable that they’re doing it.” These employers used the comparative data as a “red flag,” signaling a possible decline in quality. For example, one large southeastern self-insured employer stated that he watched for trends in performance measures that might serve as a warning that a problem was developing. Some smaller employers reported that they had neither the resources nor the time to find or use report cards but wanted the information to be available to the insurance agents or purchasing alliance staff they relied on to make health insurance recommendations.
Employers told us that they are also using the data as a tool to market a specific plan to their employees or to negotiate contract terms with the insurance carriers. Numerous employers told us that providing employees with data comparing quality of care was particularly helpful in convincing their workers that managed care plans do not compromise the quality of care provided. Employers stated that they use the data to influence providers and plans to improve quality. For example, one employer told us that during contract negotiations, data were used comparing hospitals on specific procedures, such as hysterectomies, to encourage hospitals to reduce unnecessary surgeries. The individual consumers we talked with in Pennsylvania, California, and Minnesota who had requested and received specific report cards generally used the information and found it to be very helpful in making health care purchasing decisions. These consumers received either (1) information about patient outcomes for physicians and hospitals performing specific procedures or (2) information on a specific plan. More specifically, individual consumers in Pennsylvania and California reported using the procedure-specific reports to

- select the best surgeon or hospital because they or someone in their family anticipated having the surgery described in the report,
- select the best surgeon or hospital for procedures other than those described in the report,
- review the ranking of the surgeon who had performed their surgery before they had obtained the report,
- ask more informed questions of their doctors,
- increase their general knowledge,
- provide advice to others, or
- satisfy their curiosity.

Individual consumers using a plan-specific report card told us that they used the information to select a health plan or to increase their knowledge about the health plan chosen by their employer, such as the services provided or the financial health of the plan. Consumers using either a plan- or procedure-specific report card who had no choice of provider reported that the information gave them reassurance. Although most individual consumers we interviewed found the report card helpful, some did not. Some consumers reported that they did not use the information because it focused on one procedure or health plan, or because it was limited to a specific state or area. Other consumers told us that they were unaware the information existed until after they or a family member no longer needed it. For example, a Pennsylvania woman stated that she wished she had known about this information before her mother died after heart surgery, because it might have helped her select a provider. Both employers and individual consumers echoed many of the same concerns expressed by health care experts and previously reported by us that comparative information may not be measuring what it is intended to measure. Experts have varying beliefs about what information should be included in a report card because of acknowledged difficulties with the reliability and validity of data sources and systems designed to measure quality. Areas of concern for purchasers we interviewed focused on risk adjustment, age of data, subjectivity, and bias. More specifically, consumers, both corporate and individual, questioned whether procedure-specific data were properly adjusted to account for differences in patient characteristics that might contribute to adverse outcomes.
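To make the risk adjustment at issue concrete, the sketch below illustrates indirect standardization, one common approach: a provider's observed outcomes are compared with the outcomes that would be expected given its mix of patients. The risk categories, rates, and counts here are hypothetical, not drawn from any report card discussed in this report.

```python
# Hypothetical illustration of indirect standardization: compare a
# hospital's observed deaths with the deaths "expected" given its mix
# of patient risk categories. All numbers are invented for illustration.
reference_rates = {"low": 0.01, "moderate": 0.04, "high": 0.12}  # all-hospital rates
caseload = {"low": 400, "moderate": 150, "high": 50}             # one hospital's patients
observed_deaths = 16

expected_deaths = sum(n * reference_rates[cat] for cat, n in caseload.items())
ratio = observed_deaths / expected_deaths  # above 1 suggests worse than expected

print(f"Expected deaths: {expected_deaths:.1f}")   # 4 + 6 + 6 = 16.0
print(f"Observed-to-expected ratio: {ratio:.2f}")  # 1.00 here
```

The comparison is only as good as the risk categories: if they omit characteristics such as severity of condition or functional status, a hospital treating sicker patients will look worse than it is, which is precisely the concern consumers raised.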
Consumers were skeptical about whether factors such as age, severity of condition, and functional status could be accounted for to ensure that outcomes were an accurate reflection of provider quality. We have reported previously that severity-adjusted performance measurement systems are in a relatively early stage of development and may not provide information for accurately comparing hospitals’ performance. We concluded that additional information and methodological improvements are needed to provide more useful data on which to base purchasing decisions. Echoing concerns about the age of data, one purchaser said of report cards that “they are to reassure the public, but they can’t be used to make health care decisions because they are too general and outdated from the time the data was gathered until the decision is made.” Some consumers stated that selecting a health care provider is a subjective decision that is difficult to quantify. In the words of a Pennsylvania consumer, while the report card was a good publication, “it is limited by trying to objectify something that will always be subjective.” For example, consumers differ in what they want from a provider. Some consumers mentioned that it is more important for some patients to feel at ease with their doctor than it is for others. Although many consumers stated that they wanted information on customer satisfaction, others felt it was of limited value because “just because you’re happy with your doctor doesn’t mean I would be happy with him or her.” Another individual consumer questioned a patient’s ability to assess a doctor’s medical knowledge, technical skills, and ability. One purchaser put it this way: “Quality is in the eyes of the beholder . . . . It is not appropriate for the employer to place a value on one outcome over another. It is up to the patient to place that value. Is it more important that I be alive but it’s okay if I’m hurt or I’d rather die than be [. . .]?” One employer elaborated: “you know I think that a big part of the problem, and we’re guilty of it too, is imposing our own tastes or beliefs on other people . . . . In health care we do a lot of deciding of what’s good for people on the basis of our own beliefs, and the issues that a $9 an hour person [faces] are not the same ones that I’m contending with . . . . The highly paid person may not have any problem in going out of network—may be able to afford to go to Mayo’s [the Mayo clinic] and decide, ‘hey, that’s where I’m going with this problem. I’m not going to stick around.’ Whereas somebody on the shop floor has got to stay in [city deleted].” Individual consumers questioned the objectivity of the health care data produced and distributed by the provider or plan. Many consumers stated they would be less likely to believe the information if it is gathered and reported by the provider or plan rather than an independent third party. For example, one individual stated that “an unscrupulous provider could make sure they hit home runs on all of these particular items [the quality measurements] . . . .” Individual consumers who requested and received report card information from health plans used terms such as “self-serving,” “one-sided,” and “nontrustworthy” to describe the report. These respondents saw the purpose of the reports as a provider’s public relations effort to “blow its own horn” or use the report as a “marketing tool” rather than to provide information to the consumer. Consumers we interviewed want more information than they currently have. Both employers and individual consumers want information that emphasizes outcomes rather than process or structure measures of quality.
They want standardized information that allows them to compare providers and plans. Few employers we interviewed are sharing unpublished data with employees, and they differ from one another on whether they believe their employees would use it to make decisions. Individual consumers generally stated that they wanted reports on quality to make decisions, but many emphasized that such reports would never be the sole source of information; they would only augment the advice of others. When emphasizing that they want information on the outcome of health care provided, consumers are asking for a measure that allows them to select providers who will improve their health status or that of their employees. For example, in describing the need for outcome data, one employer stated that rather than just knowing how many women received mammography screening for breast cancer, he wanted to know if the number of women who died or were incapacitated from breast cancer was being reduced. A major northeastern food manufacturer related quality assurance in health care to its own manufacturing quality assurance program, explaining that “outcome data . . . is the only way to measure quality . . . . Once you have the outcome, you can go back and look at the processes themselves.” A large West Coast employer stated that what the company really wants is information on health status. “What we’d like as a measure is we’d like to know that the plan has improved the health status of the population served . . . . That might be different for some subpopulations. So, moving much more to population-based approaches.” Another employer described the kind of measure he wanted: “in general, you’re looking for quality and you’re looking for value, so maybe more of a functional analysis. There is some subjective information that needs to be obtained along with the length of stay and cost of stay and some of these other factors that we’re just not getting yet . . . . You need to do a kind of functional analysis as well, to say 30 days after that angioplasty was that patient back at work, and were they working 40 hours per week, and were they doing their job . . . . How’s your quality of life after you’ve had this?” A third employer observed that “the number one thing people ask . . . when they’re considering an HMO . . . is not like gee, ‘Am I going to get that mammogram,’ it’s ‘What if I get sick, am I going to die, are they going to take care of me?’” Both employers and individual consumers stated that although data reflecting the outcomes may be the best measure of provider quality, it is very difficult to know whether outcomes result from quality of care or factors such as the patient’s condition or lifestyle choices. On the need for standardization, one purchaser told us that “the government should prescribe some standards and force providers to adhere to these standards in the publishing of information. The government should say, ‘You’re going to code this disease this way, and you do it consistently and uniformly . . . .’” Individual consumers stated that the way the information was presented was very important to them. For example, some wanted to have providers or plans compared side-by-side on one or two pages. Consumers using the procedure-specific reports uniformly praised the table format that provided this kind of direct comparison. (See fig. 2 for an example of a table providing a comparison of providers from a procedure-specific report.) Some individual consumers wanted the information to cover a wider geographic area, and others emphasized the need for community-specific data.
For example, some Pennsylvania consumers stated that the report card for that state pertained only to providers in Pennsylvania. Consumers living in Philadelphia would like to have had this type of information for surrounding states because their providers, while close to their homes, were located in other states. The same concern came up in a midwestern city that bordered two states. One employer who favored sharing comparative data described what he wanted: “hav[e] some report card concepts that the employees could understand the information, user friendly . . . consistent . . . . I want to have a tool for the employees to make that decision. If the employees are making that decision, they are going to change the marketplace. They are going to improve the quality of the system because the doctors and the hospitals are going to have to alter their practices because of the information that has been gathered and is presented and understood by employees. They are then making intelligent decisions as far as where to get their health care . . . . Empower the people to make the decision.” Another employer agreed that “if we are going to have value-based purchasing which would drive a competitive marketplace in health care, we have to involve consumers who make the ultimate choice. Therefore the information has to be relevant for them.” Other employers doubted that their employees would use the data. One told us, “I think it’s not speaking to how they make decisions. I think we’d overwhelm them . . . . Also, I don’t know if the data we’d be giving them would be the complete picture.” Another said, “I’ve been in health care benefits for 15 years. I don’t know how to make the choice. What happens to poor Harry the Huffer working on the shop floor when you give him . . . the morbidity in this hospital is here, and you know the readmission rate is this, and the reinfection rate is this, and the guy says, ‘I don’t know what I should do.’ Because what they do to our counselors is say, ‘I don’t want to make choices.’” Nevertheless, employees we interviewed disagreed with those employers who said that employees would not use the information. Employees’ concerns included issues of validity and reliability such as risk assessment and accuracy rather than their ability to understand the data. Most of the individual consumers who had requested the published reports found them to be easy to understand, using terms such as “clear,” “concise,” and “well organized.” They found the charts and tables particularly useful. For those who had some problems understanding the reports, additional assistance was useful. For example, a Pennsylvania consumer who had been unable to fully understand the published report on her own had no trouble after it was explained by the state agency that had produced the report. Another Pennsylvania consumer stated that the first report card she received was difficult to understand but that by the third report she received, she found it very useful. Many individual consumers emphasized that published information would never be the sole source of data for their health care decisions but would be used in addition to other information such as personal consultation with their physician, friends, family members, or coworkers. Data comparing health care plans and providers helped the consumers we interviewed make their health care purchasing decisions. However, performance reports have not yet achieved their fullest potential. Consumers said they needed more reliable and valid data, more readily available and standardized information, and a greater emphasis on outcome measures. Meeting the information needs of individual consumers continues to lag behind meeting those of employers.
Attention must be paid to ensuring that individual consumers have access to health care data. While employers themselves have initiated efforts to cooperate with one another, few we interviewed are making complete health care data available to assist individual consumers in making purchasing decisions. Relevant stakeholders have not yet addressed the issues of disseminating performance data to individual consumers so that they can make responsive, informed decisions about their health care coverage. We are sending copies of this report to interested congressional committees and other interested parties. We will make copies available to others on request. This report was prepared under the direction of Carlotta C. Joyner, Associate Director. Other major contributors to this report include Sandra K. Isaacson, Assistant Director; Susan Lawes; Lise Levie; Lesia Mandzia; and Janice Raynor. Please call me on (202) 512-6806, or Dr. Joyner on (202) 512-7002, if you have any questions. To obtain information on how consumers use data comparing the quality of health care providers or health plans and what information they want when making health care purchasing decisions, we interviewed both employers and individual consumers. To obtain the views of employers, we interviewed officials at over 60 businesses. The size of these businesses ranged from under 5 employees to over 100,000 employees. These employers were selected on the basis of the following criteria: (1) size of workforce (small, medium, and large); (2) geographic variability; and (3) variation in whether they used published report cards. “Small” employers were defined as those with fewer than 50 employees, “medium” as having 50 to 499, and “large” as 500 or more employees. Because the businesses were not randomly selected, their experiences and opinions cannot be generalized to all employers. We also interviewed a major private sector management consulting firm that supplies comparative health care data to employers. To obtain the views of individual consumers who had received a report card, we conducted telephone interviews during January, February, and March 1995 with 153 consumers who had requested and received published report cards to determine how they used the information (see table I.1). The report cards they received were published by either California or Pennsylvania state agencies or by health maintenance organizations in California and Minnesota. These report cards were selected because they were the most recently available in which the issuing entity had a record of requesters and the state or HMO was willing to assist in the study. The consumers we talked with had received this information sometime during 1993 and 1994. The reports published by the state agencies contained only procedure-specific indicators, while the health plan reports focused on various plan and procedure quality indicators related to the individual health plans. Although we attempted to contact all 1,087 individuals who had requested the report cards issued by those states or health plans, many of these individuals did not choose to participate, could not recall receiving the information, or had requested the information for reasons other than making health care purchasing decisions, such as for school or work.
Because we spoke with only a small number of individuals who had requested information for consumer-related purposes and they were not chosen at random, their experiences and opinions cannot be generalized to the entire consumer population that requested report card information. We also conducted interviews with seven groups of employees around the country who may not have had previous experience with such reports. We conducted these interviews with employees from four federal government agencies and three private corporations—manufacturing, sales, and service. These employees were selected because their employers offered them more than one health insurance plan to choose from when making their health care insurance purchasing decisions. The number of participants in each group ranged from 8 to 10 and included employees with varying marital, family, and age status as well as employees enrolled in both indemnity and managed care plans. Medicare: Increased HMO Oversight Could Improve Quality and Access to Care (GAO/HEHS-95-155, Aug. 3, 1995). Employer-Based Health Plans: Issues, Trends, and Challenges Posed by ERISA (GAO/HEHS-95-167, July 25, 1995). Medicare: Enhancing Health Care Quality Assurance (GAO/T-HEHS-95-224, July 27, 1995). Health Care: Employers Urge Hospitals to Battle Costs Using Performance Data Systems (GAO/HEHS-95-1, Oct. 3, 1994). Health Care Reform: “Report Cards” Are Useful but Significant Issues Need to Be Addressed (GAO/HEHS-94-219, Sept. 29, 1994). Access to Health Insurance: Public and Private Employers’ Experience With Purchasing Cooperatives (GAO/HEHS-94-142, May 31, 1994). Managed Health Care: Effect on Employers’ Cost Difficult to Measure (GAO/HRD-94-3, Oct. 19, 1993).
Pursuant to a congressional request, GAO provided information on health care quality issues, focusing on: (1) how consumers use health care performance reports that contain comparative data on the quality of health care providers; and (2) what information consumers consider most important. GAO found that: (1) employers and individuals use information that measures and compares the quality of health care furnished by providers and health plans when making purchasing decisions; (2) consumers want performance reporting efforts to continue and are requesting that more data be made publicly available; (3) consumers want standardized and comparable health care information to assess health care providers' or health plans' performance; (4) many employers get health care performance data through business coalitions, consultants, and their own data collection efforts; and (5) although employers have begun cooperating with one another to enhance their purchasing decisions, few employers make health care data available to their employees.
Crude oil is a naturally occurring substance generated by geological and geochemical processes. A variety of petroleum products, such as gasoline, diesel fuel, and heavy fuel oil, are derived from this natural resource. Crude oil and petroleum products can vary greatly depending on where and how they were extracted and refined, and their unique characteristics influence how they will behave when released into water and how they will affect animals, plants, and their habitats. Because oil is typically less dense than water, oil spills on or near the surface of water will float and form slicks. An untreated slick will remain at the surface until it evaporates, disperses naturally into the water column, washes onto the shoreline, breaks up into smaller collections of oil—known as tarballs—or is recovered or removed from the water. Oil or petroleum products spilled on water undergo a series of physical and chemical processes that may cause the oil to change––known as weathering––or migrate. Some processes cause oil to be removed from the water’s surface, while others change its form on the surface. Figure 1 depicts these processes, which are further described and defined in table 1. Regardless of their physical and chemical properties, all oils will weather once spilled. The rate of weathering depends on the conditions at the time of the spill and the nature of the spilled oil. Most weathering processes are highly temperature dependent, however, and will often slow considerably as the temperature approaches freezing. Oil can also be released below the water’s surface, as in a blowout—an uncontrolled release of oil or gas from a well. Because oil released below the surface is less dense than water, it will float toward the surface. The speed at which it rises is based on the oil’s droplet size—the larger the droplet the faster the oil rises. Once it reaches the surface, the oil forms a slick thinner than those that result from surface spills, in part because of the diffusion and dispersal of oil droplets as they rise. When an oil spill occurs, responders have several techniques for responding, including the following:

- Chemical dispersants—applying chemicals to help break up the oil into smaller droplets to facilitate the movement of the oil off the surface and into the water column and enhance microbial breakdown of the oil.
- Mechanical containment and recovery—using booms, skimmers, sorbents, and other techniques to trap and remove the oil.
- In-situ burning—burning spilled oil on the surface of the water.
- Shoreline clean-up—physically picking up oil and washing or chemically treating shorelines, or deploying bioremediation, which involves the addition of nutrients to enhance the ability of microorganisms to degrade the oil more rapidly.
- No action—taking no active response to the spill.

Each response technique has its own operational requirements, benefits, limitations, and potential adverse impacts. Responders must evaluate which method or combination of methods to use depending on the circumstances and conditions of the oil spill, such as the weather, sea state, type and amount of oil spilled, distance of spill from shore, and potentially affected natural resources. In the United States, mechanical containment and recovery is the primary response option, since it physically removes oil from the environment. However, experience has shown that mechanical containment and recovery in open waters can be limited depending on sea conditions.
Specifically, for such operations to be conducted most effectively, seas need to be relatively calm, with waves under about 3 feet, according to documents we reviewed and specialists with whom we spoke. Oil spills inevitably have environmental impacts, and response actions may only reduce these impacts or shift them. In determining which response options are best for an individual spill, agency officials said that decision makers weigh the ecological risks and consequences with the goal of minimizing adverse effects as much as possible. For example, when considering the use of chemical dispersants as a response option, the essential question asked is whether dispersing the oil into the water column offers more benefits (i.e., causes less harm) than leaving the oil on the surface if it cannot be adequately removed by mechanical means or burned. Decision makers would collect as much information as possible to assess, for example, whether the potential harm to wetlands or waterfowl that could occur if dispersants were not applied is greater than the potential harm to marine species from chemically dispersed oil entering the water column. This evaluation of trade-offs is sometimes called a net environmental benefit analysis. Chemical dispersants function by reducing the surface tension between oil and water—similar to the way that dish detergents break up cooking oil on a skillet—and enhancing the natural process of dispersion by generating larger numbers of small droplets of oil that are mixed into the water column by wave energy. Thus, rather than having a surface slick of oil, one will have an underwater plume of chemically dispersed oil. Throughout this report we use the term “chemically dispersed oil” to discuss the mixture that results when chemical dispersants are applied to oil and facilitate the formation of oil droplets. A typical commercial dispersant contains a mixture of three types of chemicals: surfactants, solvents, and additives. Surfactants are the active agents that reduce oil-water surface tension. Surfactant compounds contain both oil-compatible and water-compatible groups on the same molecule, with the oil-compatible group interacting with oil and the water-compatible group interacting with water to make the interaction between the two easier. Solvents are added to promote the dissolution of the surfactants and additives into the dispersant mixture and then, during application, into the oil slick. Additives may be present for a number of purposes, such as improving the dissolution of the surfactants and increasing the long-term stability of the dispersant formulation. Federal statutes required the development of a National Oil and Hazardous Substances Pollution Contingency Plan that, among other things, delineates the procedures for preparing for and responding to oil spills and details the roles and responsibilities of federal agencies and others involved in dispersant decision making. Specifically, the National Contingency Plan is based on a framework that brings together the functions of the federal government, the affected state governments, and the party responsible for a spill under a unified command to achieve an effective and efficient response. In response to an oil spill, the National Contingency Plan calls for a Federal On-Scene Coordinator to direct and coordinate response efforts. In the case of oil spills in the coastal zone, such as in the Deepwater Horizon incident, a representative from the Coast Guard serves as the Federal On-Scene Coordinator.
EPA provides the Federal On-Scene Coordinator for spills occurring in the inland zone, and the designation of these zones is documented in the Regional Contingency Plans. As part of the National Contingency Plan, EPA maintains the National Oil and Hazardous Substances Pollution Contingency Plan Product Schedule, which lists chemical dispersants that may be authorized for use on oil discharges. Inclusion on the Product Schedule does not mean that EPA recommends the product for use; rather, it only means that certain data have been submitted to EPA and that the dispersant has demonstrated a minimum level of effectiveness. The data that a manufacturer must submit to EPA include effectiveness and toxicity data, special handling and worker precautions for storage and application, recommended application procedures and conditions for use, and shelf life. An appendix to the regulations implementing the National Contingency Plan describes the test methods a manufacturer is to follow for measuring effectiveness and toxicity of dispersants. In terms of effectiveness, the manufacturer must demonstrate that the dispersant can disperse at least 45 percent of oil in testing. To assess toxicity, the appendix specifies the standard test for a chemical dispersant, which involves exposing two species—silverside fish (Menidia beryllina) and mysid shrimp (Mysidopsis bahia)—to varying concentrations of the dispersant, oil, and a mixture of the two, to determine mortality rates at the end of 96 hours for silversides and 48 hours for mysid shrimp. (A simplified numerical illustration of how bioassay results of this general form can be summarized appears below.) Chemical dispersant manufacturers must submit the results of effectiveness and toxicity testing to EPA, which may request further documentation or verify test results in determining whether the dispersant meets listing criteria. Both the presidential commission that investigated the Deepwater Horizon incident and the EPA Inspector General have recommended that EPA update the Product Schedule’s testing protocols and requirements for listing. For example, the EPA Inspector General recommended, among other things, that EPA modify the Product Schedule and contingency plans to include additional information learned from the Deepwater Horizon oil spill response, such as subsurface dispersant application in deep water. EPA anticipates issuing a proposed rule in winter 2012 that would revise the requirements for listing a product on the Product Schedule and is considering changes to effectiveness and toxicity testing protocols. A National Response Team and Regional Response Teams serve as preparedness and planning organizations prior to a response and may serve as incident-specific response teams to provide support and advice to the Federal On-Scene Coordinator during a response. The National Response Team includes 20 federal departments and agencies responsible for national response and preparedness planning, for coordinating regional planning, and for providing policy guidance and support to Regional Response Teams. Regional Response Teams are composed of representatives of each National Response Team agency and representatives from relevant state and local governments (as agreed upon by the states) and may also include tribal governments. There are 13 Regional Response Teams corresponding to the 10 standard federal regions, plus separate teams for Alaska, Oceania in the Pacific, and the Caribbean. The Regional Response Teams develop Regional Contingency Plans establishing procedures for preparing for and responding to oil spills in the region.
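As context for the toxicity testing described above, the sketch below shows one simple way bioassay results of that general form can be summarized: estimating the concentration lethal to half the test organisms (an LC50) by log-linear interpolation. The concentrations and mortality fractions are hypothetical, and real analyses typically fit probit or similar statistical models rather than interpolating.

```python
import math

# Hypothetical bioassay results: (concentration in ppm, fraction of test
# organisms dying by the end of the exposure period). All numbers invented.
results = [(10.0, 0.05), (32.0, 0.20), (100.0, 0.55), (320.0, 0.90)]

def lc50(points):
    """Estimate the LC50 by log-linear interpolation between the two
    concentrations that bracket 50 percent mortality."""
    for (c_lo, m_lo), (c_hi, m_hi) in zip(points, points[1:]):
        if m_lo <= 0.5 <= m_hi:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            return 10 ** (math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo)))
    return None  # 50 percent mortality not bracketed by the data

print(f"Estimated LC50: {lc50(results):.0f} ppm")  # about 85 ppm here
```

Lower LC50 values indicate greater toxicity; running such a test separately on the dispersant, the oil, and the mixture, as the testing protocol requires, allows the three to be compared.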
Within the regions, area committees composed of officials from federal, state, and local agencies have been designated to develop Area Contingency Plans. Regional and Area Contingency Plans may address the specific situations in which chemical dispersants should and should not be used and may preauthorize their use by the Federal On-Scene Coordinator. Preauthorization plans may address factors such as the potential sources and types of oil that might be spilled, the existence and location of environmentally sensitive resources that could be affected, available dispersant stockpiles, available equipment and adequately trained operators, and means to monitor product application and effectiveness. The details and procedures for preauthorized use vary by region; however, plans generally preauthorize use of dispersants for areas at least 3 nautical miles from shore with water at least 10 meters deep, and the chemical dispersant must be listed on EPA’s Product Schedule. If dispersants are not preauthorized, the Federal On-Scene Coordinator may authorize use of dispersants on the Product Schedule with the concurrence of EPA and appropriate state representatives and in consultation with the Department of Commerce and Department of the Interior. The Federal On-Scene Coordinator may authorize the use of any dispersant, including products not listed on the Product Schedule, without obtaining concurrence, when, in the judgment of the coordinator, the use of the product is necessary to prevent or substantially reduce a hazard to human life. Currently, most Regional Contingency Plans include preauthorization for application of dispersants on the surface in certain areas; however, none of the plans include preauthorization for subsurface application of dispersants in deep water. During the Deepwater Horizon incident, chemical dispersants were used with and without preauthorization and were applied at various times throughout the response by airplane, by boat, and by subsurface injection at the wellhead in deep water. The aerial and boat applications were preauthorized, but subsurface injection of dispersants, which had never previously been used, was guided by a directive and a series of addenda to that directive. This directive and its addenda were established jointly by the Coast Guard and EPA as the spill was occurring, and these documents placed certain restrictions on dispersant use. Because of complications and uncertainties related to real-time authorization of chemical dispersant use in this novel manner, the EPA Inspector General recommended in its 2011 report that EPA develop policies and procedures to govern subsurface dispersant use and to modify preauthorization plans to specifically address subsurface application of dispersants. According to agency officials, the National Response Team has drafted guidelines for subsurface dispersant monitoring and application and expects to finalize them by winter 2012. According to experts we spoke with, there is a significant body of research on the use of chemical dispersants on the surface of the water, but some gaps remain in several research areas. Moreover, experts highlighted two additional areas in which knowledge is limited and more research is needed—the subsurface application and effects of dispersants in deep water environments and the use of dispersants in Arctic and other cold water environments.
According to experts, agency officials, and specialists we spoke with, much is known about the use of dispersants on the surface of the water; however, they said that gaps remain in several research areas. Specifically, experts, agency officials, and specialists described the state of knowledge and gaps in the following six research areas: effectiveness in dispersing oil, fate and transport of chemically dispersed oil, aquatic toxicity and environmental effects of chemically dispersed oil, modeling of chemically dispersed oil, monitoring of chemically dispersed oil, and human health effects. Effectiveness in dispersing oil. Most of the 11 experts we interviewed agreed that there is a large body of research on the effectiveness of chemical dispersants, and many said that there is a solid understanding of the factors that may influence the effectiveness of such dispersants when used on the surface. For a dispersant to be effective, the oil must be dispersible, and there must be sufficient mixing energy––the energy generated by movement of the water from wind and wave action—to allow formation of smaller oil droplets and to disperse these droplets into the water column. Whether these two conditions are satisfied relies on a complex set of factors, including the type of oil spilled, how long the oil has been exposed to the environment, and sea and weather conditions. One of the primary factors in the dispersibility of oil is its viscosity––the resistance of a liquid to flow. Oils that do not flow easily have a high viscosity and are more difficult to disperse; oils that flow easily have a low viscosity and tend to be more dispersible. Oil viscosity is influenced by its type and the amount of change or weathering it has undergone. For example, many experts stated that chemical dispersants are more effective in dispersing light to medium crude oils, which have a lower viscosity, than heavy oils, which have a higher viscosity. In addition, the longer oil weathers, the more viscous––and thus less dispersible––it becomes. This means chemical dispersants need to be used quickly after a spill––typically within hours to 1 to 2 days after a spill, depending on conditions––before the oil has weathered substantially. At a certain level of viscosity, dispersants are no longer effective. Many experts also told us that chemical dispersants are more effective in dispersing oil in moderately wavy seas than in calm seas because of the mixing energy such sea states provide, and dispersants would likely not be used in very stormy, wavy seas because such conditions would disperse the oil naturally and present operational difficulties. In addition, the effectiveness of a chemical dispersant depends on the ratio of chemical dispersant to oil. Planning guidelines generally recommend a ratio of 1 part dispersant to 20 parts oil. However, some experts and specialists told us that the minimum effective dispersant-to-oil ratio can also vary greatly based on the type of oil and degree of weathering. Thus, some light oils, if fresh, may only require ratios of 1:40 or less, whereas weathered or more viscous oils may require ratios above 1:20. (The sketch following this discussion illustrates the ratio arithmetic.) While there is a large body of research on the effectiveness of chemical dispersant use on the surface of the water, experts identified a number of areas in which they believe additional study is needed.
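The planning ratio discussed above translates directly into dispersant volume requirements. The following is a back-of-the-envelope sketch; the spill size is hypothetical.

```python
# Back-of-the-envelope dispersant volume estimate using the 1:20
# planning ratio discussed above; the spill size is hypothetical.
BARREL_GALLONS = 42          # U.S. barrels to gallons

spill_barrels = 5_000        # hypothetical spill size
spill_gallons = spill_barrels * BARREL_GALLONS

for label, ratio in [("1:20 planning ratio", 20), ("1:40 (fresh light oil)", 40)]:
    print(f"{label}: {spill_gallons / ratio:,.0f} gallons of dispersant")
# 10,500 gallons at 1:20 and 5,250 gallons at 1:40; weathered or more
# viscous oils could require more dispersant per gallon of oil.
```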
Specifically, some experts told us that research on effectiveness should more closely resemble real world conditions, rather than the artificial conditions often experienced in a laboratory. For example, one expert said that some laboratory effectiveness tests involve less mixing energy than real world conditions found in the ocean, and therefore, dispersant effectiveness rates may be understated. In addition, the properties of oil can vary greatly depending on the source, and some experts said that more research should be conducted on the effectiveness of different dispersant formulations on different types of oil. Because there are hundreds of types of oil, specific dispersants may work better on certain types of oil than others. Some experts also said that more research is needed to better understand the effectiveness of dispersants on heavily weathered and emulsified oil, noting that dispersants are typically applied on the surface just once; however, applying dispersants twice may increase their effectiveness on emulsified oil. Fate and transport of chemically dispersed oil. Many of the experts we spoke with indicated that there is a basic understanding of the processes that influence the fate and transport of chemically dispersed oil, but that fate and transport of oil are subject to many complex processes, some of which are better understood than others. Specifically, most experts whom we spoke with agreed that the use of chemical dispersants increases biodegradation rates, as dispersants help reduce the size of oil droplets, making them more accessible to microbes that feed on them. Experts differed in their views with regard to the extent to which factors such as evaporation, photo-oxidation, and dissolution influence the fate of chemically dispersed oil. For example, some experts said that dissolution—the chemical stabilization of oil components in water—increases with dispersant use, whereas other experts said that more research is needed to understand the relationship between dispersant use and dissolution. Chemically dispersed oil is transported both vertically and horizontally through the water by wind, waves, and currents. Once droplets are dispersed vertically into the water column, most oil droplets will be positively buoyant and will rise toward the surface. The speed at which the droplets will rise depends on their diameter, with the smallest droplets rising very slowly. For example, according to a 2005 National Academy of Sciences report on chemical dispersants, a droplet with a diameter of 300 micrometers (0.3 millimeters) would take less than 8 minutes to rise 3 meters, while a droplet with a diameter of 30 micrometers (0.03 millimeters) would take over 12 hours to rise the same distance. Once the oil is dispersed below the surface, subsurface currents move the location of the oil droplets horizontally. In some cases, the direction the oil will travel below the surface will be different from the direction it traveled on the surface because the direction of the currents may be different from the direction of the wind. When currents are non-uniform, mixing is produced that further dilutes and disperses oil droplets throughout the water. Many experts also told us that chemically dispersed oil, as compared with oil that is naturally dispersed, reduces the likelihood of oil droplets reforming into slicks because of the smaller droplet size—which allows for greater dispersion and slower rise rates.
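The rise rates quoted above from the National Academy of Sciences report follow the pattern predicted by Stokes' law, under which a small droplet's rise velocity scales with the square of its diameter. The sketch below reproduces figures of roughly the same magnitude; the density and viscosity values are illustrative assumptions, and Stokes' law holds only for small droplets rising slowly.

```python
# Stokes' law rise velocity for small oil droplets in seawater.
# The density and viscosity values are illustrative assumptions chosen
# to be roughly consistent with the NAS figures quoted above.
G = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 1025.0  # seawater density, kg/m^3 (assumed)
RHO_OIL = 875.0     # oil density, kg/m^3 (assumed)
MU = 1.2e-3         # seawater dynamic viscosity, Pa*s (assumed)

def hours_to_rise(diameter_m, distance_m=3.0):
    velocity = G * diameter_m**2 * (RHO_WATER - RHO_OIL) / (18 * MU)  # m/s
    return distance_m / velocity / 3600

for d_um in (300, 30):
    print(f"{d_um} micrometer droplet: {hours_to_rise(d_um * 1e-6):.2f} hours to rise 3 m")
# About 0.14 hours (roughly 8 minutes) for 300 micrometers and about
# 13.6 hours for 30 micrometers: a tenfold smaller droplet rises one
# hundred times more slowly.
```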
The experts we spoke with also identified several research gaps related to the fate and transport of chemically dispersed oil. For example, most experts told us that the use of chemical dispersants increases biodegradation rates, but many told us that more research was needed to quantify the actual rate at which biodegradation occurs. Additionally, many experts said that more research is needed to understand the specifics of transport within the water column and oil droplet size, since they are important factors for determining whether the chemically dispersed oil will remain in the water column or float back to the surface. Many experts also said that more research is needed on chemically dispersed oil's interactions with suspended particulate material, interactions that occur when oil droplets attach to small particles such as sediment. Such oil-particle combinations could influence fate and transport in various ways, such as preventing the oil from recoalescing. Also, some combinations may potentially sink to the bottom, and others may remain suspended in the water column. According to a 2005 National Academy of Sciences report, gaps related to understanding the fate of chemically dispersed oil and the interaction of the dispersed oil with sediments could be addressed through the use of actual spill events to conduct research and collect data.

Aquatic toxicity and environmental effects of chemically dispersed oil. Much of the existing research on the toxicity of chemically dispersed oil has focused on COREXIT®, which is the most widely stockpiled dispersant in the United States. Additionally, most experts said that chemically dispersed oil can increase oil's bioavailability—how easily an organism can take up a particular contaminant from the environment—which can have varying harmful effects. For example, many experts said that chemical dispersion will alter the bioavailability of oil. Exposure to shoreline and surface oil may decrease for wildlife, such as birds or marine mammals, but exposure may increase for species living in the water column, such as certain fish or plankton.

Experts also identified several knowledge gaps and limitations in regard to information on the toxicity of chemically dispersed oil. In particular, most experts told us that research on the chronic effects of exposure has been more limited, and many identified this as an area in which more research is needed. Lack of research on chronic effects limits the understanding of how marine communities and populations—including corals, fish, and marine mammals—are affected by dispersant use over the long term. In addition, many experts said that more research is needed to understand the impact of chemically dispersed oil on marine communities and populations. For example, one expert noted that the rate of recovery for species is a key aspect for determining the trade-offs of using chemical dispersants. Furthermore, some experts questioned the usefulness of some toxicity research, noting that this research was generally not conducted using consistent methodological approaches, which limits its comparability. For example, one expert said that early toxicity research did not include chemical analysis, which limits the comparability of older studies to more recent ones that contain such analysis. Additionally, some experts noted that while there are many studies on COREXIT®, there are few studies on the toxicity of the other dispersants on the Product Schedule. In addition, some experts and specialists we spoke with questioned the applicability of the research to real world spill scenarios.
Specifically, one expert said that the concentrations and durations of exposure to chemically dispersed oil often used in the laboratory do not reflect oil exposure concentrations and durations during an actual spill. Many laboratory tests use a constant exposure level over a period of 96 hours (4 days), while during a dispersant application on a real spill, the concentration of chemically dispersed oil could be very high when first applied but will decline quickly over a matter of hours, particularly in the open ocean. Thus, some experts noted the need for more studies using realistic exposure scenarios and consistent methodologies. Further, many experts said that research should be conducted on a broader range of species, as the majority of research has been conducted on a small number of species. For example, one expert said that it is not always possible to extrapolate from the standard test species––silverside fish and mysid shrimp––to other species, particularly from different regions or climates. Another expert noted that since it is not practical to test every species, those that are tested need to be ones that can be extrapolated to the key species in each region. In addition, according to EPA researchers, additional research is needed to better understand photoenhanced toxicity—the increase in toxic effects resulting from the synergistic interaction of components of oil accumulated by aquatic organisms and the ultraviolet radiation in natural sunlight. Recent studies demonstrate that chemically dispersed oil was substantially more toxic to early life stages of fish and invertebrates under the light wavelengths and intensity present in aquatic habitats than under the light systems used to generate toxicity data in the laboratory; according to EPA researchers, however, additional research in this area is still needed.

Modeling of chemically dispersed oil. Models that are used to predict how spilled oil will behave in the environment rely upon a number of inputs, but according to most experts we spoke with, modeling efforts are limited by the accuracy of inputs to the model, and the experts said that they believe more research is needed to improve these inputs. Specifically, fate and transport models rely on a variety of inputs, including dispersant effectiveness, wind speed, and ocean currents. Some experts we spoke with questioned the accuracy of some of these inputs, which has implications for the predictive value of the model and may result in greater uncertainty with regard to the ultimate fate and transport of the dispersed oil. For example, some experts noted that more research is needed to more quantitatively measure dispersant effectiveness, including the amount of oil dispersed below the surface as droplets and the resulting droplet size distribution.

Monitoring of chemically dispersed oil. Some experts told us the monitoring protocols currently used are generally sufficient for their intended purpose of determining whether oil is dispersing. The primary tool used to monitor this is the Special Monitoring of Applied Response Technologies (SMART) protocols, which were established by a multi-agency group—including Coast Guard, NOAA, EPA, CDC, and BSEE—and are implemented by the Coast Guard in spill response. These protocols establish a system for rapid collection of real-time, scientifically based data to assist in decision making related to whether additional chemical dispersants should be applied to break up remaining oil on the surface of the water. The protocols rely heavily on trained personnel to visually observe dispersed oil, collect water samples, and measure the amount of oil in the water using a fluorometer—a device that detects the presence of oil in the water column by measuring the light emitted when certain oil compounds are exposed to ultraviolet light—which helps indicate that the dispersant is having its desired effect (a simplified illustration of this comparison appears below).
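As a simplified illustration of how fluorometer readings support that decision, the following Python sketch compares readings taken under the treated slick with background readings away from the slick. The function, readings, and threshold are invented for illustration and are not part of the SMART protocols themselves, which rely on trained observers and calibrated instruments.

    # Simplified illustration of the SMART fluorometry concept: compare
    # fluorescence under the treated slick against background water to judge
    # whether oil is entering the water column. The threshold and readings
    # here are invented for illustration, not drawn from the protocols.

    def oil_appears_dispersing(background_ppb: list[float],
                               treated_ppb: list[float],
                               min_ratio: float = 5.0) -> bool:
        """Flag dispersion when treated-area readings clearly exceed background."""
        bg = sum(background_ppb) / len(background_ppb)
        tr = sum(treated_ppb) / len(treated_ppb)
        return tr >= min_ratio * max(bg, 1e-9)

    background = [0.4, 0.5, 0.3]  # hypothetical readings away from the slick
    treated = [6.2, 7.8, 5.9]     # hypothetical readings under the treated slick
    print(oil_appears_dispersing(background, treated))  # True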
Some experts stated that the fluorometry equipment used for the SMART protocols is useful for determining the initial effectiveness of dispersants—that is, whether or not oil is being broken up and distributed through the water column during an oil spill response. Additionally, one expert said that the SMART protocols are simple, well defined, and standardized and are able to quickly provide information to decision makers during emergency response operations. Many experts, as well as a NOAA review of SMART protocol implementation, also indicated that the protocols and the equipment used could be enhanced to provide more in-depth information to help inform research efforts to address gaps or to further assess the effectiveness of chemical dispersants. For example, some experts told us that the protocols do not provide an analysis of oil composition to determine whether and how long the dispersant remains present in the water and continues to break up the oil, making it difficult to assess the dispersant's true effectiveness. Additionally, the SMART protocols were focused on providing operational guidance on dispersant effectiveness and were not designed to monitor the fate, effects, or impacts of chemically dispersed oil, but many experts said that research should be conducted to integrate monitoring of fate and effects into the protocols. Doing so would help inform research efforts to better address gaps and help spill responders make better decisions. Some experts also told us that the fluorometry technology used in SMART is limited in that it only measures a portion of oil components and that the standardization and calibration of this equipment could be improved. Many experts also told us SMART could be enhanced with different, newer equipment, such as particle size analyzers to measure oil droplet size, which could better monitor chemically dispersed oil. Moreover, a February 2012 NOAA review of SMART monitoring protocol implementation during the Deepwater Horizon incident found that the SMART protocols were not sufficient to determine the effects of the dispersant and oil on marine life in the water column. In addition, the report found that for large spills with information needs beyond the question of whether the oil is dispersing, the protocols need to be revamped. This review concluded that the SMART monitoring methodologies used during the Deepwater Horizon incident lacked rigor and repeatability.

Human health effects. HHS officials and human health specialists we spoke with noted that toxicity information is available for the individual ingredients of some dispersants––particularly COREXIT® EC9500A––and those individual ingredients are generally believed to be not particularly toxic to humans. Furthermore, HHS officials and human health specialists we spoke with noted that there is little likelihood that the general public will be exposed to dispersants or chemically dispersed oil.
Individuals involved in cleanup operations who directly handled dispersants or worked in the immediate area of application would likely have greater potential exposure to dispersants and therefore might have a greater risk of adverse effects. However, during the Deepwater Horizon incident, a National Institute for Occupational Safety and Health (NIOSH) Health Hazard Evaluation looked at the potential exposure of these highest risk groups and found that indicators of dispersant exposure were nondetectable or at levels well below applicable occupational exposure limits. In addition, the Material Safety Data Sheet for COREXIT® EC9500A—the dispersant most used during the Deepwater Horizon incident response—states that potential human exposure will be low if recommended product application procedures and personal protective equipment, such as hand, skin, and eye protection, are used. In addition, in laboratory tests following the Deepwater Horizon incident, NIOSH researchers found no long-term negative health effects due to short-term dermal or inhalation exposure to COREXIT® EC9500A. However, adverse effects of longer-term exposure have not been evaluated, according to HHS officials.

With regard to seafood safety, studies indicate that the dispersants used during the Deepwater Horizon incident did not accumulate in seafood and therefore pose no public health concern through seafood consumption, according to the FDA. To ensure consumers had confidence in the safety of seafood being harvested from the Gulf, NOAA and FDA developed a chemical test for the presence of dispersant in seafood. Most of the seafood samples tested had no detectable oil or dispersant residue. For the few samples in which some residue was detected, the levels were far lower than the amounts that would cause a health concern, even when seafood is eaten on a daily basis.

Agency officials and human health specialists said that less is known about the ingredients in several other dispersants listed on the Product Schedule and that they believe more information is needed on the ingredients in these dispersants. In addition, toxicity information may be available on many of the individual ingredients in dispersants, but agency officials and human health specialists told us that there is very little data regarding the potential human health effects of the mixture of these ingredients as found in oil dispersant products. For example, the Material Safety Data Sheet for COREXIT® EC9500A states that no human health toxicity studies have been conducted on this product. In addition, agency officials and human health specialists told us that more research is needed on whether dispersants can alter the toxicological properties of the chemicals in the oil, which may increase the ability of oil or some of its constituents to permeate the skin in the event of dermal exposure to chemically dispersed oil. Agency officials and human health specialists also told us that there are currently no good biomarkers for dispersant exposure, making it difficult for researchers to fully measure the extent of human exposure and any resulting toxicological effects. In addition, results from studies based on human samples or populations are needed to fully inform our understanding of potential health effects, according to agency officials.
For example, in order to determine the likelihood of meaningful exposures and the potential for health effects to occur, it would be important to have ongoing environmental and biological monitoring, such as through the collection of blood or urine samples from oil spill response workers before and after they encounter dispersants.

Although much is known about the use of dispersants on the surface of the water, experts highlighted two emerging areas in which additional research is needed—specifically, the subsurface application and effects of dispersants in deep water environments and the use of dispersants in Arctic conditions and other cold water environments. As previously discussed, and according to many experts we spoke with, it will be particularly important to gain a better understanding of these environments since the future of oil production will rely to a substantial extent on producing oil from deep, offshore wells in the Gulf of Mexico and off the Alaskan coast.

Subsurface application of dispersants. All of the 11 experts we spoke with told us that little is known about the use and effects of chemical dispersants applied subsurface in deep water environments—ocean depths of over 1,000 feet—noting that conditions there, such as higher pressure, lower water temperatures, and the presence of gas, may influence the effectiveness of dispersants. Most experts characterized the subsurface application of chemical dispersants in deep water during the Deepwater Horizon incident as surrounded by uncertainties, since it was the first attempt of its kind. Officials and specialists noted that monitoring efforts and visual evidence from the spill indicated subsurface application of dispersants was effective in reducing the amount of oil and the levels of volatile organic compounds that appeared at the surface. Experts agreed that the influence of deep water conditions on subsurface dispersant use requires further research, but they disagreed over the significance of some of the knowledge gaps. For example, some experts felt that the lack of knowledge about the role of high pressure in deep water was a significant gap, while others felt that, based on the knowledge of chemistry and other existing knowledge about dispersants, pressure was likely to have no influence on effectiveness. Specialists told us scientists are beginning to undertake research to validate the effectiveness of chemical dispersants applied subsurface in deep water environments and to better understand how to optimize dispersant formulations, dispersant-to-oil ratios, and application methods for these conditions. Some experts and specialists told us that since application directly at a spill source in deep water allows for direct contact with fresh oil, and the force of a blowout creates substantial mixing energy, dispersants designed specifically for subsurface application could require less or no solvent and be applied at significantly lower dispersant-to-oil ratios. Furthermore, with regard to the subsurface use of dispersants, most experts told us that there are gaps in knowledge related to fate and transport, toxicity, and monitoring. In terms of the fate and transport of dispersed oil at depth, while research and models exist to indicate what happens to oil released from the ocean floor, previous research had not taken into account the changes the addition of chemical dispersants could cause.
Many experts also cited the need for more research on issues such as biodegradation, oil droplet size, and interaction with particulate material in the subsurface, deep water environment. For example, some experts noted that such research could inform the adaptation and improvement of models for tracking the fate and transport of chemically dispersed oil from subsurface dispersant use. One expert noted a particular need for research on interactions with suspended particulate material in deep water, citing some evidence that smaller droplets react differently with suspended particulate material in deep water and can create a substance that can entrap organisms that cannot swim away fast enough. With regard to toxicity related to the subsurface application of dispersants, in addition to the gaps in information on chronic effects discussed above, experts told us that little is known about the species that reside in deep water environments and how chemically dispersed oil may affect them. Also, some noted that the difficulties of conducting toxicity testing on relevant species in realistic exposure scenarios are amplified for subsurface use of chemical dispersants in deep water because bringing such species to the surface would likely kill them, and creating test conditions that would allow them to survive and serve as a reasonable simulation of that environment would be extremely challenging. Because direct, visual-observation-based monitoring, such as that used in the SMART protocols, cannot be implemented in a subsurface, deep water scenario, some experts noted the need for research to develop scientifically sound monitoring protocols and equipment for deep water use.

Use of dispersants in Arctic environments. Most experts told us that knowledge about the use of dispersants in Arctic environments is limited, and less research has been conducted on dispersant use in the Arctic and other cold environments than in temperate or tropical climates. Specifically, some experts stated that additional research is needed to ensure that dispersant formulations are effective in the Arctic environment. For example, one expert said that dispersants are currently designed for temperate or tropical climates, and there is reason to believe that these formulations will be less effective in the Arctic environment because of environmental conditions such as cooler temperatures and the presence of ice. Specifically, sea ice introduces several potential complicating factors that require more research. For example, ice alters the sea's state, diminishing waves, which could lead to lower mixing energy. In addition, the presence of ice and broken ice may affect application methods. Previously discussed knowledge gaps about the fate and transport of chemically dispersed oil also apply in the Arctic, with one expert noting that more research is needed on biodegradation rates in the Arctic because the cold temperatures may slow the process. Furthermore, one expert told us that additional research is needed to enhance fate and transport models for chemically dispersed oil in icy conditions to better understand the movement of chemically dispersed oil. Some experts also noted possible differences in the toxicity of chemically dispersed oil for Arctic species as compared with temperate species. For example, one expert said that some Arctic species have different metabolism rates than species in warmer climates, and research is needed to determine how dispersant use affects Arctic species.
Federal agencies and other groups, including industry and states, have enhanced knowledge on the use of chemical dispersants and their effects by funding research projects. Specifically, six federal agencies have funded over $15.5 million of dispersant-related research projects since fiscal year 2000, with about half of this total federal funding—over $8 million—occurring since the Deepwater Horizon incident. Over 40 percent of all federally funded dispersant research projects have focused on testing dispersant effectiveness. Appendix III provides a list of federally sponsored research projects related to dispersants since fiscal year 2000. In addition, industry has a number of past and ongoing research projects focused on the use and effects of dispersants, and states and other groups have also funded dispersant-related research.

Since fiscal year 2000, six federal agencies—BSEE, Coast Guard, EPA, HHS, NOAA, and NSF—have funded 106 research projects related to chemical dispersants, at a cost of approximately $15.6 million (see table 2). Roughly half of the total federal funding—approximately $8.5 million—occurred in fiscal years 2010 or 2011, largely in response to the Deepwater Horizon incident. In general, most of the projects funded by federal agencies were conducted by nonfederal researchers, including university researchers and independent laboratories. In addition, the federal government has a committee—the Interagency Coordinating Committee on Oil Pollution Research—that helps coordinate research efforts across federal agencies. This committee was established by the Oil Pollution Act of 1990 and is currently composed of 14 federal agencies and chaired by the Coast Guard.

Details on dispersant-related research funded by the six federal agencies since fiscal year 2000 are as follows:

BSEE has consistently funded dispersant research projects every fiscal year since 2000, and funding for most individual projects has ranged from $10,000 to $300,000. According to agency officials, BSEE has plans to undertake additional projects and has tentatively planned to fund studies on the impact of dispersant use on worker safety and studies on subsurface dispersant application. In addition to jointly funded projects with other federal agencies, BSEE has also funded projects jointly with industry and other groups to conduct dispersant research. For example, for one dispersant research project, BSEE was one of nine partners, including four oil companies and two oil spill response organizations, as well as Canada's Department of Fisheries and Oceans and Texas' General Land Office.

NSF has funded the second largest number of projects—29 in all—and all but one of its projects were funded as a result of the Deepwater Horizon incident. Almost all of NSF's dispersant research funding was distributed to researchers through its rapid response grant program––a grant mechanism developed specifically to respond to unusual circumstances where a timely response is essential to achieving research results, such as in the case of the Deepwater Horizon incident. NSF also had the largest total agency funding, with individual project funding ranging from $12,878 to $200,000 and an average funding level of $151,566. Most of this research is still under way. Absent another oil spill, NSF does not have plans to fund further dispersant research—other than for projects submitted as individual, unsolicited proposals—according to agency officials.
EPA, similar to BSEE, has funded at least one project in most years since fiscal year 2000. EPA's total annual funding for dispersant-related projects was generally less than $300,000 per year. In fiscal year 2010, EPA funding increased, and the agency funded six dispersant research projects at a total of $1.3 million. In addition, EPA has collaborated with the Canadian government on a wave tank facility in Canada, which EPA has used to support some of its dispersant-related research projects. After the Deepwater Horizon incident, EPA, through its STAR grant program, also issued a request for proposals on the environmental impact and mitigation of oil spills, including the application of dispersants as one of the mitigation measures. This grant program plans to award $2 million to four projects by April 2012; an agency official told us that one of the projects will focus on the development of new types of dispersants.

NOAA has funded several projects over the past decade but has not consistently funded dispersant-related research on an annual basis. A significant portion of NOAA's dispersant funding—$1 million out of about $3.3 million total—has been for an ongoing project, funded in fiscal year 2011, focused on dispersant use during the Deepwater Horizon incident and lessons learned from that event. NOAA funded most of its past dispersant research through its partnership with the University of New Hampshire's Coastal Response Research Center (CRRC). CRRC projects represent 10 of the 15 NOAA-funded dispersant research projects. However, NOAA officials told us that the agency's funding for the CRRC ended in 2007.

HHS, similar to NSF, has funded four research projects, all in fiscal years 2010 or 2011 and all as a result of the Deepwater Horizon incident. These projects ranged in cost from $6,000 to $634,000. One of the projects was jointly funded with EPA, at a cost of $77,491 to HHS. HHS officials told us that the agency currently does not have plans to fund any dispersant research in the future.

The Coast Guard has the most limited dispersant research program of the six key agencies, funding two joint projects since fiscal year 2000, at a total cost of $64,000. One of these co-funded projects was the 2005 National Academy of Sciences report on dispersants. The Coast Guard also jointly funded a project with BSEE to analyze SMART protocol monitoring data. Coast Guard officials told us that the agency has no plans to fund dispersant research projects in the future and that the agency has no formal effort under way to update the SMART monitoring protocols. In addition, although the agency has not funded a large amount of dispersant-related research since fiscal year 2000, it has focused its research efforts on other response options, such as in situ burning and mechanical recovery, in accordance with federal oil pollution research plans, according to agency officials.

The Interagency Coordinating Committee on Oil Pollution Research's purpose is to coordinate a comprehensive program of oil pollution research, technology development, and demonstration among federal agencies, in cooperation and coordination with industry, universities, research institutions, state governments, and other nations as appropriate, and to foster cost-effective research, including the joint funding of research. Officials told us that the committee has never received specific funding to operate as a body.
Support for the Interagency Committee's activities and responsibilities is currently subsidized by the budgets of its component member agencies. For example, the establishment and maintenance of the committee's website is being funded by the Coast Guard. The Oil Pollution Act also directed the committee to develop a comprehensive research and technology plan to lead federal oil pollution research. Among other things, the plan must assess the current status of knowledge on oil pollution prevention, response, and mitigation technologies and effects of oil pollution on the environment; identify significant oil pollution research gaps; and establish research priorities. In addition, the chair is required to report every 2 years to Congress on the committee's past activities and future plans for oil pollution research. The Interagency Committee first prepared a research and technology plan in 1992 and subsequently updated it in 1997, but it has not been revised since. According to agency officials, the plan is currently undergoing revision, and they anticipate releasing the new plan in 2013; dispersants are to be a focus area in the plan. In March 2011, we issued a report reviewing the Interagency Committee's efforts to facilitate coordination of federal oil pollution research and made recommendations to improve these efforts. The Department of Homeland Security concurred with our recommendations and plans to address them.

Over 40 percent of the 106 federally funded research projects on dispersants have focused at least in part on effectiveness, with the remaining projects spread across a broad range of research areas, as noted in table 3. Specifically, federally funded dispersant research since fiscal year 2000 has included the following areas of study.

Effectiveness in dispersing oil. Of the 106 research projects on dispersants, the largest number were focused on assessing the effectiveness of chemical dispersants, and BSEE and EPA have funded almost all of these. Specifically, BSEE has funded projects on the effectiveness of dispersants on different types of oil and under specific environmental conditions. For example, one such project focused on the effectiveness of dispersant use on heavy oil, and another examined dispersant use in calm waters. BSEE has also conducted research to mimic at-sea conditions by using the Ohmsett wave tank testing facility in New Jersey to study the effectiveness of dispersants on light to medium oils when applied at typical application rates. EPA funded several projects related to dispersant testing protocols that are used to assess effectiveness, a key criterion required to list dispersants on the Product Schedule. For example, EPA funded a study to determine the effectiveness of eight dispersants on its Product Schedule in dispersing south Louisiana crude oil. In addition, EPA funded research conducted in a wave tank in Nova Scotia, Canada, that produced quantitative estimates of the mixing energy necessary for effective chemical dispersion under various sea states.

Fate and transport of chemically dispersed oil. Half of the federal agencies we reviewed have funded projects focused on better understanding the fate and transport of chemically dispersed oil, with over half of these studies initiated since the Deepwater Horizon incident. In particular, fate and transport was the focus of nearly half of the NSF grant projects.
For example, one NSF rapid response grantee studied the oil plume that resulted from the Deepwater Horizon incident using a specially designed, portable underwater mass spectrometer, which can measure minute quantities of chemicals in the ocean to determine the movement of the oil droplets. Other NSF projects focused on the interaction of oil and dispersed-oil components with sediments collected in regional sediment traps during the Deepwater Horizon incident, and on determining the impacts of dispersants on oil interactions with water column particulates and sedimentation. In addition, EPA has funded four projects that focus at least in part on the fate and transport of dispersed oil. For example, one project examined the impact of waves on the movement of dispersed oil and resulting oil droplet size. EPA also funded several projects focusing on the biodegradation rates of different types of oil and dispersant mixtures.

Aquatic toxicity and environmental effects of chemically dispersed oil. NOAA and NSF are the two primary agencies sponsoring research projects focused on assessing the toxicity and environmental effects of chemically dispersed oil—funding 18 of the 23 projects in this area. Specifically, NOAA has funded projects that focus on both the acute and chronic effects of chemically dispersed oil on certain marine species. For example, one project examined the acute and chronic effects of crude oil and chemically dispersed oil on chinook salmon smolts. In addition, NSF funded a research project examining the potential toxic effects of chemically dispersed oil on benthic—or sea floor—environments in the Gulf of Mexico. Another NSF-funded project is investigating the effects of oil and dispersants on the larval stages of blue crabs and any subsequent impact the oil and dispersants may have on population dynamics. All of NSF's projects in this area were in response to the Deepwater Horizon incident. EPA and BSEE also funded projects in this category, although fewer in number. For example, one EPA project focused on how the dispersion and weathering of dispersed oil affects the exposure of marine species to dispersed and non-dispersed oil. In response to the Deepwater Horizon incident, EPA funded a project focused on the toxic effects of (1) crude oil alone, (2) eight different dispersants alone, and (3) a mixture of crude oil and each of the dispersants on two Gulf marine species. In addition, BSEE funded a project completed in 2005 to examine the effects of oil and chemically dispersed oil on mussels and amphipods—a type of crustacean.

Modeling of chemically dispersed oil. Most of the agencies supported research projects focused on modeling chemically dispersed oil. For example, NOAA funded a project to model the way that chemically dispersed oil particles may combine with other particulate material in the ocean. In addition, four of NSF's grants were awarded to projects to model the impacts of the Deepwater Horizon incident and dispersant use, such as the effects on plankton and other offshore marine organisms, and BSEE funded a project that involved validating two models developed to predict the window of opportunity for dispersant use in the Gulf of Mexico.
While not specifically focused on modeling chemically dispersed oil, some projects are under way to improve three-dimensional modeling of ocean currents, which agency officials told us will be helpful in the event of a future oil spill. Specifically, NOAA received $1.3 million in supplemental funding related to the Deepwater Horizon incident to improve its modeling capabilities to better forecast the subsurface movement and distribution of oil, taking into account the subsurface currents. According to agency officials, the three-dimensional modeling will be a significant addition to the more standard two-dimensional modeling of oil along the surface that has historically been used to track oil trajectories. Similarly, the Department of the Interior's Bureau of Ocean Energy Management currently has a $989,000 modeling project under way to develop a new model for ocean currents and oil spills in the Gulf of Mexico. The enhanced models that both of these projects are developing may be applied in the future to model chemically dispersed oil and enhance decision making regarding its efficacy, fate, and transport.

Monitoring of chemically dispersed oil. Research in this area has been more limited, with four projects funded since fiscal year 2000, primarily by BSEE. One such project focused on SMART protocol monitoring results and the effectiveness of dispersants. Specifically, this project involved applying different ratios of dispersants to oil—ranging from ratios known to be effective at dispersing oil to ratios that were not effective at dispersing oil—to compare how well the SMART monitoring protocols were able to monitor the results of each type of application. The Coast Guard and BSEE also jointly funded a research project focused on analyzing SMART protocol monitoring data to verify the reliability of the protocols and to identify ways in which the protocols could be improved; NOAA and EPA provided assistance, but not funding, to this project. In addition, NOAA funded a project to evaluate dispersant application and monitoring techniques by using oil seeps originating naturally at the bottom of the ocean as a proxy for an oil spill.

Human health. HHS, through NIH and CDC, is the primary agency that researches possible human health effects of the use of dispersants. For example, CDC's NIOSH conducted laboratory tests involving short-term inhalation exposure of rats to the dispersant COREXIT® EC9500A to study the pulmonary, cardiovascular, and central-nervous-system responses. NIOSH also studied the dermal effects of dispersant exposure. In addition, the National Institute of Environmental Health Sciences (NIEHS) has funded an ongoing project through an NIH initiative called the Deepwater Horizon Research Consortia that will, among other things, analyze the contaminant profiles of seafood fished by subsistence and non-subsistence fishermen in the Gulf of Mexico and will analyze the seafood samples for dispersant residues. In addition, NIEHS funded a joint NIH research project with EPA to evaluate the extent of dispersants' effects, if any, on endocrine disruption in human cell lines, among other toxicity markers. EPA also funded one research project that focused on in vitro testing of eight oil dispersants to assess four human health toxicity markers. Moreover, NIEHS launched the Gulf Long-term Follow-up (GuLF) Study to investigate potential short- and long-term human health effects associated with clean-up activities following the Deepwater Horizon incident.
The GuLF Study is expected to involve at least 40,000 clean-up workers and last for at least 10 years, according to agency officials, and the first 5 years of the study have been funded at $34 million. Through its interviews with clean-up workers, the GuLF Study will examine potential exposures and health effects from a variety of substances and will also try to assess the extent of exposure to dispersants.

Research on subsurface application of dispersants. Prior to the Deepwater Horizon incident, federal agencies had not funded research on the subsurface application of dispersants in deep water. Since then, NSF has funded three rapid response grant projects that focus on subsurface application of dispersants and its effects. For example, one project is using specialty instruments to detect and quantify oil and dispersed oil in the deep waters of the Gulf of Mexico. Another NSF project is looking at the acute toxicity effects of oil and chemically dispersed oil on the benthic communities in the deep water of the Gulf of Mexico. The last project is studying the impact of chemical dispersants on the aggregation of oil into oil droplets in the deep water. BSEE has tentative plans to fund research on subsurface application of dispersants in fiscal year 2012. EPA, NOAA, and the Coast Guard do not have any current research related to subsurface dispersant use in the deep water, according to agency officials.

Arctic environment dispersant research. Federal research related to the use of chemical dispersants in an Arctic or cold water environment has been somewhat limited, with only eight projects undertaken since fiscal year 2000. For example, one of BSEE's six funded projects examined the effectiveness of dispersants in broken-ice conditions, which are fairly common many months out of the year off the Alaskan coast. Another project studied dispersant effectiveness in a low mixing energy environment, which could be caused by the presence of ice cover in the Arctic. Similarly, an ongoing project is examining new techniques to apply dispersants in icy environments in which the waves are smaller because of the presence of ice and, as a result, less mixing generally occurs. In addition, EPA funded two studies that focused on the fate and transport of chemically dispersed oil at different temperatures, including in cold water. EPA is also collaborating with other members of the National Response Team and the Alaska Regional Response Team to understand the unique aspects of potential Arctic oil spills with respect to the authorization and use of dispersants in order to inform and prioritize research needs.

Alternative dispersant formulations. Prior to the Deepwater Horizon incident, federal agencies had not funded research on alternatives to the current blends of chemical dispersants used to disperse oil. Since the Deepwater Horizon incident, NSF has funded four projects in this area. Specifically, one project is studying natural and synthetic biological agents as alternatives to chemical dispersants for application in marine oil spills. Another study is evaluating the potential usefulness of man-made nanofiber materials as an alternative to chemical dispersants in marine oil spills. The third study is examining the difference in efficacy of natural and synthetic surfactants, which may help with the development of less toxic dispersants. The final project is focusing on the development of bio-derived, biodegradable oil dispersants.
General. Research in this category includes efforts to synthesize information and identify broad applications of dispersant knowledge, such as improving dispersant decision making processes and educational efforts. For example, three agencies—the Coast Guard, BSEE, and NOAA—provided funding for the 2005 National Academy of Sciences report. This report provided an expert evaluation of the adequacy of existing information and ongoing research regarding the effectiveness and effects of dispersants and recommended steps to be taken to better support policymakers with dispersant decision making. In addition, BSEE funded three other general projects, including one that focused on developing a training package on the use of chemical dispersants for the Ohmsett wave tank testing facility. Another BSEE project studied the operational and environmental factors associated with the use of chemical dispersants to treat oil spills in California waters, with the goal of expediting dispersant use decision making and planning for such spills.

In addition to federally funded dispersant research, the oil industry has funded a number of past and ongoing research projects related to the use and effects of chemical dispersants. These projects have been conducted collaboratively through industry trade associations or across multiple companies, by individual companies, and through an independent research initiative. According to industry representatives, the industry has committed over $20 million to fund the American Petroleum Institute's and the International Association of Oil & Gas Producers' dispersant programs. These projects generally began in 2011 and are anticipated to end by 2016. Specifically, the American Petroleum Institute is currently leading a set of dispersant-related projects involving several oil companies and oil spill response organizations, among others. According to industry representatives, a significant part of this research will focus on the subsurface use of dispersants in deep water, ice-free environments. In addition, the International Association of Oil & Gas Producers is pursuing two dispersant research initiatives. One initiative, the Oil Spill Response Joint Industry Project, will focus on the fate and effects of subsurface dispersant use and the tracking and modeling of dispersed oil, among other things. A second initiative, the Arctic Oil Spill Response Technology Joint Industry Programme, includes research on dispersant use in the Arctic. Specifically, the dispersant portion of this project is investigating the fate and transport of chemically dispersed oil under ice and dispersant effectiveness testing in Arctic environments, as well as the environmental impacts of Arctic spills and options for responding to them. Shell representatives told us that nine oil companies are participating in the Arctic research project and that this project is building on earlier Arctic research conducted by SINTEF, a Norwegian research institute.

Individual oil companies, including ExxonMobil and Shell, have also invested in dispersant research projects together and separately. For example, Shell, ExxonMobil, Statoil, British Petroleum, and ConocoPhillips have funded a project to study the biodegradation of physically and chemically dispersed oil and its toxicity on Arctic species in Alaska.
According to Shell representatives, this project started in 2009, in response to concerns from Coast Guard and NOAA officials that the agencies did not have sufficient information to conduct an assessment of potential ecological risk for the North Slope of Alaska. The five oil companies provided funding to NewFields, a private consulting firm, and the University of Alaska at Fairbanks to conduct the research. Federal agencies, including NOAA, EPA, and the Coast Guard, are part of a technical advisory committee overseeing this research project. Shell representatives told us that this project has been funded at a total cost of about $2.5 million. Individual oil companies have also funded chemical dispersant research on their own. For example, industry representatives for ExxonMobil estimated that the company has funded more than $20 million for dispersant research since 2000.

In addition to industry-led research efforts, British Petroleum has set up an independent group, the Gulf of Mexico Research Initiative, to disburse $500 million in research funds over 10 years to study the effects of the Deepwater Horizon incident, as well as other oil spills, on the Gulf of Mexico. A portion of this funding will be for dispersant research. For example, Tulane University is leading a consortium of over 40 researchers to conduct a roughly $10 million project to examine the science and technology of chemical dispersants as relevant to deep water oil releases.

States, organizations, and foreign governments have also funded dispersant research. States, including California and Texas, have funded dispersant research on topics including the toxicity of dispersed oil on certain species, but they are not currently funding such work because of limited funding or competing research priorities. Specifically, California's Office of Spill Prevention and Response funded a number of research projects from 1993 through 2011 related to the use of chemical dispersants, at an estimated cost of about $2 million. For example, one project studied the physical effects on a marine bird or otter diving through a subsurface plume of chemically dispersed oil. Another funded research project focused on the acute and chronic toxic effects of dispersants on salmon larvae, according to agency officials. Texas has also funded dispersant research projects. According to a state official, the Texas General Land Office spent several million dollars on dispersant research from the mid-1990s through the early 2000s. For example, one project studied the behavior of chemically dispersed oil in a wetland environment. However, the state official told us that dispersant research is no longer a priority for Texas because federal agencies, including BSEE and NOAA, are currently conducting dispersant research and that his office prefers to spend the state's limited research funds on other aspects of oil spill response that need attention, such as improving buoys to measure waves and ocean currents in order to inform oil spill modeling.

The Prince William Sound Regional Citizens' Advisory Council, an independent non-profit organization established after the Exxon Valdez spill that works to reduce pollution from crude oil transportation through Prince William Sound and the Gulf of Alaska, has also supported work related to the use of chemical dispersants, including organizing a conference in March 2011 focused on the future of dispersant use, with experts addressing the novel uses of dispersants during the Deepwater Horizon incident.
In addition, Canada's Department of Fisheries and Oceans has funded dispersant research, such as fish toxicity studies and effectiveness studies. This department also collaborated with EPA and the Bedford Institute of Oceanography to build a 32-meter wave tank, which was completed in 2006. Both countries use this wave tank for research purposes, such as to measure the biological effects of various oil, dispersant, and sea water blends by mimicking different ocean conditions in the lab. Lastly, the United Kingdom has also funded dispersant toxicity research to establish assessment criteria for dispersant approval.

According to federal officials, experts, and specialists we spoke with, federal agencies and researchers face resource, scientific, and communication challenges in their attempts to enhance knowledge on chemical dispersant use and its effects.

Resource challenges. Agency officials, experts, and specialists identified inconsistent and limited levels of funding as a challenge to developing research related to the use and effects of chemical dispersants. Specifically, according to agency officials, experts, and industry representatives, because support for dispersant research tends to increase in the immediate aftermath of a major oil spill and decrease in the years following a spill, it is difficult for federal agencies, states, and industry to sustain a long-term research program. For example, agency officials told us that while there was an increase in research funding specifically related to the Deepwater Horizon incident, this funding is not expected to continue in the future. Some agency officials, as well as some industry representatives and experts, told us that a similar pattern occurred after the 1989 Exxon Valdez oil spill, with a temporary increase in research funding following the spill. However, once those initial research funds were allocated, very little research funding was available again until after the Deepwater Horizon oil spill. In addition, some industry representatives told us that maintaining a long-term focus for dispersant research can be a challenge for industry groups, as there are many different oil spill research priorities and responsibilities in addition to dispersants. According to agency officials and a National Research Council report, the lack of a consistent research funding stream also makes it difficult for federal agencies to fund longer term projects. For example, some agency officials and experts said that to understand the chronic toxicological effects of dispersants, scientists would need to design long-term, multiyear studies of the effects of dispersant use on marine species; however, such longer term studies are more expensive and more complicated to conduct than short-term acute toxicity tests. Furthermore, although most of the key agencies conducting research on dispersant use and effects have identified areas in which additional dispersant-related research would be informative and aid with decision making, officials from many of these agencies told us their agencies are unable to fund this research given their limited budgets. Some state officials we spoke with echoed similar concerns and said that they have been unable to continue with research in this area.

Scientific challenges. Agency officials, experts, and specialists also identified scientific challenges, in particular, conducting research that replicates realistic oil spill conditions and obtaining oil and dispersants for testing.
Every oil spill is different, and the conditions—such as weather, oil type and volume, currents, and location—surrounding any unanticipated release of oil into the ocean are highly variable. Given this variability, no one study can account for all the potential permutations. Laboratory experiments are useful for determining the chemical effectiveness of dispersants, but they are unable to approximate ocean conditions given the difference in scale. Researchers can employ alternative methods to try to replicate realistic oil spill conditions for the purposes of conducting dispersant research—use of a wave tank, use of an existing spill, or the intentional release of oil to create a new spill—but each of these has its own drawbacks.

Wave tanks. As described earlier, two wave tanks are regularly used in North America—one in New Jersey and the other in Canada. The tanks provide an arena in which oil spills can be created in a body of water without risks to the environment; however, unlike the open ocean, the size of the tank and presence of walls constrain the movement of the oil and water and do not fully account for ocean currents. According to EPA researchers, the tank in Canada is able to come close in terms of simulating breaking wave action and ocean currents, and according to BSEE officials, the tank in New Jersey is able to simulate waves up to 1 meter in height. However, neither of these wave tanks is equipped to simulate the high pressure and dark conditions present in the deep water.

Existing spills. An opportunity exists to conduct research on the use of chemical dispersants during an oil spill and to obtain real world information that can help address some of the identified research gaps, but agency officials and experts told us that it is hard to conduct rigorous scientific research because of the competing needs of oil spill responders. For example, one expert told us that a research team may have access to sample and test water in a given spill location but may later be restricted from sampling from the same area because of actions being taken to respond to the spill. In addition, some agency officials told us that it is virtually impossible to conduct scientifically sound research during an oil spill emergency because there is not enough time to carefully design and execute research projects.

Intentional discharges. In the absence of an unexpected spill, another option to conduct dispersant-related research could come through the intentional discharge of oil for the express purpose of studying how the oil responds with or without the application of dispersants. However, agency officials, experts, and industry representatives told us that it is very difficult to gain approval for an intentional discharge of oil into the ocean for research purposes. EPA officials told us that states must first approve such a discharge before any applications for a permit to discharge come to EPA for review. The few applications attempted did not receive state approval. These officials also told us that EPA received and granted only one permit, in 1994, for an intentional oil discharge to U.S. waters for research on a bioremediation agent. Because open ocean experiments are generally not conducted in the United States, researchers have traveled abroad, including to Norway and Canada, to do such testing.
According to officials on the Interagency Coordinating Committee on Oil Pollution Research, the Ocean Energy Safety Advisory Committee, and the American Petroleum Institute, there is growing interest in exploring intentional discharges of oil in controlled settings for research purposes.

Another scientific challenge to conducting dispersant-related research is the accessibility of oil and dispersant samples for testing. Several agency officials, specialists, and experts told us that it can be difficult and time consuming to access oil and dispersants to conduct dispersant research. For example, one expert told us that she has been waiting for several months to receive the oil she requested from an oil company for her research, thus delaying her entire project. An industry representative also told us that access to oil and dispersants could be a challenge for researchers because of liability concerns from the companies that produce them, as these companies do not want to be held responsible for any liability if a research project goes badly or either substance spills into the environment.

Communication challenges. Agency officials, experts, and specialists told us that it can be a challenge to communicate research across the different groups involved in dispersant use and research, including federal agencies, industry, and academia. Agency officials and industry representatives noted that the oil spill response research community is small and that awareness of each other's work is based on informal interactions, such as at workshops, meetings, and conferences. Agency officials and industry representatives we spoke with told us they are generally aware of each other's research, but there is additional research that may not be readily known, such as research undertaken by academia. Some officials also noted that research across these different groups can be hard to track, a task that only gets more difficult following an event like the Deepwater Horizon incident, when there are many new studies under way at once because of the increased attention and funding. In addition, according to agency officials, many oil spill research projects are reported in conference proceedings, such as the Arctic and Marine Oilspill Program Technical Seminar on Environmental Contamination and Response and the International Oil Spill Conference, but these proceedings are not covered in commonly used search engines, such as Web of Science.

Some organizations have attempted to develop lists of dispersant-related research, but there is no comprehensive mechanism or database that tracks this research across all sources, includes both past and ongoing research projects, and is regularly updated. For example, the Interagency Coordinating Committee on Oil Pollution Research maintains a list of federally sponsored oil spill related research, including research on dispersants, from which it publishes biennial reports containing short summaries of the federal research projects completed during the prior 2 years. However, these reports are intended only to summarize federal research efforts and do not track or cross-reference related research that has been funded solely by industry or non-governmental sources.
Several other organizations have gathered dispersant research information in various types of databases or bibliographies, including those maintained by the Louisiana Universities Marine Consortium, the Prince William Sound Regional Citizens' Advisory Council, and the CRRC, but none of these lists includes the full range of past and current federal, industry, and academic research on the topic. For example, the Louisiana Universities Marine Consortium developed a database of citations found in journals, conference proceedings, and government reports covering published research on oil spill dispersants from 1960 through June 2008, but the database has not been updated. In addition, the Prince William Sound Regional Citizens' Advisory Council maintains a similar database of citations of published literature on dispersants; however, this database does not track ongoing projects. Also, CRRC's list describes approximately 100 past and current research projects but contains fewer research projects than the other lists. According to some specialists we spoke with, a central repository of past and ongoing research would help ensure that future research plans align with current needs and that new research will not duplicate prior research. It would also help ensure the transfer of knowledge and experience between different groups and generations of researchers and responders, so that key lessons and insights do not get lost from one spill to the next.

Some agency officials, experts, and specialists expressed concerns about the independence and quality of dispersant research, which can lead to mistrust and misperceptions about the results. For example, one expert told us that industry research may not be fully independent, in that industry groups would not want to publish research results demonstrating that dispersants are harmful in any way. Moreover, some agency officials told us there is a concern in the oil spill research community that industry researchers do not necessarily use the same peer review process for validating their results as is used by government or academia, raising concerns about the reliability of the research. Conversely, some specialists and one expert noted that because of limited experience in actual spill response, many academic researchers do not design and conduct studies that reflect realistic spill scenarios, which can skew the results or make them less useful for making decisions during a spill. In addition, as previously mentioned, not all dispersant research is conducted using consistent methodological approaches, which limits its comparability and usefulness in drawing broader conclusions.

In addition to communication challenges that may exist among the different groups involved in dispersant research, some agency officials, experts, and specialists we spoke with noted challenges in communicating scientific information to the public. According to proceedings from a NOAA-sponsored workshop on dispersant use, communication to the public—as well as to federal, state, and local agencies—was seen as one of the largest issues during the Deepwater Horizon incident. For example, a series of local community meetings were held during the response, at which response specialists were on hand to address specific stakeholder questions.
From these sessions, it was clear to the response specialists that members of the community had many misconceptions about dispersants, specifically with regard to their degradation, toxicity, and application, as well as ways in which to monitor them.

Ocean oil spills can have devastating effects on the environment, coating coastlines and wetlands and killing marine mammals, birds, fish, and other wildlife. Chemical dispersants are one tool that responders have at their disposal to try to mitigate the consequences of a spill. Much is known about the use of dispersants—particularly on the surface of the water and in temperate climates—and federal agencies, industry, states, and other groups have taken steps to enhance knowledge on dispersants. However, gaps remain, and less is known about the application and effects of dispersants applied subsurface to underwater spills and to spills in the Arctic or colder environments. Because future domestic oil production will rely to a substantial extent on developing additional wells in challenging environments, such as deep waters and the Arctic Ocean, researching dispersant use in these environments will be key to improving decision makers' understanding of the potential consequences of using dispersants in these situations. Some research related to application below the surface and in Arctic conditions is under way, and the Interagency Coordinating Committee on Oil Pollution Research is currently working to revise its research and technology plan to address some gaps, including those related to dispersant use.

To make decisions about whether to use dispersants, decision makers need timely and reliable scientific information on the trade-offs between the risks that untreated oil poses to the water surface and shoreline habitats and the risks that chemically dispersed oil poses to underwater environments. This information must be available before a spill happens and incorporated into response planning, as the decision to use dispersants must be made quickly, and an emergency situation provides no time for designing new research. Because years may pass between spill events, information on dispersant use must also be available to responders and researchers who may have limited experience in using and applying dispersants as a response option.

Some groups, including the Interagency Coordinating Committee on Oil Pollution Research, have developed lists of past or ongoing federal research projects related to dispersants, but there currently is no mechanism that tracks dispersant research across all sources and highlights key recent and ongoing research projects. Dissemination of such information would help ensure that new federal research will not duplicate prior research and that key knowledge can more easily transfer from one spill or generation of researchers and responders to the next. Moreover, the Interagency Committee is in a prime position to request such information from non-federal sources in the course of fulfilling its mission to coordinate a comprehensive program of oil pollution research among federal agencies, in cooperation and coordination with industry, universities, research institutions, state governments, and other nations.
Up-to-date information on the findings of key research on dispersant use and its effects is essential to ensuring that federal research priorities, as articulated in the research and technology plan currently being revised, are effectively targeting the most important research needs. Gaining a full understanding of the effectiveness and potential environmental effects of dispersant use is difficult to accomplish in a laboratory setting, and more difficult still during a spill, given the competing needs of oil spill responders. However, it is during a spill that the greatest opportunity exists to gather real-world data to help address some of the identified research gaps. While some information is currently gathered during response operations, it is primarily limited to whether the oil on the surface is breaking into small droplets and entering the water column. Specifically, the SMART monitoring protocols currently used during a spill response gather information on whether chemical dispersants should continue to be used, but these protocols do not provide robust scientific information on dispersant use and effects. Furthermore, these monitoring protocols are designed for use with surface application of dispersants and do not monitor dispersed oil resulting from deep water dispersant application. NOAA recognized such limitations in its recent review of the SMART data from dispersant monitoring during the Deepwater Horizon incident and has acknowledged that improvements could be made.

To ensure existing and ongoing dispersant research is adequately captured and broadly available to different groups and generations of researchers, to ensure that new research undertaken by the federal government will not duplicate other research efforts, and to ensure that adequate attention is given to better understanding dispersant use in deep water and Arctic environments, we recommend that the Commandant of the Coast Guard direct the Chair of the Interagency Coordinating Committee on Oil Pollution Research to take the following two actions, in coordination with member agencies:

• Ensure that, in the course of revising the Interagency Committee's research and technology plan, applications of dispersants subsurface and in Arctic conditions are among the areas prioritized for subsequent research.

• As part of the Interagency Committee's efforts to help guide federal research, identify information on key ongoing dispersant-related research, including research sponsored by state governments, industry, academia, and other oil pollution research organizations. This information should be provided in the planned and future revisions to the research and technology plan. In addition, periodically update and disseminate this information, for example, as part of the Interagency Committee's biennial report to Congress on its activities.

To enhance knowledge of the effectiveness and potential environmental effects of chemical dispersants, we recommend that the Secretaries of Commerce and the Interior, the Administrator of EPA, and the Commandant of the Coast Guard direct their respective agencies, NOAA, BSEE, EPA, and the Coast Guard, to coordinate and explore ways to better obtain more scientifically robust information during spills, without hindering response efforts, through enhancement of monitoring protocols and the development of new data collection tools.
We provided a draft of this report to the Department of Commerce, the Department of Health and Human Services, the Department of Homeland Security, the Department of the Interior, the Environmental Protection Agency, and the National Science Foundation for review and comment. DHS concurred with all three recommendations made to it. Commerce and Interior concurred with the recommendation directing them to explore ways to better obtain more scientifically robust information during spills. While EPA did not directly state whether it concurred with that recommendation, the agency generally agreed, noting that it is committed to exploring ways to coordinate with other agencies to better obtain more scientifically robust information during spills, enhance monitoring protocols, and develop new data collection tools. In addition, Commerce, HHS, Interior, EPA, and NSF provided us with technical comments, which we have incorporated as appropriate. See appendixes IV, V, VI, and VII for agency comment letters from Commerce, DHS, Interior, and EPA, respectively.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Commerce, Health and Human Services, Homeland Security, and the Interior; the EPA Administrator; the Director of the National Science Foundation; the appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII.

Our objectives were to examine (1) what is known about the use of chemical dispersants and their effects, and knowledge gaps about or limitations to their use, if any; (2) the extent to which federal agencies and other entities have taken steps to enhance knowledge on chemical dispersant use and its effects; and (3) challenges, if any, that researchers and federal agencies face in their attempts to enhance knowledge on chemical dispersant use and its effects. To determine what is known about the use and effects of chemical dispersants and identify any knowledge gaps or limitations, we reviewed documents and literature, including federal regulations and government oil spill planning documents, such as the National Contingency Plan, Regional Contingency Plans, and dispersant guidelines. We also reviewed scientific studies and key reports on dispersant use, such as the 2005 National Academy of Sciences (NAS) report Oil Spill Dispersants: Efficacy and Effects; the report to the President by the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling; and several Coast Guard and National Oceanic and Atmospheric Administration (NOAA) reports on response actions during the Deepwater Horizon incident. We used these documents to determine areas of research that inform planning and decision making regarding the use of chemical dispersants. In addition, we collaborated with the NAS to identify 11 academic, industry, and other researchers recognized as experts in their respective scientific fields and capable of advising us on chemical dispersant use and research.
In the report, these scientists and researchers are referred to as "experts." NAS staff selected these experts based on their knowledge of one or more of the following topic areas: dispersant effectiveness, toxicity of dispersants and dispersed oil, fate and transport of dispersants and dispersed oil, monitoring actual dispersant use, risk assessment of dispersant use, other environmental effects of dispersant use, and challenges to dispersant research. In addition, NAS staff sought experts representing a wide range of viewpoints, including some experts who had experience with the Deepwater Horizon incident. In developing the list of experts, NAS staff consulted with NAS Ocean Studies Board members and volunteers from past and ongoing NAS studies on relevant topics to identify potential experts. NAS staff also performed literature reviews and targeted Internet searches based on the topic areas and questions identified by GAO. NAS staff composed the list of experts by identifying a range of expertise among prospective experts and then performing short interviews with them to discuss potential biases and any possible conflicts of interest, ensure that viewpoints were balanced, and confirm that some of the experts had experience with the Deepwater Horizon incident.

We conducted semi-structured interviews with these experts to discuss the state of knowledge, including gaps, regarding dispersant research. We used a standard set of questions, asking the same questions in the same order to each expert. We carefully documented and analyzed expert responses to address our objectives and establish common themes. We used the following categories to quantify the responses of experts: "some" refers to responses from 2 to 4 experts, "many" refers to responses from 5 to 7 experts, "most" refers to responses from 8 to 10 experts, and "all" refers to responses from all 11 experts.

We supplemented our semi-structured expert interviews with interviews of federal officials and other oil spill or dispersant specialists, including state officials who have been involved in past response actions, human health researchers, oil spill response organizations with expertise in applying chemical dispersants, industry representatives with experience in researching oil dispersants and responding to oil spills, and other relevant non-governmental organizations, such as a regional advisory group focused on environmental protection as it relates to oil production and transportation. Statements from these groups are identified as being from "specialists." During the course of our review, we spoke with 37 specialists. For the purposes of our interview analysis, in cases where multiple specialists were present during one interview but each provided their own views, we counted each specialist separately. We used the following categories to quantify the responses of specialists: "some" refers to responses from 2 to 4 specialists, "several" refers to responses from 5 to 8 specialists, and "many" refers to responses from 9 or more specialists.
To determine the extent to which federal agencies and other entities have taken steps to enhance knowledge on chemical dispersant use and its effects, and what challenges, if any, researchers have faced, we analyzed information on federal research efforts since fiscal year 2000 supplied by the key federal agencies conducting research on dispersant use and effects: the Department of the Interior's Bureau of Safety and Environmental Enforcement (BSEE), the Department of Homeland Security's United States Coast Guard, the Environmental Protection Agency (EPA), the Department of Health and Human Services' (HHS) National Institutes of Health (NIH) and Centers for Disease Control and Prevention (CDC), the Department of Commerce's NOAA, and the National Science Foundation (NSF). This information included titles, funding levels, and a brief description of agency research projects. To assess the reliability of agency-supplied data, we asked the agencies to describe how they gathered this information, including their data reliability controls; we also checked the lists that the agencies provided to us against other publicly available lists of dispersant research projects to help ensure consistency and completeness. We then categorized each of the dispersant research projects into one or two research areas and sent these categorizations back to the agencies for their concurrence. We also conducted interviews with federal officials from these agencies to obtain their perspectives on the extent and focus of their research efforts and what challenges, if any, they have faced. In addition, we analyzed information supplied by, and conducted interviews with, specialists to obtain their perspectives on dispersant research efforts and potential associated challenges.

We also attended a NOAA-funded workshop on the future of dispersant use to gather information on both the state of knowledge and ongoing research, as well as an industry-funded workshop of key federal, state, and local responders, academic researchers, and other stakeholders who could potentially be affected by an accidental offshore oil spill along the Eastern seaboard of the United States. At these workshops we collected written materials, listened to presentations, and spoke with specialists in attendance.

We conducted this performance audit from March 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Michel Boufadel, Temple University
James Clark, Independent Consultant (retired, ExxonMobil Research and Engineering)
Cortis Cooper, Chevron Energy Technology Company
Sara Edge, Harbor Branch Oceanographic Institute
Merv Fingas, Independent Consultant (retired, Environment Canada)
Kenneth Lee, Fisheries and Oceans Canada
Judith McDowell, Woods Hole Oceanographic Institution
Francois Merlin, Centre of Documentation, Research and Experimentation on Accidental Water Pollution (CEDRE)
Jacqueline Michel, Research Planning, Inc.

The following is a listing of federally sponsored research projects related to dispersants. The title and initial funding year for each dispersant project was supplied by the respective agency.
We requested this information from the following agencies: the Department of the Interior's Bureau of Safety and Environmental Enforcement (BSEE), the Department of Homeland Security's United States Coast Guard (Coast Guard), the Environmental Protection Agency (EPA), the Department of Health and Human Services' (HHS) National Institutes of Health (NIH) and Centers for Disease Control and Prevention (CDC), the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation (NSF).

In addition to the individual named above, Elizabeth Erdmann (Assistant Director), Antoinette Capaccio, Margaret Childs, Cindy Gilbert, Ryan Gottschall, Rebecca Makar, Alison O'Neill, and Jena Sinkfield made key contributions to this report.
In April 2010, an explosion onboard the Deepwater Horizon drilling rig in the Gulf of Mexico led to a release of approximately 206 million gallons of oil. When an oil spill occurs, responders have several options for managing the environmental impacts, including using chemical dispersants to break the oil into smaller droplets, which can promote biodegradation and help prevent oil from coming ashore. GAO was asked to review (1) what is known about the use of chemical dispersants and their effects, and any knowledge gaps or limitations; (2) the extent to which federal agencies and other entities have taken steps to enhance knowledge on dispersant use and its effects; and (3) challenges, if any, that researchers and federal agencies face in their attempts to enhance knowledge. GAO collaborated with the National Academy of Sciences to identify and recruit experts on dispersant use; conducted interviews with these experts, agency officials, and other specialists; and reviewed key documents and reports.

According to experts, agency officials, and specialists, much is known about the use of chemical dispersants on the surface of the water, but gaps remain in several research areas. For example, experts generally agreed that there is a basic understanding of the processes that influence where and how oil travels through the water, but that more research was needed to quantify the actual rate at which dispersants biodegrade. In addition, all the experts GAO spoke with said that little is known about the application and effects of dispersants applied subsurface, noting that specific environmental conditions, such as higher pressures, may influence dispersants' effectiveness. Knowledge about the use and effectiveness of dispersants in the Arctic is also limited, with less research conducted on dispersant use there than in temperate or tropical climates. For example, one expert noted that more research is needed on biodegradation rates for oil in the Arctic because the cold temperature may slow the process down.

Federal agencies have funded over $15.5 million in dispersant-related research since fiscal year 2000, with more than half of the total funding occurring since the Deepwater Horizon incident. Most of these 106 projects were funded by the Department of the Interior's Bureau of Safety and Environmental Enforcement (BSEE), the National Science Foundation (NSF), and the Environmental Protection Agency (EPA). Over 40 percent of the research projects were focused at least in part on testing dispersant effectiveness. For example, BSEE funded 28 projects on the efficacy of dispersants on different types of oil and under different ocean conditions. In contrast, relatively few projects were focused on applying dispersants subsurface or in the Arctic. Specifically, NSF funded three projects looking at the use and effects of subsurface dispersant application, and BSEE and EPA funded the eight projects related to the use of chemical dispersants in Arctic or cold water environments.

Researchers face resource, scientific, and communication challenges related to dispersant research. Agency officials, experts, and specialists identified inconsistent and limited levels of funding as a challenge to developing research on the use and effects of chemical dispersants.
For example, because support for dispersant research fluctuates, with temporary increases following a major spill, it is difficult for federal agencies to fund longer-term studies, such as those needed to understand the chronic toxicological effects of dispersants. In addition, researchers face scientific challenges with respect to dispersants, including being able to conduct research that replicates realistic oil spill conditions. Conducting research in the open ocean faces several logistical barriers, and laboratory experiments are unable to fully approximate the scale and complexity of ocean conditions. Lastly, agency officials, experts, and specialists told GAO that it can be a challenge to communicate and track research. Although some organizations have attempted to compile lists of dispersant-related research, currently there is no mechanism that tracks dispersant research across all sources and highlights past and ongoing research projects. For example, the Interagency Coordinating Committee on Oil Pollution Research—a multi-agency committee chaired by the Coast Guard—maintains a list of federally sponsored oil spill related research but does not track or cross-reference related research that has been funded solely by industry or nongovernmental sources.

GAO recommends, among other things, that the Interagency Coordinating Committee on Oil Pollution Research periodically provide updated information on key dispersant research by nonfederal sources. Also, the Interagency Committee should ensure that subsurface and Arctic applications are among the future priority research areas. The Departments of the Interior, Commerce, and Homeland Security and the EPA generally concurred with the recommendations made to them.
The MHS is a complex organization that provides health services to almost 10 million beneficiaries across a range of care venues, including the battlefield, traditional hospitals and clinics at stationary locations, and authorized civilian providers. Responsibility for the delivery of care is shared among the Office of the Assistant Secretary of Defense for Health Affairs (OASD HA), the military services, and the DHA. The OASD HA reports to the Under Secretary of Defense for Personnel and Readiness, who in turn reports to the Secretary of Defense, whereas the Army, the Navy, and the Air Force medical commands and agencies report through their Service Chiefs to their respective Military Department Secretary and then to the Secretary of Defense. The OASD HA manages the Defense Health Program appropriation, which funds the service medical departments, but the military treatment facilities, including hospitals and clinics, are under the direction and control of the services, which maintain the responsibility to staff, train, and equip those commands to meet mission requirements. The MHS collaboratively develops strategy to meet policy directives and targets, with the service components, the DHA, or both responsible for execution. See figure 1 for the current MHS organizational structure.

Decision making within the MHS reflects the many actors and the complex nature of this relationship. Important decisions are made collaboratively by a number of bodies with representation from each service and the DHA throughout the decision-making process. See figure 2 for a diagram of the MHS governance structure. The roles and responsibilities within the MHS governance structure are as follows:

The Military Health System Executive Review, which is chaired by the Under Secretary of Defense for Personnel and Readiness and includes other members such as the Vice Chiefs of Staff of the three services as well as the Director of the Joint Staff, serves as a senior-level forum for DOD leadership discussion of strategic, transitional, and emerging issues facing the MHS.

The Senior Military Medical Action Council, which is chaired by the Assistant Secretary of Defense for Health Affairs and includes the service Surgeons General, the DHA Director, and others, presents enterprise-level guidance and operational issues for decision making by the Assistant Secretary of Defense for Health Affairs.

The Medical Deputies Action Group, which consists of the service Deputy Surgeons General, the Joint Staff Surgeon, and a DHA representative, and is chaired by the Principal Deputy Assistant Secretary of Defense for Health Affairs, reports to the Senior Military Medical Action Council to ensure that actions are coordinated across the MHS and are in alignment with the strategy, policies, directives, and initiatives of the MHS.

Reporting to the Medical Deputies Action Group are four supporting governing bodies, consisting of flag or general officers from the service medical departments and senior executives from the DHA:

The Medical Operations Group carries out assigned tasks and provides enterprise-wide oversight of the direct and purchased care systems.

The Medical Business Operations Group provides a forum for resource management input on direct and purchased care issues.

The Manpower and Personnel Operations Workgroup supports centralized, coordinated policy execution and guidance for the development of coordinated human resources and personnel policies and procedures for the MHS.
The Enhanced Multi-Service Markets Leadership Group provides a forum for managers of geographic MHS markets to discuss clinical and business issues, policies, performance standards, and opportunities.

DOD established the DHA to assume management responsibility for numerous functions of its medical health care system. The DHA supports the delivery of services to MHS beneficiaries and is responsible for integrating clinical and business processes across the MHS. The DHA also exercises management responsibility for the ten joint shared services and the military's health plan and oversees the medical operations within the National Capital Region, which include those at the Walter Reed National Military Medical Center and at the Fort Belvoir Community Hospital. See figure 3 for the organizational structure of the DHA.

According to DOD, a "shared services concept" is a combination of common services performed across the medical community to reduce variation, eliminate redundant processes, and improve performance. Further, according to DOD, the overall purpose and core measure of success for all shared services is the achievement of cost savings. The DHA stood up the following ten shared services during its first year of operation, bringing together elements from the Army, the Navy, the Air Force, and the former TRICARE Management Activity:

Budget and Resource Management promotes the cost-effective use of program and budgeted funds, increased reimbursements, and improved financial transparency and utilization in support of the MHS, and encompasses financial management activities, including cost accounting and billing to other health insurance providers as well as to interagency entities.

Contracting and Procurement centralizes the strategy for the acquisition of goods and services to meet the needs of shared services and common functions and product lines.

Facility Planning centralizes enterprise facility planning requirements to better tailor investment decisions to meet future needs and to build and operate less space while better meeting the mission. Additionally, the DHA establishes and strengthens enterprise standards, standard business processes, and performance measurement functions, decreasing variance across the entire facilities business.

Health Information Technology consolidates health information technology services—information technology management, infrastructure, and applications—under the management of the DHA, creating a single point of accountability for the delivery of health information technology services to MHS customers.

Medical Education and Training provides administrative support; academic review and policy oversight; and professional development, sustainment, and program management to the military departments' medical services, the combatant commands, and the Joint Staff.

Medical Logistics standardizes clinical demand signals for medical supplies, equipment, and housekeeping services, and establishes DHA oversight of compliance with best purchasing practices across the MHS.

Medical Research, Development, and Acquisition executes specific activities to improve coordination, process efficiency, and output quality across the enterprise, producing greater operational efficiency and reducing research costs, allowing DOD to recapture funds that can be reinvested into additional research programs.
Pharmacy delivers and centrally manages funding for enterprise-wide pharmacy programs, services, and initiatives, and is responsible for leading the strategy, management, and oversight of pharmacy operations across the enterprise.

Public Health consolidates and centralizes governance for all appropriate public health product lines, including Deployment Health, Health Surveillance, and other processes that promote health and manage population and individual health risks, to field a fit and medically ready force.

TRICARE Health Plan supports the MHS integrated health delivery system and the purchase of health care services contracts, executes the requirements determined by the integrated health delivery system, and provides assurance that those requirements are purchased and implemented effectively.

To accomplish the purposes listed above, some of these shared services are composed of a number of projects or "product lines." For instance, within the Budget and Resource Management shared service, DOD identified three product lines, which involve the (1) implementation of a common cost accounting structure throughout the three military services in support of DHA budget operations; (2) standardization of medical record coding procedures throughout the three military departments through the establishment of a Medical Coding Program Office; and (3) implementation of a joint billing solution to improve the medical treatment facilities' ability to bill and collect.

DOD has initiated a process for assessing the personnel requirements of the DHA, but it continues to operate without information that would allow it to determine the effect of the DHA's establishment on the number of headquarters and administrative personnel in the MHS. The DHA has started assessing its personnel requirements, but this analysis will not be completed by the DHA's proposed full operating capability in October 2015, and it is not comprehensive: it does not include a complete and detailed timeline and does not address key issues, including the final size of the agency and its workforce mix. In addition, DOD does not have the information it would need to determine whether creating the DHA resulted in an increase or decrease in the number of MHS headquarters and administrative personnel. As we reported in November 2013, determining the impact of the DHA's creation on MHS personnel levels necessitates finalized personnel requirements for the DHA and a baseline estimate of MHS headquarters and administrative personnel prior to the establishment of the DHA. However, the DHA has just begun its personnel requirements assessment, and DOD does not have a baseline estimate of the MHS administrative and headquarters personnel levels that existed before the DHA was established.

DOD has initiated the process of assessing personnel requirements for the DHA, but this analysis will not be finished by the DHA's proposed full operating capability in October 2015 and does not include a finalized timeline for its completion. According to DHA officials, the assessment will not be completed until September 2016. The National Defense Authorization Act for Fiscal Year 2013 mandated that DOD include personnel requirements for the DHA in its series of three reports to Congress on the DHA's implementation; however, we testified in February 2014 that DOD's reports did not include DHA personnel requirements.
The DHA Manpower and Organization Division, which is responsible for the DHA personnel requirements assessment, was not established until July 2014, 9 months after the DHA's creation in October 2013. DHA officials said that initial staffing for the office took an additional 6 months and that the office is now almost completely staffed, but that the time necessary to create an operational, fully functioning office contributed to the assessment's delay. According to DHA officials, the TRICARE Management Activity—the DHA's predecessor and the only organization that was brought into the DHA in its entirety—did not have personnel requirements. DHA officials stated that they are conducting an assessment of the requirements needed to perform the functions of the former TRICARE Management Activity, which the DHA absorbed. Similarly, in 2010, we reported on the services' medical personnel requirements processes in military treatment facilities, finding that these processes were not validated and verifiable and that the services did not centrally manage civilian personnel requirements.

DHA officials stated that the requirements assessment process includes personnel specialists analyzing each part of the agency, documenting its functions, and determining how many personnel hours are needed to execute those functions. DOD Directive 1100.4, Guidance for Manpower Management, states that resources are to be programmed in accordance with validated personnel requirements. DHA officials said that they are currently developing procedures for the assessment of DHA personnel requirements; however, as of July 2015, officials stated that this document had not been fully developed, and they were unable to provide specific information. In addition, the DHA provided a tentative timeline, as of June 2015, for completion of its requirements assessment process, but this timeline was not complete or finalized. Further, DHA officials stated that they will not complete the personnel requirements assessment when the DHA reaches full operational capability on October 1, 2015.

The DHA created a tentative personnel requirements assessment timeline that provides estimated personnel days for assessing the different parts of the agency as well as estimated dates for beginning and completing portions of the assessment. According to this timeline, the DHA expects to complete the assessment of its Health Information Technology and Business Support directorates by October 9, 2015. However, tentative start and end dates for assessing other parts of the DHA, including the National Capital Region Medical Directorate, the Medical Education and Training Directorate, Defense Health Agency Support, and the Research, Development and Acquisition Directorate, are "To Be Determined." DHA officials stated that they expect to complete the assessment of the DHA's personnel requirements by fiscal year 2017, but the timeline does not include established dates for completion of the entire assessment or for all interim steps of the process. Our reports on performance planning indicate that timelines with milestones and interim steps can be used to show progress toward implementing efforts or to make adjustments to those efforts when necessary.
In 2013, we found that DOD had not consistently identified milestones for all activities between initial operating capability and full operational capability for each of the goals of its reform, and we recommended that DOD develop and present to Congress a timeline with interim milestones for all reform goals that could be used to show implementation progress. DOD concurred with the recommendation but has not yet implemented it. The timeline focused on implementation of DOD's shared services and other reform objectives but did not specifically address development of personnel requirements for the DHA. By developing a timeline for the DHA's personnel requirements assessment that includes milestones and interim steps for determining those requirements, DOD could provide Congress with important information concerning the size and scope of the DHA that Congress requested 2 years ago.

DOD's ongoing assessment does not address key issues that are important to the size and workforce mix of the DHA. Specifically, according to DHA officials, their personnel requirements assessment does not account for the possible addition of other missions or organizations to the DHA, and DOD has not made a decision as to whether military personnel will be permanently assigned to the DHA. The DHA has already incorporated a number of components for which a military service formerly served as executive agent, such as the Joint Medical Executive Skills Institute under the Army, and various DHA and service officials said that the DHA could incorporate additional missions or organizations in the future. DHA officials stated that they expect adjustments to personnel requirements with any added mission. However, the current assessment process does not specifically address such potential changes. Our work on effective strategic workforce planning has found that an agency's personnel planning should consider not only the needs of its current workforce but also those of its future workforce. Should DOD not take into account the additional skills and competencies required to meet future missions, its requirements assessment will be incomplete.

DOD's assessment also does not address aspects of workforce mix—the proportion of military, civilian, and contractor personnel that perform DOD's functions. DOD Directive 1100.4, Guidance for Manpower Management, instructs that all three segments of the workforce should be considered when determining how DOD's work should be performed. Further, 10 U.S.C. § 115b requires DOD to address in its strategic workforce plan, among other things, the appropriate mix of military, civilian, and contractor personnel capabilities. In 2012, we reported that DOD's plan submissions had not addressed this requirement. DHA officials stated that their personnel requirements assessment will include an analysis of whether work should be performed by military, civilian, or contractor personnel. However, officials also stated that the ongoing personnel assessment will not determine whether the DHA will assume full responsibility for military personnel working at the DHA or whether servicemembers' respective military services will retain those functions. The determination of military personnel status would also affect the workforce mix of military, civilian, and contractor personnel within the agency.
Officials stated that if responsibility for military personnel were transferred to the DHA, meaning the military billets and associated funding were moved to the DHA budget, then the responsibility for related human capital functions would transfer as well, and the DHA's human capital office would require an increase in size to provide this support. Until a decision is made, officials told us, the DHA and the services plan to use a specific code to identify assigned military personnel working at the DHA in order to track their number. Officials further stated that they did not have an estimated timeframe for the decision about military personnel.

If the DHA absorbs additional agencies or missions, or if full responsibility for military personnel transfers to the DHA, DOD will need to reassess requirements in light of these changes. However, DOD does not have a plan for addressing these potential changes or periodically reassessing the personnel needs of the DHA. DOD Directive 1100.4, Guidance for Manpower Management, states that personnel management shall be, among other things, adaptive to program changes; that existing policies, procedures, and structures should be periodically evaluated for efficient and effective use of resources; and that long-range strategies and workforce forecasts should be developed to implement major changes. The DHA published a Directive-Type Memorandum on its personnel planning process in January 2014, but this guidance does not specifically address the need for a plan to reassess and revalidate personnel requirements on a regular, recurring basis. Without addressing the need to periodically reassess the DHA's personnel requirements, DOD cannot effectively plan for, manage, and adjust to future programmatic and organizational changes that may occur within the MHS or the DHA.

DOD decided to implement the DHA, in part, on the assumption that it would result in reduced personnel costs of $46.5 million annually in MHS administrative and headquarters organizations. However, as we reported in November 2013, DOD, citing internal disagreement over the report's personnel estimate, identified a more conservative goal of not increasing overall personnel numbers in MHS headquarters through the establishment of the DHA. We recommended that DOD develop a baseline estimate of headquarters personnel and an estimate of such personnel needs at full operating capability. DOD concurred with our recommendation and stated that it would provide this information in its third submission to Congress on the implementation of the DHA, but as we noted in our February 2014 testimony, DOD did not do so. By comparing finalized personnel levels with a baseline of MHS personnel levels before the DHA's creation, DOD could demonstrate the effect of the DHA's establishment on the size of MHS administrative and headquarters personnel levels. However, DOD has neither finalized personnel levels for the DHA nor completed a baseline assessment of MHS personnel levels. During the course of this review, officials from the Office of the Assistant Secretary of Defense (Health Affairs) stated that they do not plan to identify a historical baseline estimate of MHS headquarters and administrative personnel levels prior to the establishment of the DHA, because, in their view, it would prove impossible to establish such a baseline estimate retroactively.
However, we continue to believe that, as we recommended in 2013, a pre-DHA baseline estimate would provide decision makers with a transparent and complete picture of the impact of the DHA's creation on MHS headquarters personnel levels. Our prior work on strategic human capital management states that workforce planning efforts linked to strategic goals and objectives can enable an agency to remain aware of and be prepared for its current and future needs as an organization, such as the size of its workforce. By developing a baseline, DOD would be able to demonstrate whether MHS administrative and headquarters organizations are larger or smaller since the establishment of the DHA. Without reliable baseline and requirements personnel data, decision makers at DOD and in Congress do not have comprehensive information about previous and current personnel levels and needs for the DHA and the MHS. As a result, they do not know whether the DHA has had an effect on personnel costs and cannot make fully informed decisions about future needs and long-term goals.

Congressional decision makers have expressed concern regarding the resources devoted to the Office of the Secretary of Defense, the Joint Staff, and the military services' secretariats and military staff, and we have conducted a number of reviews in response to such concerns. We previously reported that DOD has experienced challenges in accounting for headquarters resources, including concerns with the completeness and reliability of data on its headquarters personnel, weaknesses in DOD's process for sizing its geographic combatant commands, difficulty accounting for the resources being devoted to management headquarters to use as a starting point for tracking reductions, and the absence of a systematic requirements-determination process. Within DOD, in July 2013, the Secretary of Defense directed a 20-percent cut in management headquarters spending throughout the department, including spending within headquarters organizations such as the Office of the Secretary of Defense, the Joint Staff, and the military services' secretariats and military staff.

DOD can take steps that would positively contribute to the development of transparent and comprehensive personnel information, including associated costs, for decision makers now and in the future. For example, an annual budget exhibit in the Congressional Budget Justification delineating the personnel costs allocated to MHS administrative and headquarters organizations and the costs allocated to military treatment facilities would provide information relevant to decision makers' concerns about the cost of administering the MHS. Such a budget exhibit would be in line with federal accounting standards that are aimed at providing relevant and reliable cost information to assist Congress and executives in making decisions about allocating federal resources.

DOD has developed a business case analysis approach to help achieve cost savings and has applied this approach to eight of its ten shared services; DOD has not, however, developed comprehensive business case analyses for the remaining two shared services—Public Health and Medical Education and Training. For eight of its shared services, DOD has generally implemented recommendations we made in 2013 by identifying discrete costs and cost savings for each of its shared services' product lines and identifying the major types of implementation costs.
Regarding the remaining two shared services, the DHA has not identified, as best practices call for, a problem that those two shared services would be intended to address. Identifying the problem to be addressed is the first step in developing a business case analysis. Specifically, the DHA has not identified any redundant functions to be consolidated in the Public Health and Medical Education and Training areas to justify the proposed transfer of responsibility for functions in those areas from a military-service-level entity to a DOD-level one.

The National Defense Authorization Act for Fiscal Year 2013 required DOD to develop business case analyses for its shared service proposals as part of its submissions on its plans for the implementation of the DHA, including, among other things, the purpose of each shared service and the anticipated cost savings. According to the implementation plan that the DHA submitted, the DHA would establish ten shared services to achieve cost savings. In November 2013, we highlighted concerns regarding the basis of the cost savings estimates and the potential impact of implementation costs on the DHA's shared service projects. Since then, the DHA has developed a business case analysis approach for eight of its shared services that generally reflects best practices. We based our assessment on discussions with officials from the various shared services and their quarterly briefings to the Medical Deputies Action Group, which includes the Deputy Surgeons General of each service and representatives from Health Affairs and the DHA. In general, these briefings indicate that the business cases for Budget and Resource Management; Contracting and Procurement; Facility Planning; Health Information Technology; TRICARE Health Plan; Medical Logistics; Pharmacy; and Medical Research, Development, and Acquisition generally reflect the characteristics of business case analyses outlined in GAO's Business Process Reengineering Assessment Guide. The guide identifies a number of best practices that help make the business case for change. In addition, the guide recommends the use of an investment review process to evaluate the business case and decide whether to proceed with proposed changes.

In November 2013, we reported that, while the information in DOD's implementation plans generally reflected key characteristics of business case analyses, DOD did not present sufficient information to explain the basis for its cost-savings estimates. Specifically, we reported that DOD did not include detailed quantitative analysis regarding the sources of its cost-savings estimates or provide a basis for or an explanation of key assumptions and rationales used in estimating such savings. For example, we noted that while the Medical Logistics shared service is composed of three product lines, DOD presented one net savings estimate for Medical Logistics rather than estimates for each of its three product lines. We noted that a business case should include detailed qualitative and quantitative analysis in support of selecting and implementing the new process, including a statement regarding benefits, costs, and risks. We recommended that DOD provide a more thorough explanation of the potential sources of cost savings from the implementation of its shared services, and DOD concurred. During our current review, we found that in its quarterly reports to the Medical Deputies Action Group, DOD provided additional information on each of the eight shared services.
For example, the Medical Logistics shared service team now identifies discrete costs and cost savings for each product line—Supply Management, Health Care Technology, and MEDLOG Services (Housekeeping). After accounting for implementation costs, the net savings estimates for the product lines within this shared service from fiscal years 2014 through 2019 range from $5.96 million for services to $197.86 million for supplies. By differentiating between these product lines, decision makers are able to obtain a sense of the relative size and scope of each proposed change.

In addition, in 2013 we reported on the effect of potential increases in implementation costs on net cost savings. We noted that DOD's past experience in managing the implementation of large-scale projects, particularly those involving investments in information technology, illustrates such risk. According to the guide, business case analyses should demonstrate the sensitivity of the outcome to changes in assumptions, with a focus on the dominant benefit and cost elements and the areas of greatest uncertainty. We recommended that DOD monitor implementation costs to assess whether the shared services are on track to achieve projected net cost savings or whether corrective actions are needed, and DOD concurred. During our current review, we found that briefings to the Medical Deputies Action Group now identify the major types of implementation costs where relevant, or otherwise address their potential impact. For example, information technology costs are identified as one primary type of cost for the Health Information Technology and Medical Logistics shared services, while contract costs are identified for the Budget and Resource Management, Medical Logistics, and Health Information Technology shared services. By identifying the major types of implementation costs, decision makers are better able to gauge the sensitivity of areas of uncertainty as they make decisions concerning future investments in shared services.

The DHA has also developed and implemented an investment review process to assess the business cases for shared services on their merits. Our Business Process Reengineering Assessment Guide states that use of an agency's investment review process to evaluate a business case and decide whether to proceed with a given change is a vital aspect of business process reengineering. The shared services' cost and savings estimates were reviewed by the Council of Cost Assessment and Program Evaluation bodies, a group of cost assessment subject matter experts from the services and the DHA. Shared service teams presented the basis and reasoning for the costs and savings developed for each product line, and the cost assessment experts commented on the proposals. For example, when reviewing the business case analysis for the Health Information Technology shared service, the Council of Cost Assessment and Program Evaluation's report stated that the analysis was based on an estimated reduction from current spending derived from, among other things, industry benchmarks and estimates from subject matter experts. Each representative registered agreement or objection to the estimates and voted to express concurrence or nonconcurrence. Ultimately, this process includes review by the Medical Deputies Action Group, which includes the Deputy Surgeons General and the Deputy Director of the DHA, followed by the Senior Military Medical Action Council, which includes the Surgeons General and the Director of the DHA and is chaired by the Assistant Secretary of Defense for Health Affairs.
By sustaining its current approach to shared services, DOD can help ensure it has a framework for achieving cost savings. The DHA has proposed transferring existing public health and medical education and training organizations from the services to the DHA; however, the DHA has not developed a business case explaining how doing so would consolidate activities to eliminate redundancies and result in cost savings. During the process of planning for MHS governance reform, the Deputy Secretary of Defense noted in a March 2012 memorandum that this process should "realize savings in the MHS through the adoption of common clinical and business processes and the consolidation and standardization of various shared services." This focus on achieving savings through consolidation differentiates the objective of establishing shared services from the six other objectives outlined in DOD's plans for the implementation of the DHA. However, in the case of both the Public Health and the Medical Education and Training shared services, the transfer of responsibility for military-service-level organizations to a defense agency without consolidation of programs runs contrary to the stated purpose of shared services. While these two shared services propose some efficiencies in their operations, they either overlap with other shared services or propose changes that could have been implemented without a transfer of responsibility to the DHA.

The Public Health shared service consists, in part, of the adoption of a number of Army public health agencies into the DHA. One proposed efficiency initiative of this shared service, the consolidation of several redundant databases, with an estimated net savings of about $1 million between fiscal years 2014 and 2019 after accounting for implementation costs, partially overlaps with the responsibilities of the Health Information Technology shared service. In addition, in briefings to the Medical Deputies Action Group, the Public Health shared service team stated that the "vision for health surveillance may not be fully realized in terms of a true shared service." Further, DHA officials stated that cost savings was not a major goal of the Public Health shared service, contrary to DOD's stated intention for shared services.

Similarly, the Medical Education and Training shared service adopts a number of existing organizations, and the additional small changes it has proposed overlap with other shared services. In 2014, we reported on this shared service and highlighted problems regarding its rationale. For example, we noted that the product line proposals concerning modeling and simulation and online learning overlap with the DHA's Contracting and Procurement and Health Information Technology shared services. Specifically, while cost savings for modeling and simulation are allocated to the Education and Training Directorate, implementation costs are to be incurred by the Contracting and Procurement shared service. In addition, the savings for the online learning project are found within the Health Information Technology shared service portfolio.

We have previously highlighted the challenges DOD faces when it does not fully analyze potential changes to business processes. For example, in 2007, we reported that DOD did not comprehensively analyze the costs, benefits, or risks of any of the four options for governance reform under consideration at the time.
Further, DOD developed and decided to implement the fourth option, a compromise among the military departments and the Office of the Secretary of Defense (Health Affairs) that was similar to the current shared services concept, but it did not develop a supporting business case analysis. When we later reviewed implementation of this compromise option, we found that DOD had implemented only those steps that related to the Base Realignment and Closure process, while others had not been sufficiently addressed. The Business Process Reengineering Assessment Guide states that a business case analysis begins with (1) measuring performance and identifying problems in meeting mission goals, which are then addressed through (2) the development and selection of a new process. The new process is to include a description of estimated benefits, costs, and risks. However, in developing the Public Health and Medical Education and Training shared services, the DHA did not address the first step of the business case analysis process. Without first identifying what redundant functions can be consolidated to achieve efficiencies, the reason for creating these shared services will remain unclear. Some shared services, such as the TRICARE Health Plan shared service, entail more efficient provision of previously centralized services provided by the former TRICARE Management Activity, an agency that was absorbed into the DHA. However, neither Public Health nor Medical Education and Training was previously centralized. Therefore, the central rationale of consolidation is absent from the approach to Public Health and Medical Education and Training, and, as a result, their purpose is inconsistent with the spirit of shared services. DOD could articulate an alternative reason for the transfer of responsibility for these services to the DHA. However, absent such an alternative explanation, the rationale for transferring responsibility for the functions of these services to the DHA remains unclear. The DHA has made progress in developing measures to assess the progress of its ten shared services toward achieving their respective goals; however, these measures still do not fully demonstrate key elements that can contribute to success in assessing performance. In November 2013, we found that the performance measures DOD provided in its 2013 congressionally required MHS reform implementation plans did not fully exhibit attributes that can help agencies determine whether they are achieving their goals, such as accompanying explanations, definitions, quantifiable targets, or baselines. To provide decision makers with more complete information on the planned implementation, management, and oversight of the DHA, we recommended that DOD develop and present to Congress performance measures that are clear, quantifiable, objective, and include a baseline assessment of current performance. Through our prior work on performance measurement, we have identified several important attributes of performance measures (see table 1). While these attributes may not cover all the attributes of successful performance measures, we believe they address important areas. DOD provided us performance measures for the DHA shared services in April 2015. To assess the extent of their development, we compared the performance measures with the ten key attributes of successful performance measures. The results of our analysis are depicted in table 2.
The table listed each shared service and product line (if applicable) assessed, including the Budget and Resource Management shared service's Joint Billing Solution (Post-ABACUS) product line. As noted in the accompanying table notes, the Medical Education and Training shared service consists of three product lines: (1) Professional Development, Sustainment, and Program Management; (2) Academic Review and Policy Oversight; and (3) Administrative Support Functions. However, the initial measures for this shared service were not aligned by these product lines; instead, they were aligned by overall shared service deliverables. In addition, while several Public Health product lines were integrated into the Public Health shared service at initial operating capability on September 30, 2014, the respective business case analysis and business process reengineering plans were not conducted. No metrics had been developed for these areas as of April 24, 2015; thus, we were unable to assess performance measures for these product lines. A fourth product line, Health Surveillance, is pending transfer to the Defense Health Agency in July 2015. As such, we did not review measures for that product line, either.

Since our November 2013 review, DHA has made progress in developing performance measures to assess its shared services. Specifically, all ten shared services have measures that demonstrate at least some of these attributes; however, collectively, they do not demonstrate all of the attributes, as we had previously recommended. In our analysis of the ten shared services and their associated product lines, we made the following observations:

Linkage. We found that the measures for all of the shared service product lines we assessed addressed the attribute of linkage. A measure demonstrates linkage when it is aligned with division and agency-wide goals and mission and is clearly communicated throughout the organization. DHA officials have communicated the shared service performance measures throughout the organization through the 2013 implementation plan submissions and through regular updates to leadership, and all of the measures were aligned with shared service goals as defined in those submissions and updates.

Clarity. We found that the measures for 13 of the 18 shared service product lines we assessed addressed the attribute of clarity, while 4 partially demonstrated this attribute and 1 did not address clarity. A measure achieves clarity when it is clearly stated and the name and definition are consistent with the methodology used for calculating the measure. For instance, we found that although the names and definitions for both measures under the Contracting and Procurement shared service's Acquisition Planning and Program Management product line were consistent and clearly stated, the methodology for those measures was under review, preventing a comparison of the names and definitions with the methodology. We have previously reported that a measure that is not clearly stated can confuse users and cause managers or other stakeholders to think that performance was better or worse than it actually was.

Measurable Target. We found that the measures for 8 of the 18 shared service product lines we assessed addressed the attribute of measurable targets, while 9 partially demonstrated this attribute and 1 did not have measurable targets. Where appropriate, performance goals and measures should have quantifiable, numerical targets or other measurable values. Some of DOD's measures, however, lacked such targets.
For instance, DHA officials within the Joint Billing Solution product line of the Budget and Resource Management shared service have proposed 10 measures for various functions, such as "Billing Turnaround" and "Revenue Collected," but none of these measures had targets.

Objectivity. We found that the measures for 12 of the 18 shared service product lines we assessed addressed the attribute of objectivity, while 5 partially demonstrated this attribute and 1 did not address objectivity. We have previously reported that to be objective, measures should indicate specifically what is to be observed, in which population or conditions, and in what time frame, and be free of opinion and judgment. For instance, within the MEDLOG Services (Housekeeping) product line of the Medical Logistics shared service, one of the measures is "Frequency of Complaints," but it is not specific as to how complaints will be evaluated or remedied, or what standard or criteria will be used to assess the measure.

Reliability. We found that the measures for 7 of the 18 shared service product lines we assessed addressed the attribute of reliability, while 9 partially demonstrated this attribute and 2 did not address reliability. Reliability refers to whether a measure is designed to collect data or calculate results such that the measure would be likely to produce the same results if applied repeatedly to the same situation. We have previously reported that if errors occur in the collection of data or the calculation of results, they may affect conclusions about the extent to which performance goals have been achieved. Officials provided information indicating specific data quality control processes for several measures, such as the frequency with which data are reviewed and whether the reviews are automated or done manually; however, other measures we reviewed did not have any data quality control processes specified, or those processes were still in development. For instance, within the Budget and Resource Management shared service, officials indicated that a technical solutions working group had been established to assess the data needs and to determine applicable data quality control processes for each of the product line performance measures, but they did not indicate whether any quality control processes had been developed.

Baseline and Trend Data. We found that 3 of the 18 shared service product lines we assessed had baselines for each of their measures, addressing this attribute, while 10 had baselines for only some of their measures and 5 did not have baselines for any of their measures. Several measures that did not have baselines had expected dates by which the baselines would be available. For instance, the baselines for all of the metrics within the Contracting and Procurement shared service were in the process of being re-established, but officials anticipated that these baselines would be developed between fiscal year 2018 and fiscal year 2019. Without adequate baseline data, goals may not permit subsequent comparison with actual performance or allow determination of whether net savings have been achieved.

Core Program Activities. We found that the measures for 4 of the 18 shared service product lines we assessed addressed the attribute of core program activities, while 12 partially demonstrated this attribute and 2 did not address core program activities.
Several of the sets of measures did not address all program activities that would be expected based on the descriptions of the shared services provided by DOD in its 2013 submissions to Congress. For example, within the Supply Management product line of the Medical Logistics shared service, the two proposed measures were "Use of Standardized Items" and "Use of eCommerce by All Enterprise." These measures align with the stated product line objective to "increase usage of standardized consumable supplies across the services." However, another product line objective is to "increase visibility and controls on purchase card usage and other local contracts," which did not appear to be addressed by the proposed measures. We have previously reported that core program activities are the activities that an entity is expected to perform to support the intent of the program, and that performance measures should be scoped to evaluate those activities.

Limited Overlap. We found that the measures for 16 of the 18 shared service product lines we assessed demonstrated limited overlap, while 2 had the potential for overlap. We have reported that each performance measure in a set should provide additional information beyond that provided by other measures. We found that, within the various product lines of the Budget and Resource Management shared service, the potential for overlap existed because the definitions were unclear as to whether the measures would provide new information. Specifically, two of the measures are "Percent of transactions including valid Budget Activity, Budget Sub Activity, and Budget Line Item data" and "Percent of transactions captured" in both legacy and up-to-date systems that "include valid Budget Activity, Budget Sub Activity and Budget Line Item data." As written, it was unclear whether data for the legacy and up-to-date systems are captured in both measures. When an agency has overlapping measures, it can create unnecessary or duplicate information, which does not benefit program management.

Balance. We found that the measures for 8 of the 18 shared service product lines we assessed addressed the attribute of balance, while 5 partially demonstrated this attribute and 5 did not address balance. We have previously reported that balance exists when a set of measures ensures that an organization's various priorities are covered. Some of the sets of measures are not balanced. For instance, the measures we reviewed for the Contracting and Procurement shared service focused only on cost savings attributed to the product line initiatives; they did not address other program activities, such as variation reduction, redundancy elimination, and ensuring the timely completion of contractor performance evaluations. Officials explained that the initial metrics focused on savings, but that they plan to develop and track other metrics after initial operating capability in a phased approach. Performance measurement efforts that lack balance overemphasize certain aspects of performance at the expense of others and may keep DOD from understanding the effectiveness of its overall mission and goals.

Government-wide Priorities. We found that the measures for 10 of the 18 shared service product lines we assessed addressed the attribute of government-wide priorities, while 8 partially demonstrated this attribute. We have previously reported that agencies should develop a range of related performance measures to address government-wide priorities, such as quality, timeliness, efficiency, cost of service, and outcome.
For instance, within the Medical Logistics shared service, the MEDLOG Services (Housekeeping) product line has a measure, "Cost per Square Foot," which addresses cost. Shared service officials are also developing a measure of the "Frequency of Complaints," which will address quality of service when completed. When measures do not cover government-wide priorities, managers may not be able to balance priorities to ensure the overall success of the program. A senior DHA official noted that the development of metrics for the DHA shared services continues as a work in progress, with the more mature shared services having made more progress on their specific measures, and further noted that officials are continuing to evolve these measures through the shared services work groups. DOD implemented the ten DHA shared services in a phased approach between October 1, 2013, and September 30, 2014. We found that the maturity of the shared services' metrics is determined in part by the date they reached initial operating capability and in part by the extent to which those services were already consolidated prior to incorporation into the DHA:

Already Joint Areas: According to DOD, 3 of the 10 shared services represent areas that were already joint efforts prior to the DHA. These shared services had either previously been executed by the former TRICARE Management Activity, including the TRICARE Health Plan and Pharmacy shared services, or were led by a single service and managed through a series of joint program committees, such as the Medical Research and Development shared service. According to DOD, these shared services required the development of new measures related to their respective business process reengineering plan initiatives and improvements, to be developed and included in an enterprise-level "dashboard" set of metrics.

Newly Consolidated Areas with Mature Measurement Capabilities: According to DOD, 5 of the 10 shared services already had mature measurement capabilities within their respective communities prior to their incorporation as a shared service into the DHA and required the development of some enterprise-wide standard measures to be included in enhanced dashboards. The shared services in this category include Facility Planning, Medical Logistics, Health Information Technology, Budget and Resource Management, and Contracting and Procurement.

Newly Consolidated Areas without Mature Measurement Capabilities: The final two shared services to reach initial operating capability, Medical Education and Training and Public Health, are newly consolidated areas that did not have mature measurement capabilities prior to incorporation into the DHA. Within both of these shared services, DOD developed preliminary metrics prior to initial operating capability to aid leadership, with continued development and implementation of measures to occur after initial operating capability. As these are the least mature shared services, we identified the following issues with their performance measures: DHA officials told us that the Medical Education and Training shared service represents the first instance of Office of the Secretary of Defense-level oversight in that area, and initial performance measures in this area were not developed until April 2015.
While we found that the measures address six of the attributes and partially address four attributes, as noted previously, the Education and Training shared service overlaps with other shared services, including Health Information Technology and Contracting and Procurement, and does not directly address the consolidation of education programs. For instance, four of the five measures address cost savings related to the implementation of a single learning management system across the MHS, savings that are being applied within the Health Information Technology shared service. Additionally, as we reported in July 2014, the Medical Education and Training shared service consists of three product lines, involving (1) management of professional development, sustainment, and related programs; (2) academic review and policy oversight functions, including management of online courses and modeling and simulation programs; and (3) management of academic and administrative support functions. However, the initial measures provided by DOD were not aligned by product line, but by overall shared service deliverables. Of the twelve proposed deliverables, the measures addressed only two. Within Public Health, officials told us that deployment health data elements and reporting requirements were standardized prior to initial operating capability; however, the services varied in how they captured and exported the data. For example, the Deployment Health product line has several measures for assessing the percentage of the total force that is medically ready; however, these metrics are tracked and reported at the service level. While officials noted that a working group including the three military services developed these metrics, the services may differ in how they collect the data. Further, officials have not yet developed joint metrics to assess their progress in meeting the goals of the shared service. Shared service officials have begun to develop additional metrics, such as "Business Process Reengineering Savings Achieved" and "Completion of Periodic Health Assessments"; however, these metrics do not yet have enough information to determine the extent to which they address the key attributes, and officials do not anticipate completing them until September 2015 and March 2016, respectively. Further, Public Health officials have not yet conducted business case analyses and business process reengineering plans for the remaining Public Health product lines, precluding the development of metrics for those areas; however, officials have developed a timeline outlining their plans to conduct each of these assessments. DOD's further development of performance measures for the ten DHA shared services shows progress toward addressing our prior recommendation: to develop and present to Congress performance measures that fully exhibit the key attributes identified in our prior work. However, because these measures continue to lack key elements that can contribute to success in assessing performance, we continue to believe that our prior recommendation is valid and should be completely implemented for all the objectives of the MHS reform. DOD officials have stated that, among other criteria, the shared services will be considered to have reached full operating capability when they have developed performance measures to help manage actions, report progress, identify gaps, and identify areas for improvement.
By developing measures that reflect these attributes, DOD can help ensure that decision makers have the information needed to assess the DHA shared services' efforts, to measure progress toward achieving their stated goals, and to determine whether, as part of DOD's overall medical governance reform effort, they are on track to achieve desired results. In general, DOD has made progress in addressing concerns we highlighted in our two most recent reviews of its implementation process for the DHA. However, while the DHA plans to reach its formal full operating capability in October 2015, some issues will remain unresolved into late 2016. As such, the DHA may not be fully operational in practical terms until it definitively addresses these issues. As the personnel requirements assessment process for the DHA moves forward, it does so without a detailed timeline, a clear understanding of the size and scope of the DHA's mission, a baseline to measure against, or a determination of the final status of military personnel. Further, the DHA has made significant improvements to its approach to achieving cost savings through shared services. In general, the DHA's approach provides more detail, recognizes the impact of changing events, and reflects a review of investments for eight of the ten shared services. However, the DHA has not sufficiently explained the role of its Public Health and Medical Education and Training shared services, as these do not currently reflect the primary goal of achieving cost savings through consolidation. Finally, while the DHA has made improvements to its performance measures for shared services, some aspects of these metrics are still evolving, and our analysis identified a number of instances in which DHA's measures do not contain key attributes of successful performance measures. As we noted in our February 2014 testimony, the successful implementation of the DHA will require committed senior leadership to sustain the momentum created by the current reform effort. However, senior leaders need appropriate information to make decisions and guide the reform. Given that the DHA's evolution will continue far beyond its formal full operating capability, DOD leaders will need to continue to strengthen their framework for managing this major reform to the MHS with attention to these areas.
To provide decision makers with appropriate and more complete information on the continuing implementation, management, and oversight of the DHA, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to take the following five actions:

Develop a timeline for completion of the personnel requirements assessment that includes milestones and interim steps;

Develop a comprehensive requirements assessment process that accounts for needed future skills through the consideration of potential organizational changes and helps ensure appropriate consideration of workforce composition through the determination of the final status of military personnel within the DHA;

Develop a plan for reassessing and revalidating personnel requirements as the missions and needs of the DHA evolve over time;

Develop information concerning the number and cost of administrative and headquarters personnel within the MHS and provide this information as an annual exhibit in the President's budget; and

Determine the future of the Public Health and Medical Education and Training shared services by either identifying common functions to consolidate to achieve cost savings or developing a justification for the transfer of these functions from the military services to the DHA that is not premised on cost savings.

In written comments provided in response to our draft report, DOD concurred with four of our five recommendations and partially concurred with the fifth. DOD's written comments are reprinted in appendix II of this report. In concurring with our recommendation that DOD develop a timeline for completion of the personnel requirements assessment that includes milestones and interim steps, DOD stated that it had developed a project plan to track progress in this area. This action is a positive development in DOD's progress toward developing staffing requirements for the DHA. At the time of our review, DOD provided a timeline with tentative dates for completion of personnel requirements assessments for parts of the DHA. In its response, DOD provided scheduled completion dates for assessments of the National Capital Region Medical Directorate, DHA Special Staff, Medical Education and Training Directorate, and the Research, Development and Acquisition Directorate. However, as of June 2015, assessment dates for these divisions, with the exception of the DHA Special Staff, had been listed as "To Be Determined." In addition, DOD's response does not address one of the DHA's divisions, Defense Health Agency Support, which was similarly listed as "To Be Determined" in DOD's initial timeline. Establishing a milestone for this division, along with the milestones set for the other DHA divisions, will help DOD to complete its assessment of staffing requirements. DOD's response states that it will complete this assessment by the end of fiscal year 2016. DOD concurred with our recommendation to develop a comprehensive requirements assessment process that accounts for needed future skills through consideration of potential organizational changes and helps ensure appropriate consideration of workforce composition through determination of the final status of military personnel within the DHA. DOD stated that it had drafted a "desktop reference" document to guide this process. We noted in our report that, as of July 2015, DOD stated that this document had not yet been fully developed.
In its comments, DOD stated that this document was not yet finalized, but that it had implemented the associated processes. We look forward to the finalization of this document and are encouraged that DOD stated it will take into account the current and future needs of the agency, including required skills and workforce mix. DOD partially concurred with our recommendation that it develop a plan for reassessing and revalidating personnel requirements as the missions and needs of the DHA evolve over time. In its response, DOD stated that it had issued temporary guidance, expiring in January 2016, which established processes for manpower and organization changes to the DHA. However, as discussed in our report, this guidance does not specifically address the need for a plan to reassess and revalidate personnel requirements on a regular, recurring basis. Therefore, we continue to believe that our recommendation is valid. Further, DOD noted in its comments that it concurs that an Administrative Instruction detailing the ongoing process for personnel requirements determination and management within the DHA should be formalized before the current guidance expires. In concurring with our recommendation that DOD develop information concerning the number and cost of administrative and headquarters personnel within the MHS and provide this information as an annual exhibit in the President's budget, DOD stated that such an effort is currently underway as part of a larger DOD initiative to better define and account for management headquarters functions. However, DOD noted that it does not agree with the inclusion of administrative personnel in this assessment, given the lack of a department-wide definition of what constitutes such personnel. For purposes of this report, we have defined administrative personnel to include those personnel, other than headquarters personnel, who are assigned to MHS organizations, including the DHA, and who do not directly provide health care services within DOD military treatment facilities. This includes personnel performing DHA shared services activities. We believe the inclusion of administrative personnel as defined in this report is crucial to accurately determining the number and cost of personnel serving within the MHS. As a result, we continue to recommend that DOD include the number and costs of administrative personnel in combination with similar information on headquarters personnel within the MHS. DOD concurred with our recommendation that it determine the future of the Public Health and Medical Education and Training shared services by either identifying common functions to consolidate to achieve cost savings or developing a justification for the transfer of these functions from the military services to the DHA that is not premised on cost savings. In its response, DOD stated that it plans to revisit the application of the business case analysis process to this shared service, including the development of a "recommended future state." Further, DOD stated that it plans to employ its governance process to resolve issues related to responsibilities and authorities within its Medical Education and Training shared service to identify opportunities to reduce and eliminate redundancies in this area. DOD also highlighted the status of its "eLearning" and Modeling and Simulation product lines. We are encouraged by the steps outlined in DOD's response.
However, as we noted in our report, these product lines significantly overlap with the Health Information Technology and Contracting and Procurement shared services, with some associated costs and cost savings attributed to those shared services. As a result, the reason for creating this shared service remains unclear. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Deputy Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense (Health Affairs), the Defense Health Agency Director, the Surgeon General of the Air Force, the Surgeon General of the Army, and the Surgeon General of the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. For many years, we and other organizations have highlighted long-standing issues surrounding the Department of Defense's (DOD) Military Health System (MHS) and DOD's efforts to reorganize its governance structure. Over the years, many efforts to control the increase in health care costs led to a long series of studies addressing the governance structure of the MHS and recommending major organizational realignments. Recently, as a result of the report of the 2011 Task Force on Military Health System Governance, the department began implementation planning for the creation of a new Defense Health Agency (DHA). Subsequently, the National Defense Authorization Act for Fiscal Year 2013 required DOD to submit its plans for implementing its reform effort in three submissions (the first in March 2013, the second in June 2013, and the third in September 2013) and mandated that GAO review DOD's first two submissions. We examined the March and June 2013 submissions, as well as an August 2013 supplemental report to Congress on DOD's plan to implement the reform effort, and reported the results in November 2013. In February 2014, we examined DOD's third and final reform plan, which was submitted to Congress in November 2013. In reviewing the submissions, we identified several areas in DOD's implementation plan where sustained senior leadership attention is needed to help ensure the reform achieves its goals, including determining personnel requirements, clarifying cost estimates, and fully developing performance measures. The DHA officially began operations in October 2013, and DOD anticipates that the organization will be fully operational in October 2015. See figure 4 for a timeline of our work related to DOD's MHS governance reform and key DOD documents. For more detailed information on our past recommendations to DOD on MHS governance issues and the status of DOD's implementation of them, see table 3. In addition to the contact named above, Lori Atkinson, Assistant Director; Rebekah Boone; Jeffrey Heit; Mae Jones; Amie Lesser; Felicia Lopez; Carol Petersen; Terry Richardson; Adam Smith; and Sabrina Streagle made key contributions to this report. Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training. GAO-14-630. Washington, D.C.: July 31, 2014.
Military Health System: Sustained Senior Leadership Needed to Fully Develop Plans for Achieving Cost Savings. GAO-14-396T. Washington, D.C.: February 26, 2014. Defense Health Care Reform: Additional Implementation Details Would Increase Transparency of DOD's Plans and Enhance Accountability. GAO-14-49. Washington, D.C.: November 6, 2013. Defense Health Care: Additional Analysis of Costs and Benefits of Potential Governance Structures Is Needed. GAO-12-911. Washington, D.C.: September 26, 2012. Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System. GAO-12-224. Washington, D.C.: April 12, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Military Personnel: Enhanced Collaboration and Process Improvements Needed for Determining Military Treatment Facility Medical Personnel Requirements. GAO-10-696. Washington, D.C.: July 29, 2010. Defense Health Care: DOD Needs to Address the Expected Benefits, Costs, and Risks for Its Newly Approved Medical Command Structure. GAO-08-122. Washington, D.C.: October 12, 2007.
In 2013, DOD created the DHA to provide administrative support for the services' respective medical programs and to combine common "shared" services to achieve cost savings. House Report 113-446 included a provision that GAO review DOD's progress in implementing the DHA. This report addresses the extent to which DOD has made progress in (1) assessing the personnel requirements of the DHA and its effect on MHS personnel levels; (2) developing an approach to achieving cost savings through shared services; and (3) fully developing performance measures to assess its shared services. GAO reviewed DOD's personnel requirements assessment process, business case analyses, and performance measures for the DHA's shared services. GAO compared this information with key management practices and DOD guidance. Additionally, GAO interviewed officials from the DHA and the military services. Nearly 2 years after the creation of the Defense Health Agency (DHA), the Department of Defense (DOD) has made progress toward completing its implementation process, but it has not addressed issues related to GAO's past recommendations regarding personnel requirements, an approach to cost savings, and performance measures.

Personnel - The DHA has initiated the process of assessing personnel requirements, but this process has been delayed, does not have a detailed timeline for completion with milestones and interim steps, and is not comprehensive. It does not address key issues, such as the effect of possible personnel growth in the DHA and workforce composition issues. DOD cannot determine the DHA's effect on the Military Health System's (MHS) administrative and headquarters staff levels because (1) the DHA has not completed the personnel requirements assessment process and (2) it has not, as GAO recommended in November 2013, developed a baseline estimate of personnel in the MHS before the DHA was created. DOD stated that the requirements assessment process will not be completed until September 2016. Further, although DOD does not plan to develop a baseline estimate and is not tracking personnel-related savings, DOD can take steps that would contribute to the development of comprehensive personnel information, such as including information concerning the number and cost of administrative and headquarters personnel within the MHS in annual budget documents.

Approach to help achieve cost savings - The DHA has developed a business case analysis approach to help it achieve cost savings for 8 of its 10 shared services. This approach largely addresses GAO's November 2013 recommendations that DOD provide more information on its cost savings estimates and monitor implementation costs. However, the DHA has not developed comprehensive business case analyses for 2 shared services: Public Health and Medical Education and Training. Specifically, the DHA has proposed the transfer of their functions from the military services, but it has not identified common functions to consolidate in order to achieve cost savings, which is the primary purpose of establishing shared services.

Performance measures - The DHA has made progress in developing measures to assess the progress of its 10 shared services toward achieving their respective goals; however, these measures do not demonstrate some key elements that GAO has found can contribute to success in assessing performance, such as clarity, measurable targets, and baseline data.
Specifically, all 10 DHA shared services have measures that demonstrate at least some of these attributes; however, collectively, they do not demonstrate all of the attributes, as GAO recommended in November 2013. These key attributes can help ensure that DOD officials have the information necessary to measure progress toward achieving the stated goals of the shared services. While DOD has made progress in the development of these performance measures, GAO's November 2013 recommendation that DOD develop performance measures that fully exhibit those key attributes remains valid and should be completely implemented. In addition to its prior recommendations, GAO is making a number of recommendations related to the DHA's personnel requirements and approach to achieving cost savings. DOD concurred with all but one of these recommendations; it partially concurred with the recommendation to develop a plan for reassessing its personnel requirements, citing existing guidance. GAO continues to believe that current guidance in this area is insufficient and that DOD would benefit from a plan for reassessing its personnel needs as the DHA's missions and needs evolve.
To meet our three objectives, we took the following steps:

Reviewed and analyzed IRS reports, testimonies, budget submissions, and other documents and data, including performance and workload data, and compared these to IRS's goals and past performance to identify trends and anomalies in performance. We also tested for statistically significant differences between annual performance rates based on IRS sample data.

Observed operations at the Joint Operations Center (which manages IRS's telephone services), IRS's walk-in sites in Atlanta, Ga., and Baltimore, Md., and a volunteer site in Washington, D.C. We selected these particular offices for a variety of reasons, including the location of key IRS managers.

Analyzed staffing data for paper and electronic filing, telephone assistance, and walk-in assistance.

Reviewed information from other organizations that compile information pertinent to our objectives, such as Keynote Systems, which evaluates Internet performance.

Reviewed IRS reports and analyzed IRS data on RALs and RACs to identify trends and opportunities to reduce taxpayers' reliance on them.

Reviewed IRS data and analyzed the methods IRS currently employs to identify taxpayer compliance with eligibility requirements for higher education tax benefits.

Reviewed MEA-related statutes to determine IRS's existing and possible new authorities.

Interviewed IRS officials about current operations, trends, and significant factors and initiatives that affected performance; efforts to reduce reliance on RALs and RACs; and monitoring and oversight of compliance issues, including higher education credit claims.

Interviewed representatives of some of the larger private and nonprofit organizations that prepare tax returns, including H&R Block, as well as trade organizations that represent individual paid preparers and tax preparation companies and professional associations, including the American Institute of Certified Public Accountants.

Reviewed Treasury Inspector General for Tax Administration (TIGTA) reports and interviewed a TIGTA official about IRS's performance and initiatives.

Reviewed prior GAO reports and followed up on our recommendations made in filing season and related reports.

This report discusses numerous filing season performance measures and data covering the quality, accessibility, and timeliness of IRS's services that, based on our prior work, we consider sufficiently objective and reliable for purposes of this report. To the extent possible, we corroborated information from interviews with documentation and data; where this was not possible, we attribute the information to IRS officials in our report. We reviewed IRS documentation, interviewed IRS officials about computer systems and data limitations, and compared those results to our standards of data reliability. Data limitations are discussed where appropriate. Finally, we conducted our work primarily at IRS headquarters, including at the Small Business/Self-Employed Division in Washington, D.C., and the Wage and Investment Division headquarters in Atlanta, Ga., as well as the other sites mentioned earlier. We conducted this performance audit from January 2009 through December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
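Where we state that differences between annual performance rates are statistically significant, the underlying comparison is of two proportions estimated from samples. The sketch below is a minimal illustration of that kind of two-proportion test; the counts, sample sizes, and use of the statsmodels library are our own illustrative assumptions, not a description of IRS's or our actual computations.

```python
# Illustrative two-proportion z-test: do two annual accuracy rates,
# each estimated from a sample of reviewed contacts, differ significantly?
# All counts below are hypothetical, chosen only to show the mechanics.
from statsmodels.stats.proportion import proportions_ztest

correct = [1830, 1905]   # contacts rated accurate in year 1 and year 2 (hypothetical)
reviewed = [2000, 2050]  # contacts sampled in each year (hypothetical)

z_stat, p_value = proportions_ztest(count=correct, nobs=reviewed)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below a chosen threshold (e.g., 0.05) would indicate a
# statistically significant difference between the two annual rates.
```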
We received technical and written comments on a draft of this report, which we addressed. A letter from the IRS Deputy Commissioner for Services and Enforcement providing those comments is reprinted in appendix I. In that letter, the Deputy Commissioner explicitly agreed with five of our recommendations and described the steps IRS is taking with respect to our two other recommendations. Most taxpayers file their individual income tax returns electronically, although millions still mail paper returns. Compared to paper, electronic filing allows taxpayers to receive refunds faster, is less prone to transcription and other errors, and provides IRS with significant cost savings. Last year we reported that IRS estimated it used 39 percent fewer staff years for processing tax returns in 2007 than in 1999, for a savings of $85 million. The Free File program provides taxpayers below an income ceiling with access to a consortium of tax preparation companies that offer free online tax preparation and filing services for qualifying taxpayers. CADE, part of IRS's high-risk Business Systems Modernization (BSM) program, is intended to eventually replace IRS's antiquated Master File legacy processing system, facilitate faster refund processing, and provide IRS with more up-to-date account information. Primarily through its telephone and Web site services and, to a much lesser extent, through its face-to-face assistance, IRS also provides tax law and account assistance, limited return preparation, tax forms and publications, and outreach and education. IRS staff provide assistance at 401 walk-in sites where taxpayers can receive basic tax law assistance, receive assistance with their accounts, and have returns prepared by IRS if their annual income is $42,000 or less. IRS also has volunteer partners that staff over 12,000 sites, which help serve traditionally underserved taxpayer segments, including elderly, low-income, and disabled taxpayers, and taxpayers with limited English proficiency. IRS developed the Taxpayer Assistance Blueprint (TAB), a 5-year plan designed to assist the agency in providing, evaluating, and improving taxpayer services at lower cost. TAB also provided estimates of the cost-per-service contact for different types of taxpayer services and conducted preliminary research about the effect of taxpayer service on compliance. IRS delivered an update of TAB to Congress in October 2009. Millions of taxpayers who do not want to wait for their tax refunds from IRS choose to obtain RALs, which are offered by paid preparers or banks to taxpayers in connection with federal and/or state tax refunds. RALs are short-term, high-interest-rate bank loans; we found that the annual percentage rate on RALs can be over 500 percent. RALs offer taxpayers the benefit of receiving cash quickly based on an expected refund, but, combined with tax preparation fees, they may considerably reduce a taxpayer's refund. Nonetheless, RALs remain popular, especially among low-income taxpayers. RAL providers might also offer RACs, which are not loans but instead a refund delivery option in which IRS direct deposits a refund into a temporary account set up by a financial institution, which withdraws the tax return preparation fee and then makes the remaining funds available to the taxpayer. Both RALs and RACs allow taxpayers to pay return preparation and other fees out of their refunds.
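To see how fees on a loan lasting only days can imply an annual percentage rate of over 500 percent, consider the simple annualization below. The fee, loan amount, and term are hypothetical round numbers chosen for illustration, not figures drawn from IRS or lender data.

```python
# Illustrative annualization of a RAL finance charge as a simple APR.
# All inputs are hypothetical; actual RAL terms vary by provider.
fee = 100.0           # finance charge for the loan (hypothetical)
loan_amount = 1000.0  # refund amount advanced to the taxpayer (hypothetical)
term_days = 7         # days until the IRS refund repays the loan (hypothetical)

apr = (fee / loan_amount) * (365 / term_days) * 100
print(f"APR: {apr:.0f}%")  # -> APR: 521%
```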
IRS uses its many tools to identify and correct noncompliance, whether intentional or unintentional. Over the years, Congress granted IRS statutory authority covering specific areas so that the agency could correct tax return errors during processing, including calculation errors and entries that are inconsistent or exceed statutory limits, without having to issue the taxpayer a statutory notice of deficiency (see app. II for details). Math error checks are automated and low-cost relative to audits. Prompt compliance checks, such as math error checks, increase the likelihood of IRS collecting all or part of the amount owed. However, IRS must be granted MEA from Congress by statute for specific purposes, and as noted above, we recently suggested areas where IRS could benefit from new authorities. IRS receives most information returns during the filing season. These returns are filed with IRS by third parties, such as employers, banks, or educational institutions, and provide information on a variety of taxpayer transactions; copies are also provided to taxpayers. IRS tries to match information from the information returns filed by third parties against taxpayers' income tax returns to see if taxpayers have filed returns and reported their income and expenses. This approach tends to lead to high levels of taxpayer compliance. As of October 2, 2009, IRS had processed 139 million individual income tax returns. As shown in table 1, 94 million taxpayers (68 percent) electronically filed their returns, compared to 88 million (62 percent) last year, excluding the 9 million stimulus-only returns. Electronic filing provides IRS with significant cost savings: IRS estimates the cost savings of electronic filing to be $2.71 per return relative to 2008 costs. It also helps taxpayers receive their refunds faster and aids IRS in achieving the goal of having 80 percent of all federal tax and information returns filed electronically by 2012. IRS issued approximately 109 million refunds, up 4 million from last year, for $298 billion. Approximately 66 percent of all refunds were directly deposited, 9 percent more than last year. This increase is important because direct deposit is faster, more convenient for taxpayers, and less expensive for IRS than mailing paper checks. IRS attributes the increase in electronic filing, in part, to a 19 percent increase in people filing from home computers, which may be related to the elimination of separate fees for electronic filing. According to IRS, the elimination of fees by some paid preparers also contributed to the decline in the Free File program: as of September 20, 2009, the number of taxpayers who filed through Free File decreased to 3 million, down 37 percent from last year. IRS also attributed part of the decrease to the migration of taxpayers to other free offers in the marketplace. The Free File program offered a new option this year, fillable forms, which allow taxpayers to download forms from IRS and fill them in on a home computer without using tax preparation software. About 270,000 taxpayers used this option. Finally, IRS met or exceeded its goals for six of the eight processing measures (see app. II for details). For example, IRS exceeded its goals for refund timeliness and for deposit timeliness and accuracy. The one measure where performance was significantly below IRS's goal and last year's level was the correspondence error rate, which is the percentage of incorrect notices and letters issued to taxpayers.
According to IRS officials, this resulted from a high number of erroneous notices sent to taxpayers during the filing season related to the recovery rebate credit. In our interim report, we noted that millions of tax returns had these types of errors, which delayed refunds by 1 day to a week. IRS took actions to address the errors, including developing an automated tool to correct them more quickly. IRS's CADE processed 40 million tax returns and 35 million refunds worth $59 billion. This accounts for about 29 percent of all returns processed. CADE processes returns and refunds between 1 and 8 days faster than legacy systems. Additionally, for the first time this year, CADE processed returns with payments to IRS: 7 million returns with payments of $9 billion. After over 5 years and $400 million, CADE is processing only about 15 percent of the functionality originally planned for completion by 2012. In addition, each successive release of the system was expected to process more complex returns, and several technical challenges had not been addressed. Given this, IRS estimated that full implementation of CADE would not be achieved until at least 2018, or possibly as late as 2028. As a consequence, IRS decided to stop development of new CADE functionality and rethink its strategy for modernizing individual taxpayer accounts to determine whether an alternative approach could deliver improvements sooner. IRS also stopped its plans for adding new functionality to a CADE-related system that allows its telephone assistors to access and work with taxpayer accounts. According to IRS officials, the need to enhance computer security and the availability of new technologies also influenced the decision to rethink the CADE strategy. Stopping CADE development has trade-offs in that IRS will not be able to materially increase the number of returns processed on CADE during the 2010 filing season, which, in turn, means that the number of taxpayers benefiting from faster refund processing will not increase. On the other hand, IRS's new strategy for modernizing individual taxpayer accounts is intended to address the risks and challenges of the initial approach. Importantly, IRS officials responsible for implementing this new strategy told us that they expect to provide all taxpayers with faster refund processing by the 2012 filing season. IRS plans to do this by continuing daily processing of the roughly 40 million taxpayer accounts currently on CADE while converting the legacy Individual Master File system from weekly to daily processing for the roughly 100 million remaining accounts. IRS also plans to develop a new database that would be the single authoritative source of taxpayer account information and to use the new database for daily processing by 2014. IRS established a program management office to guide the implementation of the new strategy and developed a preliminary road map and high-level cost estimates for the effort. It also defined the overall business benefits the strategy is expected to provide. IRS officials also stated that they are working on a more detailed plan for the strategy, including milestones, deliverables, and detailed costs for the first phase, and expect to have it completed in December.
These documents, as well as plans for fully implementing the new strategy, are critical to justifying IRS's change in direction, and we plan to evaluate them as part of our review of IRS's fiscal year 2010 Business Systems Modernization expenditure plan, which we recently initiated. As shown in table 3, taxpayers' access to IRS's telephone assistors was better than last year but was below IRS's original goal for 2009 and remains well below 2005 through 2007 performance. IRS reduced its 2009 goal for providing assistor services from the goals for 2005 through 2008, as shown in table 3. IRS initially set the fiscal year 2009 goal for the percentage of taxpayers seeking assistor service who actually received it at 77 percent (a simplified illustration of this type of calculation appears after this discussion). Perhaps more importantly, IRS reduced the goal for 2010 to 71 percent. According to IRS officials, the goals were reduced because of resource trade-offs related to call volume increases starting in 2008. As shown in table 4, IRS's call volume in 2008 and 2009 was substantially higher than in prior years. Although IRS received 40 million fewer calls in 2009 than in 2008, the volume was still well above earlier years. While IRS's automated call systems answered an increasing number of calls, the number of calls abandoned by taxpayers, busy signals, and calls disconnected by IRS also went up substantially. This may have prevented many taxpayers from reaching IRS assistors with their questions. IRS attributed the heavier-than-anticipated call volume in part to stimulus-related questions and taxpayers needing authentication information. Taxpayers had to provide their prior year's adjusted gross income (AGI) or a personal identification number (PIN) to authenticate their identity in order to file electronically. Taxpayers who did not know their AGI or PIN and tried to get it from IRS had to call an assistor or visit an IRS walk-in site. While IRS took actions to minimize the effect of these calls, heavy volume continued through the filing season. According to IRS, its assistors answered 3 million calls from taxpayers needing their AGI, nearly 10 percent of all assistor calls, at a cost of $36 million through June 2009. We recently reported that IRS is developing an automated Web and phone application to provide taxpayers with authentication information for electronic filing in the 2010 filing season. In that report, we also made recommendations to improve telephone service by reducing the volume of telephone calls, which could improve taxpayer access to IRS assistors. For example, in addition to recommending ways to reduce the number of rejected returns, which often lead to taxpayers calling IRS, we recommended that IRS develop a low-cost automated method to respond to taxpayer questions about volunteer site locations and hours of operation. IRS is in the process of addressing those recommendations. Despite the heavy call volume, the accuracy of telephone assistors' responses to tax law and account questions was higher by a statistically significant amount compared to the same period last year and exceeded IRS's fiscal year 2009 goals (see table 5). Since 2005, IRS has maintained a level of accuracy of about 90 percent or more. According to IRS officials, the high accuracy is due to training and the introduction of new tools, particularly the Interactive Tax Law Assistant (ITLA), a Web-based probe-and-response guide that helps assistors provide more accurate and consistent responses to specific tax law questions.
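As a rough illustration of how a measure like the percentage of taxpayers seeking assistor service who actually received it can be computed, consider the sketch below. The call counts and the simplified formula are our own assumptions; IRS's actual measure definition may differ.

```python
# Hypothetical calculation of the share of callers seeking an assistor
# who actually reached one. Counts are illustrative, not IRS data.
answered_by_assistor = 26_000_000
abandoned = 5_000_000
busy_signals = 2_000_000
disconnected = 1_000_000

seeking_assistor = answered_by_assistor + abandoned + busy_signals + disconnected
service_rate = answered_by_assistor / seeking_assistor * 100
print(f"Assistor level of service: {service_rate:.0f}%")  # -> 76%
```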
IRS has limited information on why taxpayers call to speak with IRS assistors. To help obtain better information on why taxpayers call, IRS recently implemented a major data collection effort called Contact Analytics at all 26 of its call sites. According to IRS, Contact Analytics is a significant investment that will allow IRS to search recorded telephone interactions between taxpayers and IRS assistors for key words or phrases. It is intended to improve service by providing IRS with a research tool to better understand why taxpayers are calling and to take corrective action if necessary, as well as to identify areas to reduce costs, such as calls that can be moved to self-service (a hypothetical sketch of this kind of keyword analysis follows the list of Web site features below). However, the agency does not have a comprehensive and detailed analysis plan for effectively using Contact Analytics data to determine how to improve taxpayer service or reduce costs. According to IRS officials, because Contact Analytics is a new program and data are only beginning to be collected, IRS has not yet considered a plan to analyze the data produced by the program. IRS officials recently indicated that, now that Contact Analytics has been implemented, they intend to eventually develop an analysis plan. However, a standard approach for major data collection efforts is to develop a research plan before data collection begins. Such a plan helps ensure that necessary data are collected and that resources are not wasted collecting information that will not be needed. Now that IRS is actually collecting the data, the lack of a research plan delays the time when improvements to taxpayer service, based on the results of the research, could be implemented. Having a comprehensive and detailed analysis plan that includes, for example, a research design, dissemination of results, and involvement of relevant stakeholders provides a number of benefits, perhaps most importantly increasing the likelihood that the analysis will yield methodologically sound results, thereby supporting effective policy decisions. While many taxpayers require service from live assistors, diverting calls to automated services is also important because of the costs involved: IRS assistors answered about 26 million calls between January 1, 2009, and June 30, 2009, at a cost of $25.75 per call, for a total of $670 million. Also, taxpayers would benefit from reduced wait times and from the capability to obtain information immediately without having to speak to an assistor. IRS continues to launch new features on its Web site to provide better access to information and reduce taxpayer burden, including:

the "How Much Was My 2008 Stimulus Payment" application and the recovery rebate credit calculator, which used the economic stimulus payment amount from 2008 along with several other factors to determine eligibility for the recovery rebate credit and the appropriate amount to claim;

the Online Payment Agreement application, which provides taxpayers with an online, interactive payment agreement process that reduces the need for contact with an assistor and eliminates paper processing;

a "What if" page on IRS.gov that describes different scenarios for taxpayers concerning the possible impact of, for example, the loss of a job or house on the taxpayer's ability to pay taxes; and

information on the new tax credits provided in the Recovery Act, with details on, for example, claiming the first-time homebuyer credit and tax breaks for vehicle purchases.
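Returning to Contact Analytics: the sketch below illustrates, in miniature, the kind of keyword-based categorization of call transcripts that such a tool enables. This is not IRS's Contact Analytics implementation; the categories, keywords, and transcripts are hypothetical.

```python
# Hypothetical keyword tagging of call transcripts to categorize call reasons.
# Not IRS's Contact Analytics system; categories and keywords are illustrative.
from collections import Counter

CALL_REASONS = {
    "authentication": ["adjusted gross income", "agi", "pin"],
    "stimulus": ["stimulus", "recovery rebate"],
    "refund_status": ["where is my refund", "refund status"],
}

def tag_call(transcript):
    """Return the reason categories whose keywords appear in a transcript."""
    text = transcript.lower()
    return [reason for reason, keywords in CALL_REASONS.items()
            if any(kw in text for kw in keywords)]

transcripts = [
    "I need my AGI from last year to e-file.",
    "Can you tell me my refund status?",
]
counts = Counter(reason for t in transcripts for reason in tag_call(t))
print(counts)  # Counter({'authentication': 1, 'refund_status': 1})
```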
As table 6 shows, compared to before 2008, visits to IRS’s Web site are substantially higher than in the last 4 years with the exception of 2008. The year 2008 was anomalous, in part, because of the high number of visits to stimulus-related features on IRS’s Web site in 2008. One measure of the quality of IRS’s Web site is its ranking in the Keynote Systems top 40 government Web sites. During each week of the 2009 filing season, IRS ranked fourth and fifth in response time out of the top 40 government Web sites in the Keynote Government Index weekly ratings, compared to ranging between first or second last filing season. Finally, IRS is working on a Web portal strategy to expand taxpayers’ access to self-assistance tools for account and tax law issues. Both the TAB and IRS’s 2009-2013 strategic plan focus on enhancing features on IRS’s Web site. As of June 28, 2009, IRS’s volunteer partners prepared 3 million tax returns, a slight increase of 1 percent over last year. IRS provides training and certification for volunteer staff to help ensure quality. However, assessing the quality of assistance at volunteer sites is a challenge for IRS because of the large number of volunteer sites and staff providing return assistance. IRS officials stated that the agency partnered with community- based organizations to run 12,160 sites in 2009, 320 sites more than last year, staffed with nearly 83,000 volunteers. Despite these challenges, IRS conducted several types of quality reviews, including site and tax return reviews as well as mystery shopping reviews in both 2008 and 2009. For 2009, IRS combined site and return reviews into its Quality Statistical Sample (QSS) reviews. According to IRS officials, QSS reviews were based on a statistically valid sample of sites. IRS reported that it collected data from 240 site reviews and 679 return reviews, generally reviewing 3 tax returns per site visit. As of mid-April, return preparation accuracy was 78 percent. In contrast, IRS’s mystery shopping reviews resulted in a 68 percent accuracy rate. However, IRS officials stated that the QSS reviews are statistically valid and, therefore, provide a better overall assessment of return accuracy than mystery shopping. Consequently, IRS officials reported they will not conduct mystery shopping in 2010. While we acknowledge that the QSS reviews represent an important advancement in IRS’s assessment of the accuracy of return assistance at volunteer sites, it is important to consider how the data are collected and how the results will be used. The results of the QSS return reviews could be biased because of volunteers’ awareness of the presence of IRS officials. According to IRS officials, since site visits were unannounced, volunteer staff may have been unaware of IRS’s presence while observing the first return, but were likely to have noticed IRS’s presence by the second and third return. As a result, volunteers could be more quality conscious while preparing the later returns, adhering to the quality process encouraged by IRS more than they might have been otherwise. IRS officials stated that they understand the limitations of how the results were obtained. In contrast to a slight increase at volunteer sites, as of April 30, 2009, the total number of taxpayers’ contacts at IRS’s 401 walk-in sites was 2.7 million, down 12 percent compared to previous year. 
Further, as of June 30, 2009, the accuracy of account assistance improved to 88 percent compared to 83 percent last year, and the accuracy of tax law assistance also improved, to 76 percent from 67 percent last year. According to IRS officials, this increase is due in large part to management's focus on the consistent use of IRS tools available to assistors. They credited IRS's ITLA in particular for the increase in accuracy in tax law and return assistance; the tool is used by both IRS's telephone and walk-in site assistors to provide more accurate and consistent answers to taxpayers' questions. While IRS officials acknowledged that using ITLA may take longer to get the answer, assistors properly using the tool will provide the right answer(s) to customer-specific tax law questions more often than when ITLA is not used. TAB is IRS's 5-year strategic plan for improving service to taxpayers and helping guide the agency's budget and resource allocation decisions. However, the linkage between TAB and the 2010 budget request for IRS or IRS's agencywide 2009-2013 strategic plan is not clear. TAB is mentioned once in the budget document and not at all in IRS's strategic plan. This lack of transparency obscures the link between TAB and IRS's overall strategic plan and budget. IRS officials acknowledged that while TAB is not specifically included and integrated in IRS's budget and other planning documents, IRS considers TAB to be a guiding principle. According to the Office of Management and Budget and our own work, it is important to link general goals communicated in strategic plans with cross-cutting initiatives, such as those listed in TAB, because they work together to form a budget and implementation framework. Without more explicit connections between TAB and IRS's planning documents, Congress and other stakeholders may not be able to understand the priority that IRS places on improving taxpayer service. According to IRS, depending on the tax refund amount, RAL and RAC fees may range from $39 to over $600, which includes the account set-up fee, tax preparation, and interest. In a recent report, we noted that these charges may amount to an annual percentage interest rate of over 500 percent. IRS officials told us that IRS's continued efforts to reduce RALs and RACs focus on increasing electronic filing with direct deposit and improving refund timeliness. Table 7 shows that 8 million taxpayers applied for RALs from banks or other financial institutions, a decline of 20 percent compared to last year. One reason for this decline may have been reluctance by some lenders to offer RALs early in the filing season due to taxpayer errors related to recovery rebate credit claims. Because a taxpayer's anticipated refund is the collateral for a RAL, lenders could not be certain the collateral existed when many refund claims were in error. In contrast, the number of RAC requests increased by 10 percent to 11 million. RACs are less risky for the return preparer because the taxpayer receives no money until the preparer receives the refund and deducts associated fees. IRS's 2006 RAL report to Congress provides valuable information on taxpayer use of RALs and RACs, the benefits of improving refund timeliness for reducing taxpayers' reliance on RALs and RACs, the costs associated with RALs and RACs, and RAL alternatives offered by both IRS and tax preparers. However, IRS has not released this report to the public, nor has it updated it.
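Because a RAL is typically outstanding only for the week or two until IRS issues the refund, even a moderate flat fee annualizes to a very high rate, consistent with the over-500-percent figure cited above. A minimal sketch of that annualization follows; the fee, loan amount, and duration are hypothetical values, not figures from this report.

```python
# Hypothetical RAL terms; the report cites fees from $39 to over $600 and
# APRs over 500 percent, but these specific values are assumptions.
fee = 150.0      # assumed flat RAL fee, in dollars
loan = 1_000.0   # assumed loan against the anticipated refund, in dollars
days = 10        # assumed days until the refund repays the loan

apr = (fee / loan) * (365 / days) * 100  # simple annualization of the fee
print(f"Implied annual percentage rate: {apr:.0f} percent")  # ~548 percent
```

The short duration, not the fee itself, is what drives the annualized rate, which is why speeding up refund delivery reduces the appeal of these products.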
Further, according to IRS officials, there is no requirement to release or update the report. By not publicly releasing and updating the report, IRS is missing an opportunity to provide Congress and taxpayers with important information on how tax law changes, such as the economic stimulus package, might have affected taxpayers' reliance on RALs and RACs, information that could help reduce that reliance. Most refunds are claimed on electronically filed returns and then electronically deposited because taxpayers and preparers know that electronic filing and depositing speed up refund processing. Refunds take 5 to 15 days, as shown in figure 1. Figure 1 also shows that refund processing time varies by day of the week and is shorter for refunds processed on CADE than on IRS's legacy individual master file. Three entities share responsibility for issuing refunds: IRS, Treasury's Financial Management Service (FMS), and the Automated Clearing House (ACH) (see table 8 below). IRS runs pre-refund tax law compliance checks, FMS checks for nontax debt owed to the federal government, and ACH distributes the funds. IRS accounts for a varying proportion of the total processing time. In a number of cases, IRS accounts for less than half the time it takes to process and issue a refund. Improving refund timeliness is a goal of IRS, with the effort focused on shifting tax return processing to CADE. Figure 1 suggests that another approach would be to try to reduce the time taken by the other two entities involved in issuing refunds, particularly ACH. While IRS officials reported that they meet at least annually with FMS officials to discuss issues related to refunds and communicate intermittently to address issues as needed, they also said that they have not aggressively explored with the other two entities whether opportunities exist to shorten refund processing time. This is a timely issue because as IRS has shifted 40 million tax returns to CADE, the IRS proportion of overall refund processing time has decreased. While IRS offers paper check and direct deposit options for delivering refunds to taxpayers, it has not studied the feasibility of distributing refunds electronically through debit cards. Further, IRS has not determined the costs of issuing debit cards or the benefits for taxpayers. Although tens of millions of taxpayers receive paper checks, checks are less secure than electronic distribution of benefits. Further, many unbanked taxpayers may not have the benefit of faster refunds associated with direct deposit and, instead, receive their tax refund by check, often incurring transaction costs, such as check-cashing fees. Debit card programs are well established in a variety of state and federal government programs. For example, FMS's Direct Express debit cards allow beneficiaries to receive their benefits as quickly as direct deposit while avoiding transaction fees associated with receiving check payments. Similarly, debit cards could provide taxpayers with a low- or no-cost option for receiving refunds quickly. According to a recent survey conducted by TIGTA, 63 percent of RAL applicants indicated a preference for receiving a debit card from IRS instead of purchasing a RAL. Finally, in its RAL report to Congress, IRS noted that transitioning unbanked taxpayers to debit cards would allow them to receive their refund in the same amount of time as taxpayers who have direct deposit.
Without researching the benefits and costs of debit cards, IRS does not know whether direct distribution of cards is a viable option to distribute refunds, improve refund timeliness, reduce taxpayer reliance on RALs, and provide electronic payment options for unbanked taxpayers. We have previously reported that Treasury's role as the federal government's leader for payments and its experience with electronic payment methods suggest that it could provide valuable information and assistance to IRS, particularly when working with other entities to improve service. However, without aggressively collaborating with FMS and ACH to improve refund timeliness and explore other refund options, IRS may be missing an opportunity to further reduce the time taxpayers wait for refunds and taxpayers' reliance on RALs. We identified higher education tax benefits as one area where an expansion of MEA and revisions to information returns might reduce taxpayer confusion and increase compliance (see app. V). Millions of taxpayers claim the Hope and Lifetime Learning tax credits to offset qualified education expenses. However, these tax provisions are complicated and may lead taxpayers to underclaim benefits or unknowingly claim more benefits than they are entitled to claim. IRS faces challenges ensuring compliance with the eligibility requirements of the higher education credits. IRS relies on audits and limited MEA to ensure compliance. However, audits may not be an efficient method for enforcement in this case. Audits are labor-intensive, and therefore costly, for IRS. According to IRS officials, the maximum amount most taxpayers can claim each year—$1,800 per student for the Hope credit and $2,000 per return for the Lifetime Learning credit for tax year 2008—may not yield sufficient revenue to justify expanded enforcement. Because of the relatively high costs and small revenue gain, IRS does relatively few audits of the millions of education credit claims. IRS has MEA to verify compliance with some of the higher education credit eligibility requirements. However, IRS lacks the statutory authority to use MEA to verify compliance with the limit on the number of years that taxpayers can claim the Hope credit. If IRS had authority to use information from prior years' returns to check taxpayers' eligibility, it could correct claims during processing, before refunds are issued, and enhance compliance. Eligible educational institutions are required to report information on qualified expenses for higher education to both taxpayers and IRS so that taxpayers can determine the amount of educational tax benefits that can be claimed (see app. V). However, the information currently reported by educational institutions on tuition statements sent to IRS and taxpayers (on Form 1098-T) may be confusing for taxpayers who use the form to prepare their tax returns and not very useful to IRS. IRS requires institutions to report on Form 1098-T either the (1) amount of payments received or (2) amount billed for qualified expenses. IRS officials stated that most institutions report the amount billed and do not report payments. However, the amount billed may not equal the amount that can be claimed as a credit. For example, the amount billed may not account for all scholarships or grants the student received. In such cases, the Form 1098-T may overstate the amount that can be claimed as a credit, confusing taxpayers.
Conversely, if institutions are not providing information on other eligible items, such as books or equipment, taxpayers might be understating their claims. In addition to confusing taxpayers, the existing Form 1098-T is not very useful to IRS in its enforcement efforts. According to IRS officials, because the amount billed may not be the amount taxpayers are eligible to claim as a credit, IRS does not compare tuition statement information to the information reported on a tax return. IRS officials stated that a change in legislation, which TIGTA recommended in a recent report, would be needed to require institutions to report only the amount paid. However, IRS does not currently use some of the more basic information from the tuition statement to verify eligibility for the credit. For example, a tuition statement includes the student's SSN, which could be matched to tax return information. Additionally, IRS does not use the location of the institution to determine whether it is located in a federal disaster area, which substantially increases the amount of the eligible credit. Using IRS's compliance computer matching systems to automatically compare information on statements to taxpayers' claims could be a low-cost enforcement tool for IRS to verify certain aspects of taxpayers' eligibility for the credit. While changing the requirements for how higher education institutions report qualified expenses on tuition statements would likely impose some burden on those institutions, the additional burden could be low because the institutions are already required to fill out Form 1098-T. Further, this form could be revised to more clearly provide additional information about qualified expenses, such as for books, supplies, and equipment—information institutions might already collect—and potentially reduce taxpayer confusion and noncompliance. IRS met many of its 2009 filing season goals. The major exception was telephone service, where, for a second year in a row, unanticipated increases in call volume significantly reduced performance, in part because of inquiries related to tax law changes. However, IRS does not have a research plan for conducting analyses of its telephone contacts that could identify areas for more automated services. The declines in telephone performance also highlight the importance of TAB, which is intended to provide the strategy for improving service to taxpayers, and of integrating TAB into IRS's overall strategic plan. Millions of taxpayers continue to use expensive RALs and RACs. By not coordinating more closely with FMS and ACH, IRS may be missing opportunities to improve refund timeliness and expand options for refund delivery, both of which might reduce taxpayers' demand for RALs and RACs. Further, by not updating its RAL report and studying the feasibility of debit card options, IRS may be missing other opportunities to reduce the transaction costs imposed on taxpayers, particularly low-income taxpayers, when they receive tax refund payments. Finally, steps could be taken by Congress to provide IRS with the statutory authority to automatically verify some aspects of higher education credit claims and by IRS to improve and better utilize information reported by higher education institutions. Without such steps, taxpayers may remain confused by the information reported to them, and IRS will not make use of some low-cost, less intrusive tools for helping ensure compliance.
Congress should consider providing IRS with MEA to use prior years' tax return information to automatically verify taxpayers' compliance with the limit on the number of years the Hope credit can be claimed. Related to improving IRS's performance during the filing season, the Commissioner of Internal Revenue should (1) develop as soon as possible an analysis plan for using the data IRS captures through Contact Analytics and (2) explicitly integrate TAB in strategic planning documents. To further improve refund timeliness and reduce reliance on RALs and RACs, IRS should (1) update and publicly release a report on RAL and RAC use, (2) work more proactively with FMS and ACH to help improve refund timeliness, and (3) determine the feasibility of offering debit cards for refunds. To reduce taxpayer confusion and enhance compliance with the eligibility requirements for higher education benefits, IRS should (1) determine the feasibility of using current information reported on Form 1098-T, such as school location and taxpayer identification number or SSN, in IRS's compliance programs and (2) revise Form 1098-T to improve the usefulness of information on qualifying education expenses. In written comments on a draft of this report (which are reprinted in appendix I), the IRS Deputy Commissioner for Services and Enforcement explicitly agreed with five of our recommendations and described the steps IRS is taking with respect to our two other recommendations. IRS officials also provided technical comments, which we incorporated as appropriate. IRS agreed to develop a comprehensive and detailed evaluation plan for Contact Analytics, work to define the scope and objectives for potentially updating and releasing a RAL/RAC report, and work with FMS and ACH to improve refund timeliness. IRS also agreed to consider the feasibility of using current information on Form 1098-T in its compliance programs and to develop a plan to address possible changes to that form. With respect to our recommendation to explicitly integrate TAB in strategic planning documents, the Deputy Commissioner said that although they are not repeated verbatim in IRS's Strategic Plan, TAB's guiding principles resonate throughout the document. We acknowledged IRS's position in making our finding and recommendation. However, TAB is not mentioned once by name in IRS's strategic plan. Without an explicit and transparent connection between TAB and IRS's other planning documents, Congress and other stakeholders may not be able to understand the priority that IRS is giving to improving taxpayer service. Concerning our recommendation to determine the feasibility of offering a debit card option for refunds, the Deputy Commissioner said that the agency is exploring options for debit card use, including an option to provide debit cards directly. Because the Deputy Commissioner's letter does not provide any detail on what exploring options means, we want to reiterate the basis for our recommendation. A small debit card pilot program was conducted at several volunteer sites around the country, but that pilot did not provide information on the benefits or costs of IRS issuing debit cards directly. Given the large number of unbanked taxpayers, we believe IRS should determine the feasibility of directly issuing debit cards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions are listed in appendix VI. Table 8 below summarizes the Internal Revenue Service's (IRS) existing math error authority (MEA) as well as authorities we recently suggested that Congress provide to enhance compliance, including for certain tax credits. In addition, last year we recommended that the IRS Commissioner use existing MEA to identify and correct child and dependent care credit claims on "Married Filing Separately" returns and assess the effectiveness of combining the Federal Case Registry and other data on taxpayer characteristics to verify the eligibility of Earned Income Tax Credit claims from noncustodial parents. As shown in table 9, the Internal Revenue Service (IRS) met or exceeded six of its eight goals for the percentage of errors included in deposits and correspondence (which was separated into letter and notice errors in previous years); deposit and refund timeliness (i.e., interest foregone); productivity; and individual master file (IMF) efficiency. One measure where performance was significantly below IRS's goal and below last year's level was the correspondence error rate, the percentage of incorrect notices and letters issued to taxpayers. According to IRS officials, this resulted from a high number of erroneous notices sent to taxpayers claiming the recovery rebate credit early in the filing season. During the 2009 filing season, the Internal Revenue Service (IRS) received most of its calls early on, with the heaviest call volume during February (see fig. 2 below). Most of the calls were related to taxpayers' need for authentication information and tax law changes. In contrast, during the 2008 filing season IRS received most of its calls after the filing season, between April and June, and those calls were primarily stimulus-related questions. Taxpayers can generally claim tax benefits to help offset qualified higher education expenses for an eligible student if the eligible student is the filing taxpayer, their spouse, or a dependent for whom they claim an exemption on a tax return. Tax benefits include credits and deductions, as well as a number of other programs to help taxpayers offset qualified education expenses. Unlike other student aid programs, such as federal grants, that offer assistance to the taxpayer in determining their entitlements, tax benefits require the taxpayer to understand the pertinent rules and, ultimately, choose the option that provides the most benefit. Information reported by educational institutions on the tuition statement (Form 1098-T) assists taxpayers in determining the amount of benefits to which they are entitled.
Consequently, inaccurate information on the tuition statement may contribute to taxpayer confusion and result in taxpayers making less-than-optimal claims or being unintentionally noncompliant. Table 10 below provides information on tax benefits for education that are available to qualifying taxpayers for the 2008 and 2009 tax years. James R. White, (202) 512-9110 or [email protected]. In addition to the contact named above, Joanna M. Stamatiades, Assistant Director; Vida Awumey; John P. Dell’Osso; Kara Eusebio; Melanie D. Helser; Lina Khan; Kirsten B. Lauber; Angela Leventis; Natalie Maddox; Paul B. Middleton; Karen V. O’Conor; Sabine R. Paul; Neil Pinney; Sabrina C. Streagle; and Jessica Thomsen made key contributions to this report.
The Internal Revenue Service's (IRS) filing season is an enormous undertaking that includes processing tax returns, issuing refunds, and responding to taxpayer questions. IRS's efforts to ensure compliance begin during the filing season. GAO was asked to assess IRS's 2009 filing season performance, identify ways to reduce taxpayers' use of short-term, high-interest refund anticipation loans (RAL) offered by paid preparers or banks, and identify ways to enhance compliance during processing. GAO analyzed IRS performance data, reviewed IRS operations, interviewed IRS officials, and reviewed its compliance programs and relevant statutes. IRS processed 139 million returns and issued $298 billion in refunds as of October 2, 2009. Electronic filing, which provides IRS with significant cost savings and taxpayers with faster refunds, increased to 68 percent of all returns filed. While taxpayers' access to telephone assistors was better than last year, it remained lower than in 2007 in part because of calls about tax law changes. Compared to 2005 through 2007, IRS reduced its goal for assistor answered calls in 2009 and set its 2010 goal at 71 percent. Despite heavy call volume, the accuracy of IRS responses to taxpayers' questions remained above 90 percent. IRS started a major data collection effort on why taxpayers call, but lacks a plan to analyze the data and improve telephone service. According to IRS, issuing refunds faster reduces taxpayers' use of RALs, high-interest loans made by paid tax preparers or banks in anticipation of a refund. Issuing refunds is a joint effort by IRS, Treasury's Financial Management Service, which checks for non-tax debt owed to the federal government, and the Automated Clearing House, which distributes funds. However, IRS has not coordinated extensively with them to expedite refunds. Further, IRS has not studied the use of debit cards for unbanked taxpayers, which could also reduce taxpayers' use of RALs by providing faster and more secure refunds. IRS automatically identifies and corrects select types of errors while processing tax returns. It could also correct tax returns that claim the Hope credit, a tax credit to help offset qualified education expenses, for longer than the number of years allowed. However, IRS lacks the authority to use prior years' tax return information for this purpose. Also, information reported by education institutions to taxpayers and IRS about qualifying educational expenses on the Form 1098-T is confusing for taxpayers and not useful for IRS. Many institutions report the total amount billed to students, but not what is actually paid after taking into account scholarships and grants. This results in some taxpayers under-claiming benefits, while others over-claim. Finally, because Form 1098-T can show the amount billed, which may not be the amount paid, IRS is unable to use the information to automatically verify taxpayers' claims for the credit through its computerized matching program.
Economic growth—which is central to almost all our major concerns as a society—requires investment, which, over the longer term, depends on saving. Since the 1970s, nonfederal saving has declined while federal budget deficits have consumed ever-higher levels of these increasingly scarce savings. The result has been to decrease the amount of national saving potentially available for investment. (See figure 1.) Since we last reported on this issue in 1992, overall national saving has remained low. These conditions—less nonfederal saving and a greater share of this saving absorbed by deficits—do not bode well for the nation’s future productive capacity and future generations’ standard of living. The surest way to increase the resources available for investment is to increase national saving, and the surest way to increase national saving is to reduce the federal deficit. Our 1992 analysis showed that an indefinite continuation of then-current federal budget policy was not sustainable. Without policy change, the continuation of large increases in health care costs, a jump in Social Security costs after 2010 as the baby boom generation retires, and escalating interest costs would fuel progressively larger deficits. Growing deficits and the resulting lower saving would lead to dwindling investment, slower growth, and finally a decline of real GDP. Living standards, in turn, would at first stagnate and then fall. Our view was that a “no action” path with respect to the deficit was not sustainable. Action on the deficit might be postponed, but it could not be escaped. Our simulation of several hypothetical deficit reduction paths further showed that the timing and magnitude of deficit reduction would affect both the amount of sacrifice required and the economic benefits realized. Acting sooner would reduce future interest costs and therefore total deficit reduction required from other sources. Achieving and sustaining balance or surplus would yield long-term benefits in the form of higher national saving, higher investment, and more rapid economic growth. By promoting economic growth, deficit elimination would give future generations more resources to finance the baby boom’s retirement. Since our 1992 report, the Congress and the President have taken action on the deficit. According to Congressional Budget Office (CBO) estimates, the Omnibus Budget Reconciliation Act of 1993 (OBRA 93) will reduce the federal deficit cumulatively for fiscal years 1994 through 1998 by over $400 billion. Despite this short-term progress, however, OBRA 93 did not fundamentally alter the growth of the major entitlement programs driving the long-term deficit problem. The Bipartisan Commission on Entitlement and Tax Reform, created in late 1993, highlighted the nation’s vulnerability to the growth of these programs and their potential fiscal effects. Currently, the Congress and the administration are again considering proposals which could reduce future deficits. As in our 1992 work, our updated simulation results show that continuing current spending and taxation policies unimpeded over the long term would have major consequences for economic growth. A fiscal policy of “no action” through 2025 implies federal spending of nearly 44 percent of GDP and a deficit of over 23 percent of GDP. (See figure 2.) By drastically reducing national saving, rising deficits would shrink private investment and eventually result in a declining capital stock. 
Given our labor force and productivity growth assumptions, GDP would inevitably begin to decline. These negative effects of rapidly increasing deficits on the economy would, we believe, force action at some point before the end of the simulation period. If policymakers did not take the initiative, external events—for example, the unwillingness of foreign investors to sustain a deteriorating American economy—would compel action. While the "no action" simulation is not a prediction of what would actually happen, it illustrates the pressures to change the nation's current fiscal course. The shift in the composition of federal spending by the end of the simulation period shows that, under a long-term "no action" path, health care, interest costs, and—after 2010—Social Security spending drive increasingly large and unsustainable deficits. (See figure 3.) As federal spending in the simulation heads toward 44 percent of GDP in 2025, the major federal health care programs—Medicare and Medicaid—would become the major programmatic driver of budget deficits. Their share of the economy would more than triple between 1994 and 2025. Health care cost inflation and the aging of the population work together to produce this rapid growth. At the same time, simulated interest spending increases dramatically. Escalating deficits resulting from the increased spending add substantially to the national debt. Rising debt, in turn, raises spending on interest, which compounds the deficit problem, driving a vicious circle. The effects of compound interest are clearly visible, as interest spending rises from about 3 percent of GDP in 1994 to over 13 percent in 2025. Social Security also grows, but its rise is much slower than that of health care. Its expansion occurs mainly after 2010 as the baby boom generation retires. The expansion of the three forces fueling budget deficits means that the federal government would find it increasingly difficult to fund other needs. The economic benefits of deficit reduction are illustrated by the three fiscal paths we simulate in our model. (See table 1.) As discussed above, a fiscal policy of "no action" is not economically sustainable over the long term. The "muddling through" and "balance" paths show that the further fiscal policy moves from a path of "no action," the better the outlook for the economy in the long term. The differences in GDP per capita in 2025 reflect major differences in the underlying capacity of the economy in our illustrative simulations to generate growth. Our "no action" simulation, when maintained unimpeded through 2025, portrays the potential long-term economic impact of a declining national saving rate. Under a policy of "no action" on the deficit, investment would peak in the next decade and then decline steadily due to the lack of national saving. Shortly thereafter, capital depreciation would outweigh investment, and the capital stock would actually begin to decline. Given our assumptions about labor force and productivity growth, the declining capital stock would lead inevitably to a decline in GDP. By 2025, investment would be entirely eliminated, the capital stock would have declined to less than half of its 1994 level, and per capita GDP—only about 5 percent greater in real terms than at the start of the 30-year period—would be poised for a precipitous drop. Compared to a policy of "no action," more stringent fiscal policies would result in greater economic growth.
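The vicious circle described above, in which deficits add to debt and debt adds to interest spending, can be captured in a short recursion. The sketch below is not the GAO growth model: the starting debt ratio, primary deficit, and nominal GDP growth are placeholder assumptions, and only the 7.2 percent average interest rate is taken from the report's stated assumptions.

```python
# Stylized debt-interest feedback loop. Placeholder inputs, not the GAO
# model; only the 7.2 percent average interest rate follows the report.
debt = 0.50              # debt as a share of GDP, assumed starting point
primary_deficit = 0.02   # assumed noninterest deficit, share of GDP
interest_rate = 0.072    # report's fixed average rate on the national debt
gdp_growth = 0.05        # assumed nominal GDP growth

for year in range(1994, 2026):
    interest = interest_rate * debt        # interest spending, share of GDP
    deficit = primary_deficit + interest   # total deficit, share of GDP
    debt = (debt + deficit) / (1 + gdp_growth)  # debt ratio next year

print(f"Debt in 2025: {debt:.0%} of GDP")
print(f"Interest spending in 2025: {interest_rate * debt:.1%} of GDP")
```

With these placeholder inputs, interest spending climbs to roughly 13 percent of GDP by 2025, near the "no action" figure cited above, even though the noninterest deficit is held constant; compounding does most of the work.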
Tighter fiscal policies can promote greater private investment in the long term, a larger capital stock, and therefore a larger future GDP. The "muddling through" simulation shows such GDP growth, but because of persistent deficits, debt increases well above current levels. In the model, the larger debt requires increased foreign capital inflows. Our "balance" simulation, compared to "muddling through," achieves greater deficit reduction and a larger GDP with lower debt and, accordingly, less reliance on foreign capital. And as we stated in 1992, a strongly growing economy will be needed to support present commitments to the future elderly and a rising standard of living for the future working population. In actuality, the differences between alternative fiscal policies would likely be even greater than our simulation results suggest. Our model incorporates conservative assumptions about the relationship between saving, investment, and GDP growth that tend to understate the differences between the economic outcomes associated with alternative fiscal policies. For example, in our model, interest rates, productivity, and foreign investment all hold steady regardless of economic change. In the "no action" simulation, we assumed that they all remain constant in the face of a collapsing U.S. economy; this is unlikely to be true. Similarly, under our "balance" simulation, interest rates, productivity, and foreign investment do not respond favorably to increased national saving and investment. While the magnitude of any response is difficult to predict, some change could be expected. To the extent that our assumptions are conservative, differences between a "balance" path and the other two paths would be larger than simulated. We recognize that deficit reduction would have costs in the short term. The deficit reduction necessary to achieve beneficial long-term economic outcomes and reduced interest costs would entail difficult budgetary reductions and require a greater share of national income to be devoted to saving, thus foregoing some consumption in the short term. The greater the fiscal austerity, the more consumption would need to be sacrificed. However, more stringent deficit reduction measures mean correspondingly larger increases in consumption in the long term. The decision policymakers face, then, involves a trade-off between the immediate sacrifice of deficit reduction and the deferred but more severe economic costs associated with continued deficits. The share of the federal budget devoted to interest costs would be reduced through deficit reduction, freeing up scarce resources to satisfy other public needs. This will be particularly important for future budgets, when the aging of the population will prompt greater spending pressures. The dynamics of compound interest, which, given no action on the deficit, lead inexorably to spiraling deficits, yield dividends under a balance simulation. The more rapidly real debt is reduced and real interest costs brought down, the less long-term programmatic sacrifice is required. Action taken to achieve balance by 2002 and to sustain it shrinks interest as a percent of total outlays from 12 percent in 1994 to less than 5 percent in 2025, assuming a constant interest rate. (See figure 4.) In contrast, interest costs would approach 18 percent of outlays by 2025 under the "muddling through" path because the deficit is maintained at 3 percent of GDP, resulting in higher debt.
Moreover, due to growing pressures from health and Social Security commitments, the "muddling through" path requires progressively greater spending reductions just to keep the deficit from growing above 3 percent of GDP. Not all spending cuts have the same impact over the long run. Decisions about how to reduce the deficit will reflect—among other considerations—judgments about the role of the federal government and the effectiveness of individual programs. In our 1992 work, we drew particular attention to federal investment in physical capital, human capital, and research and development. Such public investment plays a key role in economic growth, directly and by creating an environment conducive to private sector investment. Accordingly, in addition to the overall level of deficit or surplus, the proportion of the budget devoted to investment spending will also affect long-term growth. The extent to which deficit reduction affects spending on fast-growing programs also matters. Although a dollar is a dollar in the first year it is cut—regardless of what programmatic changes it represents—cutbacks in the base of fast-growing programs generate greater savings in the future than those in slower-growing programs, assuming the specific cuts are not offset by increases in the growth rates of the programs. Figure 5 illustrates this point by comparing the long-run effects of a $50-billion cut in health spending with those of the same dollar amount cut from unspecified other programs. For both paths the cut occurs in 1996 and is assumed to be permanent but, after 1996, spending is assumed to continue at the same rates of growth as those shown in the "no action" simulation. We used the simple assumption that a reduction either in health or in other programs would not alter the expected growth rates simply to illustrate the point that a cut in high-growth areas of spending will exert greater fiscal effects in the future than the same size cut in low-growth areas. Because the 1996 cuts are equal dollar amounts, the two simulations appear very similar in the early part of the period. A gap develops between them as time passes, however, and by 2025 the difference between the two paths has widened to nearly 4 percent of GDP. The gap appears and then widens because health spending grows much faster than other areas of spending (a simple sketch of this compounding appears below). A cut in this spending area reduces the proportion of the budget that grows quickly, thereby reducing total budget growth. The effects of compound interest, discussed earlier in this report, magnify the difference. Even if a balanced budget is achieved early in the next century, deficits could reemerge as the coming demographic changes continue to exert fiscal pressures. Depending upon the types of spending reductions adopted, future growth in health, Social Security, and interest costs—the deficit drivers—will continue to place demands on federal budgetary resources. As the Bipartisan Commission on Entitlement and Tax Reform recently observed, the decreasing ratio of the labor force to retirees will exacerbate the fiscal effects of the growing elderly population. In addition to the effects of the known demographic shift, uncertainties about the growth of health care costs also promise to complicate future budget policy. Recent budgetary history has shown that health care costs have proven very difficult to predict. Experts we contacted agreed on only one thing—long-range cost projections made today will be wrong.
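The widening gap in figure 5 is pure compounding: the same $50 billion cut grows with the program it came out of. The sketch below reproduces that logic, not the report's simulation; the growth rates and GDP path are illustrative assumptions, with the health growth rate set high enough to land near the report's "nearly 4 percent of GDP" figure.

```python
# Stylized version of the figure 5 comparison. All rates and the GDP path
# are illustrative assumptions, not the GAO simulation's inputs.
CUT = 50.0           # billions of dollars cut permanently in 1996
FAST_GROWTH = 0.12   # assumed annual growth of health spending
SLOW_GROWTH = 0.05   # assumed annual growth of other spending
GDP_1996 = 7_600.0   # billions of dollars; assumed 1996 GDP
GDP_GROWTH = 0.05    # assumed annual nominal GDP growth

years = 2025 - 1996  # years of compounding after the cut
savings_fast = CUT * (1 + FAST_GROWTH) ** years  # cut grows with its program
savings_slow = CUT * (1 + SLOW_GROWTH) ** years
gdp_2025 = GDP_1996 * (1 + GDP_GROWTH) ** years

gap = (savings_fast - savings_slow) / gdp_2025
print(f"Gap between the two paths in 2025: {gap:.1%} of GDP")  # ~3.6%
```

The design point carries over regardless of the exact rates: because the cut compounds at the program's growth rate, taking the dollar out of the fastest-growing base yields the largest future savings.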
Whether such projections turn out too high or too low is unclear, although historically health projections have nearly always been too low. For these reasons, sustaining a balanced budget over the long term could be an ongoing challenge. Rather than discouraging efforts to reduce the deficit, an awareness of future fiscal pressures might instead be used to help inform current fiscal policy choices. For example, some program changes, if made today, would generate little in immediate savings but would exert large future outlay reductions. Program changes with such "wedge-shaped" savings paths might be important elements of a strategy to mitigate the longer-term spending pressures, as they were in several other nations that reduced fiscal deficits. Phasing in such changes over a longer time frame would give affected populations more time to adjust to these changes. Moreover, other nations found that phasing in program changes strengthened prospects for public support of needed fiscal policy changes. The analysis presented in this report of the long-term economic and fiscal implications of these alternative fiscal policy paths relies in substantial part on an economic growth model that GAO adapted from a model developed by economists at the Federal Reserve Bank of New York. The model reflects the interrelationships between the budget and the economy over the long term and does not capture their interaction during short-term business cycles. The main influence of budget policy on long-term economic performance is through the effect of the federal deficit on national saving. Conversely, the rate of economic growth helps determine the overall federal deficit or surplus through its effect on revenues and spending. Higher federal budget deficits reduce national saving while lower deficits increase national saving. The level of saving affects investment and, in turn, GDP growth. Budget assumptions in the model rely upon CBO estimates through 2004 to the extent practicable. These estimates are used in conjunction with our model's simulated levels of GDP. For Medicare, we assumed growth consistent with CBO's projections and HCFA's long-term intermediate projections from the Medicare Trustees' April 1995 report. For Medicaid through 2004, we similarly assumed growth consistent with CBO's projections. For 2005 and thereafter, in the absence of long-range Medicaid projections from HCFA, we used projections developed in 1994 by the Bipartisan Commission on Entitlement and Tax Reform. For Social Security, we used the April 1995 intermediate projections from the Social Security Trustees throughout the simulation period. Other mandatory spending is held constant as a percentage of GDP after 1999, the last year in which CBO projections are available in a format usable by our model. Discretionary spending is held constant as a percentage of GDP after 2005. Receipts are held constant as a percentage of GDP after 1999. Our interest rate assumptions are based on CBO through 1999 and then move to a fixed rate. (See appendix I for a more detailed description of the model and the assumptions we used.) We conducted our work from June 1994 through April 1995. We received comments from experts in fiscal and economic policy and have incorporated them as appropriate. We are sending copies of this report to the President of the Senate and the Speaker of the House of Representatives and to the Ranking Minority Members of your Committees.
We are also sending copies to the Director of the Congressional Budget Office, the Secretary of the Treasury, and the Director of the Office of Management and Budget. Copies will be made available to others upon request. This report was prepared under the direction of Paul L. Posner, Director for Budget Issues, and James R. White, Acting Chief Economist. They may be reached at (202) 512-9573. Major contributors to this report are listed in appendix II. This updated analysis of the long-term economic and budgetary implications of alternative fiscal policy paths relies in substantial part on an economic growth model that GAO adapted from a model developed by economists at the Federal Reserve Bank of New York (FRBNY). The model represents growth as resulting from labor force increases, capital accumulation, and the various influences affecting total factor productivity. To allow a closer analysis of the long-term effects of fiscal policy, we added a set of relationships describing the federal budget and its links to the economy. The relationships follow the definitions of national income accounting, which differ slightly from those in the budget. The model is helpful for exploring the long-term implications of policies and for comparing alternative policies within a common economic framework. The results provide qualitative illustrations, not quantitative forecasts, of the budget or economic outcomes associated with alternative policy paths. The model reflects the interrelationships between the budget and the economy over the long term and does not capture their interaction during short-term business cycles. Figure I.1 illustrates the core relationships of the model. The main influence of budget policy on long-term economic performance is through the effect of the federal deficit on national saving. Higher federal budget deficits reduce national saving while lower deficits increase national saving. The level of saving affects investment and, hence, GDP growth. Gross domestic product (GDP) is determined by the labor force, capital stock, and total factor productivity. GDP in turn influences nonfederal saving, which consists of private saving and state and local government surpluses or deficits. Through its effects on federal revenues and spending, GDP also helps determine the federal budget deficit or surplus. Nonfederal and federal saving together constitute national saving, which influences private investment and the next period's capital stock. Capital combines with labor and total factor productivity to determine GDP in the next period, and the process continues. There are also important links between national saving and investment and the international sector, not shown in figure I.1 in order to keep the overview simple. In an open economy such as the United States, a decrease in saving due to, for example, an increase in the federal budget deficit does not require an equal decrease in investment. Instead, part of the saving shortfall may be filled by foreign capital inflows. A portion of the net income that results from such investments flows abroad. If capital were perfectly mobile, foreign capital inflows could fully offset the effect on domestic investment of a decline in U.S. saving. The evidence continues to suggest, however, that a nation's investment is correlated with its own saving. Hence, we retained our 1992 assumption (based on the work of FRBNY) that net foreign capital inflows rise by one-third of any decrease in the national saving rate.
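The loop in figure I.1 can be written down compactly. The sketch below follows the structure the appendix describes: output from labor, capital, and total factor productivity; national saving as nonfederal saving less the federal deficit; and foreign inflows replacing one-third of any saving shortfall. The Cobb-Douglas functional form and the numerical parameter values, other than the one-third foreign offset and the 1 percent productivity growth the report states, are illustrative assumptions, not parameters of the GAO/FRBNY model.

```python
# Stylized version of the model's core loop (figure I.1). The Cobb-Douglas
# form and most parameter values are illustrative assumptions; the 1/3
# foreign offset and 1 percent productivity growth follow the report.
ALPHA = 0.3             # assumed capital share of output
TFP_GROWTH = 0.01       # productivity advances 1 percent a year (report)
LABOR_GROWTH = 0.01     # assumed labor force growth
DEPRECIATION = 0.05     # assumed capital depreciation rate
NONFED_SAVING = 0.17    # assumed nonfederal saving rate, share of GDP
FOREIGN_OFFSET = 1 / 3  # inflows replace 1/3 of lost saving (report)

def simulate(deficit_share, years=30, k=3.0, labor=1.0, tfp=1.0):
    """Run the saving -> investment -> capital -> GDP loop."""
    for _ in range(years):
        gdp = tfp * k**ALPHA * labor**(1 - ALPHA)
        national_saving = (NONFED_SAVING - deficit_share) * gdp
        shortfall = NONFED_SAVING * gdp - national_saving  # saving lost to deficit
        investment = national_saving + FOREIGN_OFFSET * shortfall
        k = (1 - DEPRECIATION) * k + investment  # next period's capital stock
        labor *= 1 + LABOR_GROWTH
        tfp *= 1 + TFP_GROWTH
    return gdp

print(f"GDP after 30 years with a balanced budget: {simulate(0.00):.2f}")
print(f"GDP after 30 years with a 5 percent deficit: {simulate(0.05):.2f}")
```

The balanced-budget run ends with a larger capital stock and higher GDP. Holding foreign inflows to one-third of the shortfall is what keeps deficits costly in this structure: the remaining two-thirds of any saving lost to the deficit comes out of domestic investment.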
Table I.1 lists the key assumptions incorporated in the model. The assumptions used tend to provide conservative estimates of the benefit of deficit reduction and the harm of deficit increases. The interest rate on the national debt is held constant, for example, even when deficits climb and the national saving rate plummets. Under such conditions, the more likely result would be a rise in the rate of interest and a more rapid increase in federal interest payments than our results display. Another conservative assumption is that the rate of total factor productivity growth is unaffected by the amount of investment. Productivity is assumed to advance 1 percent each year even if investment collapses. Such assumptions suggest that deficit changes could have greater effects than our results indicate. We have made several modifications to the model since the 1992 report, but its essential structure remains the same. The model incorporates the National Income and Product Accounts (NIPA) shift from 1982 to 1987 as the base year, and the switch from gross national product to GDP as the primary measure of overall economic activity. The more recent data prompted several parameter changes. For example, the inflation rate is now assumed to be 3.4 percent, down from 4.0 percent in our previous work, while the average interest rate is reduced to 7.2 percent from 7.8 percent. Our work also incorporates the CBO projection that deficits in the next few years will be somewhat lower than was foreseen in 1992. The distinction between the mandatory and discretionary components of the budget is important. Our approach has been modified to accommodate this distinction by reclassifying budget data based on the NIPA framework as mandatory or discretionary spending. From 1995 through 1999, CBO data were used for this reclassification. For the years from 2000 through 2005, we adopted CBO’s assumption that discretionary spending would increase at the rate of inflation, and, thereafter, we assumed it would keep pace with GDP growth. Mandatory spending includes Health, Old Age Survivors’ and Disability Insurance (OASDI, or Social Security), and a residual category covering other mandatory spending. For the first 9 years, health spending incorporates CBO’s Medicare and Medicaid assumptions. Thereafter, Medicare follows the Trustees’ 1995 Alternative II projections. We smoothed the path of Medicaid spending from 2005 through 2011 in order to link CBO’s spending assumptions to those of the Bipartisan Commission on Entitlement and Tax Reform. OASDI reflects the April 1995 Social Security Trustees’ Alternative II projections. Other mandatory spending is a residual category consisting of all nonhealth, non-Social Security mandatory spending. It equals CBO’s NIPA projection for Transfers, Grants, and Subsidies less Health, OASDI, and other discretionary spending. Through 1999, CBO assumptions are the main determinant of other mandatory spending, after which its growth is linked to that of GDP. The interest rates for 1994-1999 are consistent with the average effective rate implied by CBO’s interest payment projections. We assume that the average rate then moves to 7.2 percent by 2003, where it remains for the rest of the simulation period. Receipts follow CBO’s dollar projections to 1999. Thereafter, they continue at 20.3 percent of GAO’s simulated GDP, which is the percent the model projects for 1999. As these assumptions differ somewhat from those used in our earlier report, the results are not directly comparable. 
An appendix to the 1992 report provides additional detail on the model's structure. [Table I.1, which listed the model's key assumptions, is not reproduced here; its entries covered the average interest rate on the national debt; the 1995-99 surplus/deficit path as a percentage of GDP; discretionary spending rising at the rate of inflation through 2005 and at the rate of economic growth thereafter; health spending following HCFA's Medicare projections and the Bipartisan Commission on Entitlement and Tax Reform's Medicaid assumptions; Social Security following the Trustees' Alternative II projections; other mandatory spending at CBO's assumed levels and then rising at the rate of economic growth; and receipts equal to 20.3 percent of GDP (the 1999 ratio).] Addressing the Deficit: Budgetary Implications of Selected GAO Work for Fiscal Year 1996 (GAO/OCG-95-2, Mar. 15, 1995). Deficit Reduction: Experiences of Other Nations (GAO/AIMD-95-30, Dec. 13, 1994). Budget Policy: Issues in Capping Mandatory Spending (GAO/AIMD-94-155, July 18, 1994). Budget Issues: Incorporating an Investment Component in the Federal Budget (GAO/AIMD-94-40, Nov. 9, 1993). Federal Budget: Choosing Public Investment Programs (GAO/AIMD-93-25, July 23, 1993). Budget Policy: Long-Term Implications of the Deficit (GAO/T-OCG-93-6, Mar. 25, 1993). Budget Policy: Prompt Action Necessary to Avert Long-Term Damage to the Economy (GAO/OCG-92-2, June 5, 1992). The Budget Deficit: Outlook, Implications, and Choices (GAO/OCG-90-5, Sept. 12, 1990).
Pursuant to a congressional request, GAO provided information on the long-term economic impacts of the budget deficit. GAO found that: (1) some progress has been made in reducing the deficit since 1992, but the long-term deficit outlook remains a national problem; (2) inaction in reducing the deficit would inevitably result in a declining economy; (3) although taking action to reduce the deficit would promote long-term economic growth and reduce interest costs, such action would require significant budget adjustments; (4) early reductions in fast growing areas, such as health programs, would contribute more to the elimination of long-term deficits than other types of spending reductions; (5) even after a balanced budget is achieved, deficits could continue to emerge as demographic changes exert fiscal pressures; and (6) Congress faces difficult tradeoffs between the short- and long-term economic benefits of deficit reduction.
CBP began operation of Predator B aircraft in fiscal year 2006 and, as of fiscal year 2016, operates nine Predator B aircraft from four AMO National Air Security Operations Centers (NASOC) in Arizona, Florida, Texas, and North Dakota. Based on CBP data provided to us for fiscal year 2015, annual obligations for CBP's Predator B program were approximately $42 million and the cost per flight hour was $5,878. AMO is responsible for operation of CBP's Predator B aircraft and coordinates with other CBP components and government agencies to perform federal border security activities. CBP's Predator B aircraft are equipped with video and radar sensors primarily to provide intelligence, surveillance, and reconnaissance capabilities. For more information on sensors equipped on Predator B aircraft, see figure 1. CBP's Predator B aircraft are launched and recovered at its NASOCs in Sierra Vista, Arizona; Corpus Christi, Texas; and Grand Forks, North Dakota; the NASOC in Jacksonville, Florida, remotely operates Predator B aircraft launched from other NASOCs. Each NASOC where Predator B aircraft are launched and recovered is generally assigned a broad geographic area of responsibility (AOR); see figure 1 for more information. CBP is responsible for operating Predator B aircraft in accordance with Federal Aviation Administration (FAA) procedures for authorizing UAS operations in the national airspace system. Pursuant to FAA requirements, all Predator B flights must comply with procedures for obtaining Certificates of Waiver or Authorization (COA). The COA-designated airspace establishes operational corridors for Predator B activity both along and within 100 miles of the northern border, and along and within 25 to 60 miles of the southern border, exclusive of urban areas. COAs issued by FAA to CBP also include airspace for training missions, which involve takeoffs and landings around a designated NASOC, and transit missions to move Predator B aircraft between NASOCs. CBP reported that 80 percent of Predator B aircraft flight hours were along border and coastal areas of the United States in COA-designated airspace (see fig. 2). CBP began operation of tactical aerostats in August 2012 in south Texas. As of the end of fiscal year 2016, it had deployed five tactical aerostats in Border Patrol's Rio Grande Valley sector and one tactical aerostat in Laredo sector. For locations of CBP's tactical aerostats, see figure 3. Based on data CBP provided to us for fiscal year 2015, annual obligations for CBP's tactical aerostat program were approximately $41 million and the cost per flight hour ranged from $424 to $677, depending on the type of aerostat. CBP currently operates three types of tactical aerostats equipped with video surveillance cameras that vary in size and altitude of operation. CBP is responsible for operating its tactical aerostats in accordance with FAA regulations through the issuance of a COA authorizing use of moored balloons. CBP's tactical aerostats were obtained through its Department of Defense ReUse program and consist of a mix of Department of Defense-loaned and CBP-owned equipment. CBP manages the technology and operation of each tactical aerostat site through contracts, and Border Patrol agents operate tactical aerostat video surveillance cameras and provide security at each site. See figure 4 for a photograph of a CBP tactical aerostat.
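For similar annual obligations, the per-flight-hour figures quoted above imply very different volumes of time aloft for the two programs. A back-of-the-envelope sketch of that arithmetic follows; the aerostat calculation uses the midpoint of the reported $424-to-$677 range as a simplifying assumption, and the division assumes the per-hour figures were derived from these same obligations.

```python
# Back-of-the-envelope flight hours implied by fiscal year 2015 obligations
# and per-flight-hour costs quoted above. The aerostat midpoint is a
# simplifying assumption; actual accounting may differ.
platforms = {
    "Predator B":        (42_000_000, 5_878),
    "Tactical aerostat": (41_000_000, (424 + 677) / 2),
}

for name, (obligations, cost_per_hour) in platforms.items():
    hours = obligations / cost_per_hour
    print(f"{name}: roughly {hours:,.0f} implied flight hours")
```

On these rough numbers, the tethered aerostats log about ten times the hours aloft of the Predator B fleet for a similar annual outlay, consistent with their role as persistent surveillance platforms.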
The former U.S. Customs Service began the TARS program in 1978 with the establishment of its first site, located in Florida, followed by a second site in Arizona in 1983. From 1988 to 1991, the U.S. Customs Service established additional TARS sites at Yuma, Arizona, and three sites in Texas: Marfa, Eagle Pass, and Rio Grande City. In 1992, management of the TARS program was transferred to the Department of Defense, with the U.S. Air Force designated as the executive agent. During the Department of Defense’s management of the TARS program, sites were located in Florida, Texas, Arizona, Puerto Rico, and the Bahamas. In July 2013, the TARS program was transferred from the Department of Defense to DHS with a total of eight sites. At the time of transfer, two TARS sites were inoperable due to past system crashes, and CBP restored operations at those sites in September 2014. Based on data CBP provided to us for fiscal year 2015, annual obligations for CBP’s TARS program were approximately $45 million and the cost per flight hour was about $950. CBP manages the technology and operation of each TARS site through contracts, and CBP’s Air and Marine Operations Center (AMOC) has operational control over each asset. As of fiscal year 2016, there were a total of eight TARS sites along the southern U.S. border and in Puerto Rico. For a map of TARS sites, see figure 3 above.

CBP uses Predator B aircraft to conduct various border security activities and to support a range of government agencies. First, with regard to the types of activities for which Predator B aircraft are used, our analysis of CBP data showed that over 80 percent of Predator B missions were in support of law enforcement and extended border missions, as shown in table 1. In particular, about 67 percent of missions were for law enforcement activities, such as use of Predator B aircraft to locate individuals illegally crossing the border and to provide aerial surveillance during investigations in joint operations with federal law enforcement agencies and during special security events, like Pope Francis’s visit to Juarez, Mexico, in February 2016. About 16 percent of missions were in support of extended border operations, which are operations beyond U.S. territorial lands and seas in support of federal and international law enforcement partnerships. For example, CBP uses its Predator B aircraft for interdiction operations with Joint Interagency Task Force – South, including maritime patrol missions in the source and transit zones, which encompass the area from South America through the Caribbean Sea and the eastern Pacific Ocean that is used to transport illicit drugs to the United States (transit zone) from drug-producing countries in South America (source zone). In addition, about 14 percent of missions were for training, and about 1 percent was for non-enforcement activities, such as natural disaster recovery efforts.

With regard to CBP’s use of Predator B aircraft in support of government agencies, our analysis of CBP data found that half of all Predator B flight hours from fiscal years 2013 through 2016 (51 percent) were in support of Border Patrol to help, among other things, with its efforts to detect the illegal entry of goods and people between U.S. ports of entry. We also found that 32 percent of Predator B flight hours from fiscal years 2013 through 2016 were in support of AMO, such as for missions to train AMO UAS pilots.
CBP’s Predator B operations in support of other federal agencies accounted for 15 percent of Predator B flight hours from fiscal years 2013 through 2016, such as conducting aerial surveillance for investigations during controlled deliveries of illegal contraband by U.S. Immigration and Customs Enforcement. We found that 2 percent of Predator B flight hours from fiscal years 2013 through 2016 were attributed to support for state and local government agencies. For example, CBP’s Predator B operations in support of local law enforcement agencies have included performing aerial surveillance during officer safety incidents, such as an active shooting incident.

As part of using Predator B aircraft to support other government agencies, CBP has established various mechanisms to coordinate Predator B operations. For example, at NASOCs, personnel from other CBP components are assigned to support and coordinate mission activities involving Predator B operations. Border Patrol agents assigned to support NASOCs assist with directing agents and resources to support Border Patrol’s law enforcement operations and with collecting information on asset assists provided by Predator B operations. Further, two of DHS’s joint task forces help coordinate Predator B operations. Specifically, Joint Task Force – West, Arizona and Joint Task Force – West, South Texas coordinate air asset tasking and operations, including Predator B operations, and assist in the transmission of requests for Predator B support and communication with local field units, such as Border Patrol stations and AMO air branches, during operations. Further, CBP uses a dedicated system—called BigPipe—to coordinate Predator B operations. BigPipe distributes operational mission data to supported federal, state, and local law enforcement agencies. In particular, BigPipe distributes real-time and recorded mission information, including information from sensors on AMO assets such as Predator B aircraft. According to CBP officials, BigPipe allows for seamless and efficient secure communication between AMO, Joint Task Force – West, Arizona, and other locations.

In addition to these mechanisms, CBP has also documented procedures for coordinating Predator B operations among its supported or partner agencies in Arizona specifically by developing a standard operating procedure for coordination of Predator B operations through its NASOC in Arizona. These documented procedures include a description of the responsibilities of participating agencies; procedures for sharing mission information and collecting asset assist information from supported agencies related to seizures and apprehensions; and requirements for reviewing and tasking air support requests for Predator B aircraft from non-CBP government agencies. Joint Task Force – West, Arizona also created an air integration strategy outlining the surveillance assets and associated capabilities available to support operations in its AOR, including a model to guide use of air and ground assets, such as Predator B aircraft and towers equipped with video and radar surveillance technology. According to CBP officials we met with in Arizona, the integration strategy, in conjunction with information on surveillance technology deployment, was used to help plan and prioritize Predator B patrol missions in areas lacking existing surveillance technology—for example, along federal and tribal lands in Tucson Border Patrol sector’s AOR.
However, CBP has not documented procedures for coordination of Predator B operations among its supported agencies through its NASOCs in Texas and North Dakota. CBP has established national policies for its Predator B operations that include policies for prioritization of Predator B missions and processes for submission and review of Predator B mission or air support requests. However, these national policies do not include coordination procedures specific to Predator B operating locations or NASOCs, such as local tasking of air support requests to Predator B versus other aircraft, procedures for sharing mission information across multiple locations and agencies, and collection and reporting of asset assist information from supported agencies during and after Predator B missions. For example, in Texas, Predator B operations are coordinated in part through DHS’s Joint Task Force – West, South Texas, and air support requests for Predator B aircraft may be submitted by government agencies to the NASOC in Texas. Without documented coordination procedures for Predator B operations in Texas, it is not clear how requests submitted to the NASOC in Texas are reviewed, prioritized, and coordinated with Joint Task Force – West, South Texas, including how both entities reach agreement on requests that may involve competing priorities.

Standards for Internal Control in the Federal Government states that significant events should appear in management directives, policies, or operating manuals to help ensure management’s directives are carried out as intended. The Trade Facilitation Act also requires AMO, as part of standard operating procedures regarding use of UAS, to develop a formal procedure to determine how UAS mission requests from non-CBP law enforcement agencies are prioritized and coordinated. Further, CBP’s strategic plan states that integrating surveillance capabilities into the planning and execution of law enforcement operations is enabled by sound standards, procedures, and processes that require interagency coordination. CBP’s Predator B aircraft are national assets used primarily for detection and surveillance during law enforcement operations, independently and in coordination with federal, state, and local law enforcement agencies throughout the United States. AMO officials acknowledged that developing documented coordination procedures for Predator B operations in North Dakota and Texas could strengthen ongoing coordination efforts; however, CBP has not yet taken actions to develop documented coordination procedures in those operating locations due to differences across those locations. For example, AMO officials told us that the current coordination process in Texas, which relies on direct operator-to-operator coordination, may be inefficient at times. Further, AMO officials stated that a coordination process through Joint Task Force – West, South Texas is under development and that documented coordination procedures for Predator B operations could strengthen CBP’s coordination with other government agencies. Without documenting its procedures for coordination of Predator B operations with supported agencies, CBP does not have reasonable assurance that practices at NASOCs in Texas and North Dakota align with existing policies and procedures for joint operations with other government agencies.
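A documented procedure of this kind would typically pin down what a tasking record captures and how competing requests are ranked. The sketch below is purely illustrative—the field names, priority tiers, and ranking rule are hypothetical examples we supply for clarity, not CBP’s procedure or data schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical priority tiers for air support requests; CBP's actual
# prioritization categories are defined in its own policies.
PRIORITY_TIERS = {"officer_safety": 0, "active_interdiction": 1,
                  "investigation_support": 2, "routine_patrol": 3}

@dataclass
class AirSupportRequest:
    """Illustrative record for a Predator B air support request."""
    requesting_agency: str          # e.g., a Border Patrol sector or partner agency
    supported_agencies: list        # all agencies supported, not just the requester
    task_force: Optional[str]       # coordinating joint task force, if any
    mission_type: str               # must be a key in PRIORITY_TIERS
    submitted: datetime = field(default_factory=datetime.utcnow)

def rank_requests(requests):
    # Rank by priority tier first, then by submission time (earlier first).
    # Raises KeyError if a request's mission_type is outside the defined tiers.
    return sorted(requests, key=lambda r: (PRIORITY_TIERS[r.mission_type], r.submitted))
```

The point of such a record is not the code itself but that a written procedure forces agreement, in advance, on exactly these questions: who may request, what gets recorded, and how ties between competing priorities are broken.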
Within CBP, Border Patrol uses tactical aerostats to identify cross-border illegal activity through video surveillance cameras attached to an aerostat during flight, and these aerostats are deployed in south Texas. Border Patrol agents operate video surveillance cameras on tactical aerostats in flight from a local command and control center to detect, identify, monitor, and track items of interest. For example, Border Patrol agents can position video surveillance cameras to observe an item of interest, such as a person attempting to illegally enter the United States from Mexico by crossing the Rio Grande River using a raft. Once an item of interest has been identified, a Border Patrol agent operating a video surveillance camera can relay information to other agents through direct land-mobile radio communication or through a Border Patrol station Tactical Operations Center. For an example of Border Patrol’s use of tactical aerostat technology, see figure 5.

In its Aerostat Operational Need Statement, Border Patrol’s Rio Grande Valley sector stated that video surveillance cameras on tactical aerostats in flight can be used to detect items of interest across wide coverage areas. The Operational Need Statement further stated that video surveillance cameras on tactical aerostats in flight can enhance surveillance coverage by increasing the elevation and viewing angle to overcome obstructions from vegetation and bends in the Rio Grande River, which limit the use of tower-mounted camera systems. Further, CBP reported that video surveillance cameras on tactical aerostats can view items of interest from distances of 10 to 24 kilometers, depending on the type of aerostat and altitude of operation.

Border Patrol agents also use tactical aerostats in south Texas to support bi-national law enforcement efforts between Border Patrol and the government of Mexico. Border Patrol officials we met with in south Texas told us that information from video surveillance cameras on tactical aerostats shared with the government of Mexico has resulted in seizures of weapons and narcotics. For example, in fiscal years 2014 through 2016, Border Patrol reported over 60,000 pounds of marijuana in known seizures by the government of Mexico with assistance provided by tactical aerostats.

Border Patrol agents use other surveillance technologies to aid in identifying items of interest with tactical aerostat video surveillance cameras. For example, Border Patrol agents may use tactical aerostat and relocatable tower video surveillance cameras concurrently within tactical aerostat command and control centers. According to CBP officials, video surveillance cameras on these towers help to enhance the coverage area and provide continued coverage when aerostats are not in flight. Border Patrol agents assigned to tactical aerostat sites are also able to access information from unattended sensors, which can provide cueing information to locate items of interest with tactical aerostat video surveillance cameras.

Border Patrol plans to use tactical aerostats until Remote Video Surveillance System technology is deployed in Rio Grande Valley and Laredo Border Patrol sectors, but it is too soon to tell when such technology will be deployed. According to CBP officials, as of fiscal year 2016, there are no plans to deploy any additional tactical aerostats. In April 2016, DHS approved deployment of Remote Video Surveillance System technology in two Border Patrol station AORs in Rio Grande Valley sector.
Specifically, CBP plans to deploy 18 Remote Video Surveillance System technology sites in Rio Grande City station’s AOR and 12 in McAllen station’s AOR. According to CBP officials, as of November 2016, CBP had chosen its sites for Remote Video Surveillance System technology deployment in the two AORs and initiated survey studies to plan activities for its chosen sites. CBP has not finalized plans for deployment of Remote Video Surveillance System technology in other areas of Rio Grande Valley or Laredo Border Patrol sectors.

CBP uses TARS to provide domain awareness along U.S. southern borders. TARS captures continuous radar information and detects moving objects passing through its radar coverage area, which can include lawful and unlawful aircraft, vessels, or vehicles entering or approaching the United States. TARS provides radar coverage from an altitude of operation up to 15,000 feet and can detect objects within 200 nautical miles. The elevated radar sensor on the TARS aerostat mitigates the curvature-of-the-earth and terrain-masking limitations of common ground-based radar systems. According to CBP data provided to us for 2015, the eight TARS sites collectively detected approximately 4.73 billion moving objects, such as aircraft and vessels. Data captured from TARS are disseminated to AMOC. Two TARS sites in Florida and Puerto Rico provide constant radar surveillance of vessel traffic to enable maritime domain awareness. One TARS site in Eagle Pass, Texas, is also equipped with a video surveillance camera to assist Border Patrol Del Rio sector’s efforts to detect cross-border illegal activity in its Eagle Pass South Border Patrol station’s AOR.

Information from TARS is primarily used by CBP detection enforcement officers (DEO) at AMOC in combination with other information to detect and identify tracks of interest (TOI), or potential occurrences of illegal air, land, and maritime border incursions (see fig. 6 below). Specifically, radar information captured across all TARS sites is integrated through AMOC’s operational system—the Air and Marine Operations Surveillance System (AMOSS). AMOSS provides a single display showing multiple real-time radar images, including from TARS and FAA and Department of Defense radar systems, as a single fused output or track. AMOSS provides users with information from government databases, such as FAA databases for aircraft registration and flight plan information. CBP’s law enforcement databases are also integrated with AMOSS to provide users with enforcement and case-related information. AMOSS automatically populates information from other government databases, which is available in real time alongside a single radar picture integrating information from multiple radar systems, including TARS. According to CBP officials, the ability of DEOs to detect border incursions through AMOSS is significantly diminished when a TARS site is not operational, since there are limited radar systems near TARS sites and none that can provide similar radar coverage.

As of November 2016, CBP has a study underway to evaluate future use of TARS. As we reported in May 2016, CBP is conducting an analysis of alternatives for the TARS program to inform future decisions related to the legacy program. According to CBP, TARS are obsolete and no longer manufactured or supported and could be out of service by early 2020.
CBP officials told us TARS sites face immediate risks of being out of service earlier than 2020 should a crash occur at a TARS site rendering the radar components inoperable, because there are no spare radar systems available. CBP has no specific plans for replacement or modernization of TARS but is currently conducting an analysis of alternatives to determine whether the agency should modernize or replace them. According to CBP officials, the analysis of alternatives for TARS is expected to be completed during fiscal year 2017. The purpose of the analysis of alternatives is to identify the most appropriate capability that CBP should consider to fill the impending gap in persistent surveillance of the air domain provided by the eight current TARS sites.

CBP has initiated various studies and evaluations to help assess the effectiveness of its Predator B program, and it also collects and tracks various data related to Predator B operations. With regard to studies and evaluations, in October 2015 AMO initiated a capability gap assessment process to evaluate how specific AMO platforms, including Predator B aircraft, contribute to its border security mission. Further, in 2015 AMO also initiated a study to analyze the contributions of its assets and identify metrics for domain awareness along the southwestern U.S. border. AMO’s objectives for this study include (1) defining domain awareness as it pertains to border security; (2) characterizing and measuring AMO operations in terms of their contributions to domain awareness; and (3) developing candidate metrics for domain awareness. In June 2016, as part of this study, AMO released a report that included analysis of the use of the Vehicle and Dismount Exploitation Radar (VADER)—a radar system that collects radar images of moving objects—on Predator B aircraft along the U.S.-Mexico border in Arizona and proposed performance metrics for missions involving domain and situational awareness. AMO officials told us the agency plans to analyze VADER use along the U.S.-Mexico border in Texas as part of this study starting in fiscal year 2017.

In addition, in August 2015 DHS’s Science and Technology Directorate (S&T) initiated a study of CBP’s Predator B program through DHS’s Joint Requirements Council in partnership with AMO. The purpose of this study is to (1) identify a recommended number of Predator B aircraft assigned to operating locations or NASOCs and (2) determine whether expanding the use of Predator B aircraft is the best use of funds for border security. The study was initiated in response to the findings of a 2014 DHS Office of Inspector General report on CBP’s Predator B program and following a January 2015 Acquisition Decision Memorandum that called for CBP to provide the Joint Requirements Council with justification for the number of Predator B aircraft and their assignment to operating locations. In September 2016, DHS released a report based on the S&T-led study that recommended CBP not expand its Predator B program beyond its current nine aircraft based on challenges identified in the report. Challenges identified in DHS’s report included, for example, the availability of UAS pilots and staff to operate, maintain, and repair Predator B aircraft. According to CBP officials, the agency plans to resolve the challenges identified by the S&T-led study and maintain the nine aircraft already in its Predator B program.
With regard to data on Predator B operations, CBP collects and tracks data that can help it monitor operational effectiveness, including (1) asset assists, (2) detections of cross-border illegal activity, (3) the launch rate of Predator B aircraft (i.e., the percentage of launched versus scheduled missions), and (4) annual flight hour goals.

Asset assists. CBP collects and maintains data on known assists for apprehensions and seizures attributed to support by Predator B aircraft. According to CBP data provided to us, Predator B aircraft assisted in the apprehension of 7,951 individuals from fiscal years 2013 through 2016, as shown in table 2. Further, CBP data show Predator B aircraft support assisted with the seizure of 9,190 pounds of cocaine and 223,817 pounds of marijuana from fiscal years 2013 through 2016.

Detections of cross-border illegal activity. CBP tracks the number of detections made using VADER equipped on its Predator B aircraft. CBP began use of VADER at its NASOC in Arizona in fiscal year 2012 and at its NASOC in Texas in fiscal year 2015. In 2014, CBP began collecting data on the number of individuals detected by VADER in relation to cross-border illegal activity. Our analysis of CBP data showed that from fiscal years 2014 through 2016 about 98 percent of these detections (20,858 of 21,384 detections) were attributed to VADER used on Predator B aircraft operated from CBP’s NASOC in Arizona.

Launch rate. CBP tracks the launch rate of its Predator B aircraft. Our analysis of Predator B aircraft launch rate data showed a 69 percent launch rate from fiscal years 2013 through 2016. CBP also tracks reasons for Predator B mission cancellations. Our analysis of Predator B mission cancellations found that 20 percent of all Predator B scheduled missions from fiscal years 2013 through 2016 were canceled due to weather. We also found that CBP’s NASOC in Texas had the highest percentage of weather-related mission cancellations (37 percent) among NASOCs, followed by North Dakota (28 percent) and Arizona (26 percent), for Predator B missions scheduled from fiscal years 2013 through 2016. Other reasons identified for cancellations of scheduled missions from fiscal years 2013 through 2016 include aircraft maintenance (5 percent of scheduled missions) and lack of available crew (4 percent of scheduled missions), among others. For more information on the limitations related to CBP’s use of Predator B aircraft, see appendix III.

Annual flight hour goals. CBP sets and tracks annual Predator B program flight hour goals as part of AMO’s annual operational flight hour planning process. Our analysis of CBP data showed that CBP met 93 percent of its set flight hour goals from fiscal years 2013 through 2016. For more information on CBP’s Predator B flight hour goals, see table 3.

These studies and data collection efforts provide CBP with useful information for assessing the effectiveness of its Predator B program. However, CBP could strengthen these efforts by more consistently recording data on Predator B missions across its operating locations. Our analysis of CBP data found that users recorded information about supported agencies and asset assists inconsistently across Predator B operating locations within its system for recording Predator B mission data. We found that for law enforcement missions in support of multiple agencies and part of operations through joint task forces, users did not consistently record the name of the joint task force supported.
For example, some users listed the supported agency only as “joint task force” without identifying the task force name, while others identified the name of the task force supported. Similarly, we found that some users listed one or more agencies that are members of the Joint Interagency Task Force – South (such as the U.S. Coast Guard or AMO) rather than identifying the task force itself. Users also misidentified supported agencies for other types of missions. For example, users did not consistently record AMO as the supported agency for maintenance and training missions and at times recorded Border Patrol instead as the supported agency.

We also found users did not consistently record information on asset assists for Predator B missions. For example, no asset assist information for seizures of narcotics was recorded for CBP’s NASOC in North Dakota from fiscal years 2013 through 2015 despite its participation in law enforcement operations involving investigations related to the illegal production and distribution of narcotics. For example, in fiscal year 2013, Predator B aircraft launched from the NASOC in North Dakota provided aerial support for investigations related to methamphetamine production and marijuana cultivation; however, because CBP does not consistently record information on asset assists, it is not known whether those investigations resulted in the seizure of narcotics.

CBP’s written guidance for recording Predator B mission data has not been updated since 2014 to include new data variables and system functions added to its system for recording Predator B mission data. Further, it does not include definitions and instructions for collecting and recording information on missions with multiple supported agencies and asset assists for seizures and apprehensions. According to AMO officials, guidance for recording Predator B mission information has not been updated due to resource and time constraints. AMO officials at NASOCs responsible for recording Predator B mission information told us that, in the absence of updated guidance from CBP, they developed local guidance to address changes made to CBP’s system for recording Predator B mission data. However, according to AMO officials, not all data are collected and recorded consistently at NASOCs as a result of differing local guidance. For example, in Arizona, NASOC officials record multiple supported agencies, while NASOC officials in North Dakota record only the agency submitting the air support request even if more than one agency is supported. Moreover, according to AMO officials, not all users across Predator B operating locations have received training to use its system for recording Predator B mission data. While AMO has provided training to users responsible for recording mission information for manned aircraft and vessels in the same system at its operating locations, such as air branches, according to AMO officials, no training has been provided at its NASOCs for Predator B operations. According to AMO officials, a working group of NASOC officials met in 2015 to discuss Predator B mission data collection across its NASOCs and identified the need to record data consistently across operating locations. AMO officials we spoke with told us that limited resources have affected AMO’s ability to develop and implement training across all Predator B operating locations.
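The recording inconsistencies described above are the kind that explicit canonicalization rules—written into guidance or enforced by the data system itself—can reduce. The sketch below is a hypothetical illustration of such a rule; the alias table and field names are ours, not CBP’s data dictionary:

```python
# Hypothetical canonicalization of free-text "supported agency" entries.
# The alias map is illustrative only; real guidance would depend on mission
# context (e.g., whether a member agency should be rolled up to its task force).
CANONICAL_AGENCIES = {
    "joint task force": None,  # too vague: require a specific named task force
    "jtf-w arizona": "Joint Task Force – West, Arizona",
    "jtf-w south texas": "Joint Task Force – West, South Texas",
    "jiatf-s": "Joint Interagency Task Force – South",
    "border patrol": "U.S. Border Patrol",
    "amo": "Air and Marine Operations",
}

def canonicalize(entry: str) -> str:
    """Return the canonical supported-agency name, or reject vague entries."""
    key = entry.strip().lower()
    canonical = CANONICAL_AGENCIES.get(key, entry.strip())
    if canonical is None:
        raise ValueError(f"'{entry}' is ambiguous; record the specific task force name")
    return canonical
```

Encoded this way, an entry like “joint task force” is rejected at the point of data entry rather than surfacing later as an analysis gap.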
According to NASOC officials responsible for recording Predator B mission information, updated guidance and training could help ensure data are collected consistently across locations as the system for recording Predator B mission data is periodically updated with new features and requirements. Standards for Internal Control in the Federal Government calls for pertinent information to be recorded and communicated to management in a form and within a time frame that enables management to carry out internal control and other responsibilities. This includes the accurate recording and reporting of data necessary to demonstrate agency operations. Additionally, internal control standards state that all transactions and other significant events need to be clearly documented and that the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. Without clear guidance and training, users may continue to record Predator B mission data inconsistently for certain mission variables, such as supported agencies and outcomes for asset assists. As a result, CBP is not well positioned to capture complete and consistent data from which to assess the effectiveness of its Predator B program. Updated guidance and training for users on Predator B mission data collection would help CBP ensure consistent recording across all Predator B operating locations to support CBP’s efforts to assess the effectiveness of its Predator B program.

We also found that CBP collects additional information on its use of Predator B aircraft for non-CBP law enforcement agencies, but this information is not linked to data in its system for recording Predator B mission data. As mentioned previously, CBP uses Predator B aircraft to conduct missions in support of a variety of government agencies, and requests for such support are submitted to AMO as air support or mission requests. AMO Procedure 2014-07 includes procedures for submission and review of requests for Predator B aircraft from non-CBP law enforcement agencies; it states that Predator B air support requests provide for oversight to ensure that mission requests are vetted and prioritized, and it requires that an electronic archive be kept of all requests. CBP’s procedures require requests from non-CBP law enforcement agencies for Predator B aircraft support to be submitted through an air support request form, which is reviewed and approved by CBP. Although CBP has a process for submission and review of air support requests for Predator B aircraft by non-CBP law enforcement agencies, these air support request forms are not recorded in its system for recording Predator B mission data. Standards for Internal Control in the Federal Government notes that internal controls are an integral part of each system that management uses to regulate and guide its operations, and that communication of information and control activities are an integral part of an entity’s planning, implementing, review, and accountability for stewardship of government resources and achieving effective results. Furthermore, the standards state that control activities, which include a wide range of diverse actions and maintenance of related records, need to be clearly documented and help to ensure that all transactions are completely and accurately recorded. CBP currently tracks air support request forms submitted from non-CBP law enforcement agencies for Predator B missions using a separate internal system.
However, by not logging or recording these requests in its system for recording Predator B mission data, CBP does not document in a single system complete information about its use of Predator B aircraft for non-CBP law enforcement purposes, such as the names of the state and local government agencies supported, making it more difficult for CBP to analyze its Predator B mission data. While CBP currently documents requests, recording air support request forms in its system for recording Predator B mission data could allow CBP to further facilitate implementation of provisions included in the Trade Facilitation Act by ensuring such requests are documented in the same system as mission data. Specifically, these provisions require CBP to develop a process and procedure for the submission, approval, prioritization, and coordination of air support requests from non-CBP law enforcement agencies for command, control, communication, surveillance, and reconnaissance assistance through unmanned aerial systems. By documenting Predator B air support request forms in its system for recording Predator B mission data, CBP could also better oversee implementation of procedures related to Predator B air support requests along with mission information.

CBP has conducted two evaluations of tactical aerostats and collects and tracks data on its use of tactical aerostats. First, with regard to the two evaluations, CBP conducted these to inform its initial deployment in 2012 and its continued use in 2014. In August 2012, CBP, in collaboration with the Department of Defense, conducted an initial evaluation of the operational utility of tactical aerostats in Border Patrol’s Rio Grande Valley sector. CBP completed a second evaluation of its use of tactical aerostats in July 2014 to further assess the operational utility and functionality of tactical aerostats within Border Patrol’s Rio Grande Valley sector. The purpose of the second evaluation was to support Border Patrol’s continued use of tactical aerostats by examining operational effectiveness and comparing the capabilities and performance of tactical aerostats with other existing surveillance technology—for example, towers equipped with video surveillance cameras and ground sensors. CBP concluded that tactical aerostats enhance Border Patrol’s situational awareness; are deployable over typical terrain with proper planning and site preparation; are operable and maintainable in the operational environment; and are a visual deterrent to transnational criminal organizations. CBP also concluded that tactical aerostats contributed to its operational effectiveness and that they provide greater cumulative surveillance coverage than towers or other ground-based systems.

Second, with regard to data, CBP collects and tracks data on its use of tactical aerostats to assess effectiveness, including data on (1) operational availability (i.e., the percentage of time an aerostat is in flight providing surveillance data to CBP), (2) asset assists for apprehensions of individuals and seizures of narcotics, and (3) the number of detections of items of interest.

Operational availability. According to CBP data provided to us, tactical aerostat operational availability from May 2015 through fiscal year 2016 across all sites averaged 62 percent, with a range of 51 to 81 percent. CBP set a performance goal of greater than 60 percent for tactical aerostat operational availability.
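As a simple illustration of this metric: operational availability over a period is the ratio of time the aerostat is in flight providing surveillance data to the total time in the period. A minimal sketch with hypothetical numbers (the interval values are ours, not CBP’s):

```python
from datetime import timedelta

def operational_availability(surveillance_intervals, period: timedelta) -> float:
    """Percentage of the period during which an aerostat was in flight
    providing surveillance data."""
    uptime = sum(surveillance_intervals, timedelta())
    return 100.0 * (uptime / period)

# Hypothetical month: 18.5 days of cumulative surveillance time out of 30 days.
availability = operational_availability([timedelta(days=18.5)], timedelta(days=30))
print(f"{availability:.0f}%")  # ~62%, comparable in scale to the reported fleet average
```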
A variety of factors can influence operational availability of a tactical aerostat, including equipment maintenance and weather, as aerostat flight is subject to weather restrictions. See appendix III for more information about limitations on CBP’s use of tactical aerostats.

Asset assists. Border Patrol collects and maintains data on known assists for apprehensions and seizures attributed to support provided by tactical aerostats. Border Patrol collects data on asset assists for known seizures and apprehensions using DHS’s Enforcement Integrated Database (EID), which includes an asset assist data field in which Border Patrol agents can specify whether an asset, such as a tower or ground sensor, contributed to the apprehension of individuals and the seizure of drugs and other contraband. In May 2014, CBP added an asset assist data field in EID to capture known asset assist data for “aerostats.” Our analysis of CBP data showed that a majority of asset assists attributed to aerostats for the apprehension of individuals and seizure of narcotics were in Rio Grande Valley Border Patrol sector from May 2014 through fiscal year 2016. Specifically, 98 and 99 percent of total asset assists for apprehensions of individuals and seizures of narcotics, respectively, were recorded in Rio Grande Valley Border Patrol sector. For more information on asset assists attributed to aerostats for seizures of narcotics and apprehensions of individuals, see appendix IV.

Detections of items of interest. Border Patrol also uses an internal system to collect information on the number of detections of items of interest made with video surveillance cameras at each tactical aerostat site. Border Patrol developed this system for agents operating tactical aerostat surveillance cameras to record detections made at each site. According to Border Patrol officials, detections recorded by agents are detections of cross-border illegal activity and exclude other activities observed through tactical aerostat surveillance cameras, such as recreational activities like fishing. Table 4 summarizes the number of detections recorded across all tactical aerostat sites from fiscal years 2014 through 2016.

Standards for Internal Control in the Federal Government states that agencies should promptly and accurately record transactions to maintain their relevance and value for management decision making. However, in collecting and recording asset assist information, Border Patrol does not distinguish between tactical aerostats and TARS. Border Patrol issued guidance in May 2016 to users for recording tactical aerostat asset assist information in EID, which stated users were to include tactical aerostats and TARS under the same “aerostat” asset assist data field. Without a separate mechanism to record tactical aerostat asset assists, users could potentially misattribute asset assist information related to tactical aerostats to TARS, limiting the completeness and usefulness of the data. For example, CBP has two tactical aerostats and one TARS deployed in Rio Grande City Border Patrol station’s AOR (see figure 3), and the two types of systems provide distinct types of support when assisting with, for example, seizures and apprehensions. By recording asset assist information in EID for TARS and tactical aerostats under the same data field, CBP is not able to determine the contribution of each separate program.
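One minimal way to see the fix is a dedicated asset-type value per program rather than a single shared “aerostat” value. The enumeration below is a hypothetical illustration of that idea, not EID’s actual schema:

```python
from enum import Enum

# Hypothetical asset-type codes; EID's actual data fields differ.
class AssistAsset(Enum):
    TACTICAL_AEROSTAT = "tactical_aerostat"
    TARS = "tars"
    TOWER = "tower"
    GROUND_SENSOR = "ground_sensor"

def tally_assists(records):
    """Count asset assists by asset type so each program's contribution is separable."""
    counts = {}
    for record in records:
        asset = AssistAsset(record["asset"]).value  # rejects values outside the enumeration
        counts[asset] = counts.get(asset, 0) + 1
    return counts

# With separate codes, a seizure assisted by a tactical aerostat can no longer be
# conflated with one assisted by TARS deployed in the same station's AOR.
print(tally_assists([{"asset": "tars"}, {"asset": "tactical_aerostat"}]))
```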
According to Border Patrol officials, the agency has not yet taken actions to develop a mechanism to collect asset assist information that distinguishes between TARS and tactical aerostats, but such a mechanism would be useful and could be developed outside of EID. In addition, according to Border Patrol officials, asset assist data in conjunction with detection information have been used to evaluate tactical aerostat site deployment locations relative to the proximity of cross-border illegal activity. Specifically, CBP officials told us the data were used to inform the relocation of a tactical aerostat site due to changes in cross-border illegal activity in Rio Grande Valley Border Patrol sector. Given the constant change in cross-border illegal activity, the costs associated with redeployment of tactical aerostat sites (from $60,000 to over $100,000 per site), and the time needed to locate and obtain access to land for redeployment, data collection practices that distinguish between asset assists associated with tactical aerostats and TARS would help CBP ensure its data are complete and better guide resource allocation decisions.

CBP collects and tracks data on its use of TARS to assess effectiveness, including (1) operational availability, or the percentage of time an aerostat is in flight providing surveillance data to CBP; (2) tracks of interest (TOI) identified using TARS; and (3) asset assists associated with TOIs from TARS. Specifically:

Operational availability. CBP collects performance information on TARS at each site by tracking operational availability. CBP established a performance goal of greater than 64 percent operational availability across all eight TARS sites. According to CBP data provided to us, operational availability across all TARS sites for fiscal years 2013 through 2016 ranged from 59 to 61 percent, with a range across each site of 34 to 85 percent. Table 5 shows operational availability of TARS sites from fiscal years 2013 through 2016. According to CBP data provided to us, weather-related TARS downtime resulted in an average 30 percent reduction in operational availability across all TARS sites from fiscal years 2013 through 2016. Other factors that affect operational availability include, for example, maintenance or the incursion of an aircraft into the restricted airspace encompassing a TARS site.

TOIs. CBP collects and tracks information on TOIs identified by DEOs at AMOC using information from TARS. According to CBP data, DEOs identified 1,989 TOIs from TARS from fiscal years 2013 through 2016, as shown in table 6. Further, CBP data show that TARS were used to identify 50 to 63 percent of total TOIs detected by AMOC along the southwest border from fiscal years 2013 through 2016. From fiscal years 2013 through 2016, over 90 percent of TOIs detected were short landings in Mexico and aircraft border incursions. Specifically, we found 73 percent (range of 55 to 80 percent) of TOIs were short landings and 20 percent (range of 15 to 30 percent) were aircraft border incursions. See table 6 for more information on TOIs identified by DEOs at AMOC.

Asset assists. CBP collects known asset assist information on TOIs identified by DEOs using TARS by coordinating with mission partners. The majority of asset assists were violations issued related to federal regulations for operation of aircraft entering, exiting, and flying in U.S. airspace.
From fiscal years 2013 through 2016, CBP identified 377 violations of federal regulations attributed to TOIs detected by DEOs through information from TARS—for example, pilot deviations related to penetration of restricted airspace without permission. CBP also reported 14 arrests and 40 seizures attributed to TOIs identified through TARS. See table 7 for more information on known asset assists associated with TOIs identified using TARS from fiscal years 2013 through 2016.

CBP uses a variety of technologies and assets to secure the border, including Predator B UAS. CBP’s Predator B aircraft are used as national assets and support numerous federal and non-federal government agencies. CBP established and implemented collaborative efforts to coordinate its Predator B operations among the government agencies it supports. By documenting its procedures in all operating locations, CBP could better oversee coordination procedures as use changes based on needs and cross-border illegal activity. CBP currently has ongoing efforts to assess the effectiveness of its Predator B program. However, improving its collection of Predator B mission data by updating guidance and implementing training would help to ensure information is collected and recorded in a standardized and consistent way across all operating locations and could also further its efforts to assess the effectiveness of its Predator B program. CBP uses tactical aerostats in south Texas to support Border Patrol’s ability to detect and interdict cross-border illegal activity. CBP’s tactical aerostats may be relocated as cross-border illegal activity changes, and collecting data on the effectiveness of these assets could help better guide CBP’s use and deployment of them. Border Patrol currently collects information on asset assists for its tactical aerostat program for apprehensions and seizures, but this information does not distinguish asset assists from CBP’s other aerostat program—TARS. Revising its collection of asset assist data to distinguish between tactical aerostats and TARS would better position CBP to support its use and deployment of tactical aerostats.

To improve its efforts to coordinate Predator B operations among supported agencies and assess the effectiveness of its Predator B and tactical aerostat programs, we recommend that the Commissioner of CBP take the following five actions:

Develop and document procedures for Predator B coordination among supported agencies in all operating locations.

Update and maintain guidance for recording Predator B mission information in its data collection system.

Provide training to users of CBP’s data collection system for Predator B missions.

Record air support request forms for Predator B mission requests from non-CBP law enforcement agencies in its data collection system for Predator B missions.

Update Border Patrol’s data collection practices to include a mechanism to distinguish and track asset assists associated with TARS separately from those associated with tactical aerostats.

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix V, and technical comments, which we incorporated as appropriate. DHS concurred with the five recommendations in the report and described actions to address them, as noted below. With regard to the first recommendation, related to developing and documenting procedures for Predator B coordination among supported agencies in all operating locations, DHS concurred.
DHS stated that CBP plans to develop and implement an operations coordination structure and document its coordination procedures for Predator B operations through Joint Task Force – West, South Texas by September 30, 2018. DHS also stated that CBP plans to document its coordination procedures for Predator B operations through its NASOC in Grand Forks, North Dakota, by September 2017. With regard to the second recommendation, related to updating and maintaining guidance for recording Predator B mission information in its data collection system, DHS concurred and stated that CBP will take actions to update and maintain guidance for recording Predator B mission information, including incorporating a new functionality in its data collection system to include tips and guidance for recording Predator B mission information and updating the user manual for its data collection system by September 2019. With regard to the third recommendation, related to providing training to users of CBP’s data collection system for Predator B missions, DHS concurred and stated that CBP is developing a schedule to train users of its data collection system for Predator B mission information. DHS provided an estimated completion date for the training of September 2018. With regard to the fourth recommendation, related to recording air support request forms for Predator B mission requests from non-CBP law enforcement agencies in its data collection system for Predator B missions, DHS concurred and stated that AMO plans to develop a process and disseminate guidance to users explaining how to maintain Predator B air support request forms in its data collection system for Predator B missions. DHS provided an estimated completion date of September 2017. With regard to the fifth recommendation, related to updating Border Patrol’s data collection practices to include a mechanism to distinguish and track asset assists associated with TARS separately from those associated with tactical aerostats, DHS concurred and stated that Border Patrol is making improvements to capture data to ensure asset assists are properly reported and attributed to TARS and tactical aerostats. DHS provided an estimated completion date of September 2017. DHS’s planned actions, if implemented effectively, should address the intent of our recommendations.

If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

CBP has taken steps to implement a small UAS program to support the U.S. Border Patrol (Border Patrol). Specifically:

CBP collaborated with the Department of Homeland Security’s Science and Technology Directorate (S&T) to demonstrate and analyze the potential benefits, limitations, and risks of operating existing commercial off-the-shelf small UAS technology. Starting in June 2014, through its Robotic Aircraft for Public Safety (RAPS) project, S&T evaluated more than 22 small UAS platforms to assess the extent to which they could provide situational awareness in support of Border Patrol. In July 2016, S&T’s Silicon Valley Office released a solicitation for small UAS capabilities and technologies to augment CBP and Border Patrol mission capabilities. In fiscal year 2017, S&T initiated a project to perform assessments of small UAS sensors, similar to RAPS, to fill or mitigate identified Border Patrol operational capability gaps.
Border Patrol officials told us the agency plans to continue collaborating with S&T to leverage results from past and ongoing work for asset selection, training, and concepts of operation in fiscal year 2017.

CBP established a memorandum of agreement with the Federal Aviation Administration for small UAS operations in September 2016. The memorandum of agreement specifies provisions for operation of small UAS by CBP in the national airspace system, including advance notification requirements prior to each launch and operations at an altitude at or below 1,200 feet.

In September 2016, CBP conducted a small UAS feasibility assessment in El Paso Border Patrol sector. The purpose was to assess the capability of small UAS technology to provide situational awareness in support of Border Patrol’s mission along the southwest U.S.-Mexico border and the ability of Border Patrol agents to employ small UAS. The assessment was a joint effort between CBP’s Border Patrol and Air and Marine Operations and included two small UAS platforms provided by the U.S. Army.

This report addresses the following questions: (1) How does U.S. Customs and Border Protection (CBP) use unmanned aerial systems (UAS) and aerostats for border security activities, and to what extent has CBP developed and documented procedures for UAS coordination? (2) To what extent has CBP taken actions to assess the effectiveness of its UAS and aerostats for border security activities? For the purposes of our report, UAS includes Predator B UAS, and aerostats include tactical aerostats and the Tethered Aerostat Radar System (TARS) program. As of November 2016, CBP was also developing a small UAS program to support the U.S. Border Patrol (Border Patrol); see appendix I.

To address both questions, we reviewed Department of Homeland Security (DHS) and CBP strategic plans, such as DHS’s fiscal years 2014 to 2018 strategic plan, CBP’s fiscal years 2015 to 2020 strategic plan, Border Patrol’s fiscal years 2012 through 2016 strategic plan, and Air and Marine Operations’ (AMO) strategic plan. We also reviewed DHS strategies for border security, such as its Campaign Plan for Securing the U.S. Southern Border and Approaches 2015 to 2018 and its Northern Border Strategy. In addition, we reviewed DHS and CBP documents related to UAS, aerostats, and surveillance technology—for example, plans, reports, concepts of operation, operational requirements, and standard operating procedures. We also reviewed relevant laws; Federal Aviation Administration (FAA) procedures, requirements, and regulations for operation of UAS and aerostats in the national airspace system; and past GAO work related to UAS, aerostats, and surveillance technologies used for border security activities.

We also conducted site visits to observe CBP’s use of Predator B UAS and aerostats in Arizona, Texas, North Dakota, and California and interviewed CBP officials responsible for their operation. Specifically:

Arizona: We visited AMO’s National Air Security Operations Center (NASOC) in Sierra Vista, Arizona, to observe Predator B operations and the TARS site at Fort Huachuca, Arizona, and met with CBP officials responsible for their operation.

Texas: We visited AMO’s NASOC in Corpus Christi, Texas, to observe Predator B operations and three of the six total tactical aerostat sites in south Texas to observe their use, and met with CBP officials responsible for their operations. The three tactical aerostat sites we visited included one of each type of tactical aerostat used by CBP.
North Dakota: We visited AMO’s NASOC in Grand Forks, North Dakota, to observe Predator B operations and met with responsible CBP officials.

California: We visited CBP’s Air and Marine Operations Center in Riverside, California, to observe use of the Air and Marine Operations Surveillance System, which includes radar information from TARS sites used to detect border incursions, and met with responsible CBP officials.

Findings from our observations and interviews during our site visits cannot be generalized to all occurrences of CBP’s use of Predator B UAS and aerostats, but they provided useful insights about the operations of these assets.

To determine how CBP uses Predator B UAS and aerostats, we reviewed and analyzed CBP data and information and interviewed responsible CBP program officials. Specifically:

For Predator B UAS, we reviewed and analyzed CBP data on Predator B aircraft flight hours and types of mission activities from fiscal years 2013 through 2016, a time period for which data were available in CBP’s current system for recording Predator B mission data—the Tasking, Operations, and Management Information System. We compared CBP’s available documentation of its procedures for coordination of its use of Predator B UAS with Standards for Internal Control in the Federal Government. In addition, we reviewed provisions included in the Trade Facilitation and Trade Enforcement Act of 2015 requiring the establishment of standard operating procedures for CBP’s UAS program to address coordination of mission requests, among other things.

For tactical aerostats, we reviewed CBP documents, such as Rio Grande Valley Border Patrol sector’s Fiscal Year 2015 Aerostat Operational Need Statement, and information on bi-national law enforcement efforts between Border Patrol and the government of Mexico from fiscal years 2014 through 2016, a time period for which information was available.

For TARS, we reviewed information on CBP’s Air and Marine Operations Surveillance System and the ongoing analysis of alternatives for the TARS program.

To determine how CBP assesses the effectiveness of its Predator B UAS and aerostats for border security activities, we analyzed CBP performance assessment documentation, such as reports and concepts of operation, related to use of Predator B UAS and aerostats and interviewed CBP officials responsible for performance measurement activities. We also analyzed and reviewed CBP data on Predator B UAS and aerostat performance for assists provided for apprehensions of individuals, seizures of narcotics, and other events (asset assists). Specifically:

For Predator B UAS, we analyzed asset assist, launch rate, and flight hour goal data from fiscal years 2013 through 2016, a time period for which data were available in CBP’s current system of record for Predator B mission data—the Tasking, Operations, and Management Information System.

For TARS, we reviewed summary data on asset assists and operational availability from fiscal years 2013 through 2016, the time period starting when CBP assumed control of the program from the Department of Defense.

For tactical aerostats, we analyzed asset assist data from May 2014 through fiscal year 2016, a time period starting when CBP began collecting information in Border Patrol’s system of record for assist information—e3 within DHS’s Enforcement Integrated Database. We also reviewed summary data on tactical aerostat operational availability from May 2015 through fiscal year 2016, a time period for which data were available.
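As an illustration of the kind of summary computation these analyses involve—for example, the launch rate reported earlier, defined as launched versus scheduled missions—a minimal sketch with hypothetical counts (the numbers are ours, not CBP’s):

```python
# Hypothetical per-fiscal-year mission counts; not CBP data.
missions = {
    2013: {"scheduled": 500, "launched": 335},
    2014: {"scheduled": 520, "launched": 370},
}

def launch_rate(counts) -> float:
    """Launch rate across all years: launched missions as a percentage of scheduled."""
    scheduled = sum(year["scheduled"] for year in counts.values())
    launched = sum(year["launched"] for year in counts.values())
    return 100.0 * launched / scheduled

print(f"{launch_rate(missions):.0f}%")  # ~69% with these illustrative counts
```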
To determine the reliability of Predator B and aerostat data, we examined the data for any anomalies, reviewed CBP guidance and documents, and interviewed CBP officials to understand their methods for collecting, reporting, and validating the data. We found these data were reliable for our purposes of reporting summary data across fiscal years 2013 through 2016. We assessed CBP’s data collection and management practices for Predator B UAS and tactical aerostat usage and performance data against the management standards and practices contained in Standards for Internal Control in the Federal Government.

To identify costs associated with UAS and aerostats, we obtained and reviewed financial summary data and cost per flight hour information for Predator B UAS, TARS, and tactical aerostats. These data include summary information on end-of-year obligations and expenditures by cost category compiled by CBP for fiscal year 2015, the most recent fiscal year for which complete data were available. We also reviewed Aviation Governance Board Bulletin 2015-001: DHS Standard Aviation Comparable Cost per Flight Hour Reporting Methodology and supporting documentation to determine how CBP applied that methodology to develop its fiscal year 2015 cost per flight hour for Predator B UAS, TARS, and tactical aerostats. We determined that CBP’s summary financial and obligation data and cost per flight hour information for its Predator B, tactical aerostat, and TARS programs were sufficiently reliable for our reporting purposes.

We conducted this performance audit from November 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Based on our review of data and interviews with CBP officials, CBP’s use of Predator B aircraft is limited, in part, by weather and airspace access. For example:

Weather. Hazardous weather, including storms and cloud cover, can directly affect takeoff and landing of Predator B aircraft at Air and Marine Operations (AMO) National Air Security Operations Centers (NASOC). As previously mentioned, weather accounted for 20 percent of Predator B mission cancellations from fiscal years 2013 through 2016. Hazardous weather can also limit where Predator B aircraft operate away from the NASOC launch site. According to CBP officials we spoke with in North Dakota, weather in locations distant from where a Predator B aircraft is launched can also limit operating locations, as there are no diversion airports along the U.S.-Canadian border for CBP’s Predator B aircraft to land. CBP took steps to mitigate the impact of hazardous weather in January and February 2016 by deploying Predator B aircraft from Corpus Christi, Texas, to San Angelo Regional Airport in San Angelo, Texas, which had favorable weather conditions. CBP’s deployment of Predator B aircraft at San Angelo Regional Airport was in accordance with a Federal Aviation Administration (FAA)-issued Certificate of Waiver or Authorization (COA) to conduct its border security mission in Texas, lasted approximately 3 weeks, and was supported by an existing AMO Air Unit co-located at San Angelo Regional Airport.
According to AMO officials, three Predator B aircraft are scheduled to be deployed to San Angelo Regional Airport from January 2017 through April 2017.

National airspace access. CBP's Predator B aircraft operations are limited to COA-designated and restricted airspace. To fly outside of COA-designated airspace, CBP must request and receive an addendum to an existing COA or submit a new COA request to FAA. CBP's Predator B operations in special use or restricted airspace are subject to agreements established between CBP and the Department of Defense, which may limit airspace access to certain time periods. According to CBP officials we spoke with in Arizona, Predator B flights are often excluded from restricted airspace managed by the Department of Defense along border areas, which can affect the ability of Predator B aircraft to support U.S. Border Patrol (Border Patrol). For example, CBP officials told us Predator B aircraft infrequently support the Yuma Border Patrol sector because portions of its area of responsibility (AOR) along the border are covered by an area of restricted airspace managed by the U.S. Marine Corps, which typically limits Predator B aircraft access to one hour or less per day. CBP officials also told us that CBP infrequently flies Predator B aircraft in southern California due to airspace restrictions related to the volume of commercial air traffic in and around Los Angeles and San Diego, California.

Based on our review of documents and interviews with CBP officials, CBP's use of tactical aerostats is limited, in part, by weather, access to airspace, and real estate. For example:

Weather. According to CBP officials, weather has the greatest impact on CBP's ability to use tactical aerostats; storms, in particular, require CBP to cease flight operations. For example, in May 2015, a tactical aerostat broke free from its tether as a result of severe winds during a thunderstorm.

Access to airspace. We also found that airspace access can affect CBP's ability to deploy and use tactical aerostats in south Texas, as aerostat site placement is subject to FAA approval to ensure the aerostat does not interfere with dedicated flight paths. According to CBP, airspace access is limited in the Rio Grande Valley Border Patrol sector's Weslaco and Rio Grande City stations' AORs.

Real estate. Aerostat sites used by CBP may involve access to private property, and according to CBP, it would generally seek landowner approval prior to placement. According to CBP, tactical aerostat sites are used at no cost to the government, and challenges may arise with respect to the processes by which landowners can request compensation for use of their property or restoration of the land to its original condition. In addition, according to CBP officials, CBP must take into consideration any relevant environmental and wildlife impacts prior to deployment of a tactical aerostat, such as flood zones, endangered species, and migratory animals, among others.

The Department of Homeland Security's (DHS) Enforcement Integrated Database includes a field that enables U.S. Border Patrol (Border Patrol) agents to identify whether a technological or nontechnological asset assisted in the apprehension of illegal entrants or the seizure of drugs or other contraband. This appendix provides summary statistics on the reporting of asset assists by Border Patrol agents for the data field "aerostat" across all Border Patrol sectors from May 2014 through fiscal year 2016.
As mentioned earlier, the asset assist data field "aerostat" does not distinguish whether an asset assist identified by a Border Patrol agent came from U.S. Customs and Border Protection's (CBP) tactical aerostats or from its Tethered Aerostat Radar System program. As of fiscal year 2016, CBP had deployed one tactical aerostat in the Laredo Border Patrol sector and five tactical aerostats in the Rio Grande Valley Border Patrol sector. CBP's Tethered Aerostat Radar System Program includes sites in Yuma, Arizona; Fort Huachuca, Arizona; Deming, New Mexico; Marfa, Texas; Eagle Pass, Texas; Rio Grande City, Texas; Cudjoe Key, Florida; and Lajas, Puerto Rico.

As shown in table 8, the majority of apprehension and seizure events with asset assists attributed to aerostats occurred in the Rio Grande Valley sector from May 2014 through fiscal year 2016 (98 and 99 percent, respectively). As shown in table 9, the majority of seizure events and the largest quantity of narcotics seized occurred in the Rio Grande Valley sector from May 2014 through fiscal year 2016. Specifically, asset assists attributed to aerostats in the Rio Grande Valley sector accounted for 674 seizure events, including 257,692 pounds of marijuana and 129 pounds of cocaine.

In addition to the contact named above, Kirk Kiester (Assistant Director), Chuck Bausell, Eric Hauswirth, Daniel McKenna, Amanda Miller, Sasan J. (Jon) Najmi, and Carl Potenzieri made key contributions to this report.

Border Security: DHS Surveillance Technology Unmanned Aerial Systems and Other Assets. GAO-16-671T. Washington, D.C.: May 24, 2016.

Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016.

Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016.

Border Security: Progress and Challenges in DHS's Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015.

Unmanned Aerial Systems: Department of Homeland Security's Review of U.S. Customs and Border Protection's Use and Compliance with Privacy and Civil Liberty Laws and Standards. GAO-14-849R. Washington, D.C.: September 30, 2014.

Border Security: Opportunities Exist to Strengthen Collaborative Mechanisms along the Southwest Border. GAO-14-494. Washington, D.C.: June 27, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-411T. Washington, D.C.: March 12, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014.

Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts. GAO-13-653T. Washington, D.C.: June 27, 2013.

Border Security: DHS's Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013.

Border Security: Opportunities Exist to Ensure More Effective Use of DHS's Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012.

U.S. Customs and Border Protection's Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011.

Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.
As the lead federal agency charged with securing U.S. borders, the Department of Homeland Security's (DHS) CBP has employed a variety of technologies and assets to assist with its border security efforts. In support of its mission, CBP operates a fleet of remotely piloted Predator B UAS and uses aerostats, including tactical aerostats and TARS. GAO was asked to review CBP's use of UAS and aerostats for border security. This report addresses the following questions: (1) How does CBP use UAS and aerostats for border security activities, and to what extent has CBP developed and documented procedures for UAS coordination? (2) To what extent has CBP taken actions to assess the effectiveness of its UAS and aerostats for border security activities? GAO reviewed CBP documents; analyzed Predator B UAS, tactical aerostat, and TARS data on use and effectiveness from fiscal years 2013 through 2016; interviewed field and headquarters officials; and conducted site visits to observe CBP's use of UAS and aerostats along U.S. borders.

U.S. Customs and Border Protection (CBP) uses Predator B unmanned aerial systems (UAS) for a variety of border security activities but could benefit from documented coordination procedures in all operating locations. CBP uses its Predator B UAS to support a variety of efforts, such as missions supporting investigations in collaboration with other government agencies (e.g., U.S. Immigration and Customs Enforcement) and missions to locate individuals illegally crossing the border. GAO found that CBP established various mechanisms to coordinate with other agencies for Predator B missions but did not develop and document coordination procedures in two of its three operational centers. Without documented coordination procedures in all operating locations consistent with internal control standards, CBP does not have reasonable assurance that practices in all operating locations align with existing policies and procedures for joint operations with other federal and non-federal government agencies.

CBP uses aerostats—unmanned buoyant craft tethered to the ground and equipped with video surveillance cameras and radar technology—to support its border security activities along the southern U.S. border. In south Texas, the U.S. Border Patrol (Border Patrol) uses relocatable tactical aerostats equipped with video surveillance technology to locate and support the interdiction of cross-border illegal activity. At eight fixed sites across the southern U.S. border and in Puerto Rico, CBP uses the Tethered Aerostat Radar System (TARS) program to support its efforts to detect illegal border incursions by aircraft and maritime vessels.

CBP has taken actions to assess the effectiveness of its UAS and aerostats for border security but could improve its data collection. CBP collects a variety of data on its use of Predator B UAS, tactical aerostats, and TARS, including data on their support for the apprehension of individuals, seizure of drugs, and other events (asset assists). For Predator B UAS, GAO found that mission data—such as the names of supported agencies and asset assists for seizures of narcotics—were not recorded consistently across all operational centers, limiting CBP's ability to assess the effectiveness of the program. CBP has not updated its guidance for collecting and recording mission information in its data collection system to include new data elements added since 2014, and it does not have instructions for recording mission information such as asset assists.
In addition, not all users of CBP's system have received training for recording mission information. Updating guidance and fully training users, consistent with internal control standards, would help CBP better ensure the quality of the data it uses to assess effectiveness. For tactical aerostats, GAO found that Border Patrol's collection of asset assist information for seizures and apprehensions does not distinguish between its tactical aerostats and TARS. Consistent with internal control standards, data that distinguish between support provided by tactical aerostats and support provided by TARS would help CBP collect better and more complete information and guide resource allocation decisions, such as the re-deployment of tactical aerostat sites based on changes in cross-border illegal activity.

GAO is making five recommendations, including that CBP document coordination procedures for Predator B operations in all operating locations, update guidance and implement training for the collection of Predator B mission data, and update Border Patrol's data collection practices for aerostat asset assists. CBP concurred and identified planned actions to address the recommendations.
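As a purely illustrative sketch of the distinction GAO recommends, the Python fragment below records an assist against separate tactical aerostat and TARS categories rather than a single undifferentiated "aerostat" value. The categories and field names are assumptions made for illustration; they are not Border Patrol's actual e3 data elements.

    from enum import Enum

    class AssistAsset(Enum):
        # Hypothetical categories; Border Patrol's current data field records
        # a single, undifferentiated "aerostat" value.
        TACTICAL_AEROSTAT = "tactical_aerostat"
        TARS = "tars"

    def record_assist(event_id: str, asset: AssistAsset) -> dict:
        # Return an assist record that distinguishes tactical aerostats from TARS.
        return {"event_id": event_id, "assist_asset": asset.value}

    # With distinct values, later summaries can attribute assists to the correct
    # program and inform decisions such as re-deploying tactical aerostat sites.
    print(record_assist("EV-0001", AssistAsset.TACTICAL_AEROSTAT))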
To ensure core capability is maintained, Congress enacted Section 2464 of Title 10, which requires, in part, that the Secretary of Defense maintain a core logistics capability that is government owned and government operated and that uses government personnel, equipment, and facilities. The authority and responsibility of the Secretary of Defense under Section 2464 has been delegated to the Under Secretary of Defense for Acquisition, Technology, and Logistics. Statutory guidance and DOD's implementing guidance are aimed at ensuring that repair capabilities will be available to meet the military needs of the nation should an emergency or contingency arise (i.e., surge situations). The concept of core capability helps guide government policy on which activities DOD should perform at a military depot and which activities the private sector could or should perform.

DOD's two-pronged approach to implementing the core statute includes (1) the biennial core determination process for capturing and reporting core capability requirements and associated planned workloads for fielded systems and (2) the acquisition process for identifying and establishing core capability for new systems and those undergoing modifications. The following summarizes these processes.

DOD's 2007 biennial core determination process began with a December 2005 tasking letter from the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to the services. The letter included guidance on the process and required the services to provide proposed depot maintenance core capability requirements and sustaining workloads for fiscal year 2007. The 2005 guidance in the tasking letter generally mirrored subsequent guidance issued in January 2007. The DOD core determination process comprises a series of mathematical computations and adjustments that are used to derive required core capability requirements and the associated planned workloads expected to be available to sustain those capabilities. The computations involved in this methodology are performed from the perspective of the service that owns the depot maintenance assets and are divided into two parts.

Part 1 identifies the core capability requirements for DOD weapon systems. The services identify applicable weapon systems based on the JCS contingency scenarios. The JCS scenarios represent plans for responding to conflicts that may occur in the future. All systems required to execute the JCS scenarios are to be included in the core determination process regardless of whether depot maintenance is actually performed in the public or private sector. The services may exclude some systems for several allowable reasons (e.g., special access programs); these exclusions are documented, citing the authority for each exclusion from the core process. After the applicable weapon systems are identified, the services compute annual peacetime depot maintenance capability requirements in direct labor hours to represent the amount of time it regularly takes to perform required maintenance, and a number of adjustments to these computations are then applied. Contingency requirements and resource adjustments are made to account for applicable surge factors during the different phases of a contingency (for example, preparation/readiness, sustainment, and reconstitution). The objective is to determine the most appropriate composite "surge" adjustment for a particular set of circumstances. Further adjustments are made to account for redundancy in depot capability.
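To make the arithmetic concrete, the following Python sketch mirrors the sequence of Part 1 computations just described: annual peacetime direct labor hours are scaled by a composite surge factor and then reduced by adjustments for redundancy and for interservice support (both adjustments are discussed further below). All values are invented for illustration and are not drawn from DOD's core capability worksheets.

    # Illustrative Part 1 computation; every value below is invented.
    peacetime_hours = 100_000        # annual peacetime depot maintenance, in direct labor hours
    composite_surge_factor = 1.25    # composite adjustment across contingency phases
                                     # (preparation/readiness, sustainment, reconstitution)
    redundancy_adjustment = 8_000    # hours covered by similar capability for another system
    interservice_adjustment = 5_000  # hours to be supported by another service's depots

    core_requirement = (peacetime_hours * composite_surge_factor
                        - redundancy_adjustment
                        - interservice_adjustment)
    print(f"Core capability requirement: {core_requirement:,.0f} direct labor hours")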
For example, a service may determine that repair capabilities for specific systems maintained in military depots are so similar that the capabilities for one system can effectively satisfy the capability requirements for another. Core capability requirements also are adjusted when one service's maintenance capability requirements will be supported by other services. Throughout Part 1, core capability data for individual systems are incorporated into categories of equipment and technologies, also known as work breakdown structure categories, and these categories are to be broken down, at a minimum, to the third level of indenture for aircraft and components, the second level of indenture for aircraft engines, and the first level of indenture for all other categories. For example, the aircraft equipment category includes subcategories for airframes, aircraft components, and aircraft engines; the airframes category is further divided by types of airframes, and the aircraft component category is subdivided into instruments, landing gear, avionics/electronics, and other areas.

Part 2 of the biennial core process identifies the planned workloads associated with sustaining the depot maintenance core capability requirements identified in Part 1. In this part, the depot maintenance workload (in direct labor hours) needed to sustain core capabilities is subtracted from the funded public sector depot maintenance workload in each equipment/technology category; a positive difference represents workload that is not needed to sustain core capability requirements, while a negative difference represents a shortfall. This part establishes a minimum level of public sector depot maintenance workloads within each service. Applicable information on the results of each step in this process for Parts 1 and 2 is recorded on the DOD depot maintenance core capability worksheets and provided to OSD, which compiles the service data into a departmentwide assessment that is summarized in an internal report.

DOD uses the acquisition process to identify and establish core capability requirements for new and modified systems. The department's overarching acquisition guidance, DOD Directive 5000.01, states that the program manager shall be the single point of accountability for accomplishing program objectives for total life-cycle systems management, including sustainment. DOD Instruction 5000.02, which provides additional DOD guidance for managing and overseeing defense acquisition programs, requires that program managers perform a core logistics analysis, as part of the acquisition strategy, by the Milestone B acquisition decision point or by Milestone C, if there is no Milestone B. Milestone B is the second major decision point in the acquisition process and comes after the technology development phase. Milestone C, the third major decision point, comes after the system development phase and precedes production and deployment. The core logistics analysis identifies whether the capability to maintain and repair a weapon system is necessary to support core requirements and whether the capability should be established at a military depot. Furthermore, according to DOD Directive 4151.18, capabilities to support identified depot maintenance core requirements shall be established not later than 4 years after the system's initial operational capability. The program manager uses DOD's acquisition management framework, which is intended to translate mission needs and requirements into systems acquisition programs.
The program manager develops an acquisition strategy that details how the program's goals and objectives will be met. The acquisition strategy also serves as a "road map" for program execution from program initiation through post-production support. As part of the acquisition strategy, the core analysis is supposed to identify whether a weapon system will satisfy core logistics requirements.

In 2001, we reported that DOD lacked assurance that core logistics capabilities were being maintained as needed to ensure timely and effective response to national defense emergencies and contingencies, as required by Section 2464 of Title 10, noting that several factors precluded this assurance. For example, DOD's core policy, which established a process for identifying core maintenance capability, was not comprehensive in that it did not provide a forward look at new weapon systems and associated future maintenance capability requirements. We also reported that DOD had other limitations, including a lack of sufficient investment in facilities, equipment, and human capital to ensure the long-term viability of the military depots.

DOD, through its core process, has not comprehensively and accurately assessed whether it has the required core capability to support fielded systems in military depots. Although DOD generally followed its own guidance for conducting the 2007 biennial core assessment, we found that (1) DOD's method of compiling and internally reporting core requirements and associated workloads for the 2007 core process did not reveal shortfalls that the services had identified for specific equipment/technology categories, (2) the services had errors and inconsistencies in their identification of core requirements and associated workloads, and (3) there was no mechanism for ensuring that the services take corrective actions to resolve capability shortfalls. As a result of these deficiencies, DOD lacks assurance that it has the required capabilities to support its core requirements. Finally, DOD is not required to provide Congress information on its core process, and therefore the results of the core process are not readily and routinely visible for purposes of congressional oversight.

The method by which DOD compiled and internally reported its 2007 core requirements and associated workloads did not reveal core capability shortfalls, even though the services, in their core computations, had identified shortfalls in specific equipment/technology categories. For example, the Army and the Navy identified workload shortfalls to support avionics and electronics components: a shortfall of 238,090 labor hours in the Army and 78,974 hours in the Navy. However, DOD did not disclose these and other specific shortfalls because it aggregated the results of the core determination process in its internal reporting on core capability. As a result, DOD did not present a comprehensive and accurate assessment of the services' 2007 core capability. Core capability shortfalls exist when the military depots do not possess the combination of skilled personnel, facilities, equipment, processes, and technology that is needed to perform a particular category of work (e.g., composite repair) and that is necessary to maintain and repair the weapon systems and other military equipment needed to fulfill strategic and contingency plans.
When shortfalls occur, DOD may not have the necessary capability to repair weapon systems, which could affect the readiness of the troops that rely on these weapon systems to support future military operations. According to an internal memorandum summarizing the results of the core process for the Under Secretary of Defense for Acquisition, Technology, and Logistics, DOD determined that projected workload in military depots was adequate to support core capability requirements. The memorandum stated, "The Services are complying with their core capability requirements as they meet wartime needs. Their projected organic workloads in military depots are adequate to support core capability requirements." More specifically, the memorandum stated that the department's planned maintenance workload of 92.7 million hours was "well over" the minimum of 70.5 million hours needed to fulfill core requirements at military depots. The memorandum further reported that the Marine Corps, alone among the four services, had planned workload in military depots that was less than its core requirement due to funding constraints, although it added that depot capacity was available to meet the core requirement. According to the memorandum, using fiscal year 2007 funding requested for expenses associated with the Global War on Terrorism, the Marine Corps should be able to meet its core requirement. The memorandum recommended that the Under Secretary approve the DOD identification of core logistics capabilities, and the approval was subsequently given. Table 1 shows the total core capability requirements and planned workloads, by service, as summarized in the internal DOD memorandum.

To derive its assessment of core capability, DOD compiled and reported aggregated totals of the services' core requirements and associated workloads. DOD core determination guidance does not specify how to identify departmentwide core requirements and workload. However, DOD's method of computing aggregated totals had the effect of masking workload shortfalls that the services had identified in specific equipment/technology categories. The services, in their respective core computations, identified planned workload shortfalls in a total of 18 equipment/technology categories. The Army identified the greatest shortfall in core capability workload: a total of 1.4 million direct labor hours across 10 equipment/technology categories. The Marine Corps had shortfalls in 7 categories and the Navy in 6. However, the combined Marine Corps and Navy shortfall was only about 35 percent of the Army's. Table 2 shows the 18 equipment/technology categories for which the services identified shortfalls in planned workload to meet core requirements.

As shown in table 2, one area of significant shortfall in 2007 was in workload for Navy aircraft components. For example, the Navy had shortfalls of 218,728 hours in workload for dynamic components, instruments, landing gear, aviation ordnance, and avionics/electronics. A Naval Air Systems Command briefing discussing the 2007 core results identified several reasons for some shortfalls. For example, according to the Command, the Navy had designated 10 percent of the items in the component database for repair in a military depot, but these items had been repaired by contractors because a depot repair capability was not established.
Additionally, in some cases where a military depot had established repair capability, sustaining workloads were still going to the private sector, according to Naval Air Systems Command officials. When we initially met with OSD officials responsible for developing DOD's 2007 core assessment, the officials told us they were aware only of the Marine Corps' shortfalls, not the Army's and Navy's. However, when we brought the results of our analysis to their attention, they acknowledged that these two services had also identified shortfalls. The officials noted, however, that under OSD's methodology for aggregating the services' core data, shortfalls in specific equipment/technology categories could be offset if one service had sufficient depot repair capability to support the core requirements of another service. For example, the Marine Corps had a shortfall in capability to repair tactical wheeled vehicles, while the Army had more workload in this equipment/technology category than it needed to support its core requirements. However, since OSD officials, at the time they made this assessment, were unaware that some services had shortfalls, it is unknown to what extent OSD could have made this offset determination.

We applied OSD's methodology for offsetting shortfalls in the same equipment/technology categories across services to identify the potential workload shortages that could be offset under these circumstances. The result of applying this methodology was that DOD's 2007 core shortfall could be reduced by approximately 600,000 hours. However, this still leaves a net shortfall of more than 1.3 million hours. Furthermore, it is unclear whether workload could be transferred across services as this analysis might indicate. On the basis of our discussions with DOD officials, we found that while the skill sets for repairing equipment may be the same or similar, particularly for less complex equipment such as tactical wheeled vehicles, the ability to offset shortages in one service with excess capacity in another would depend on the two services having the same systems, or systems so similar that repair capability in one service could support the other service's equipment. Skilled labor capable of working on equipment from a given equipment/technology category may be able to repair similar equipment from another service if the workers have the required technical data, depot plant equipment is available, and the workers have received the necessary training. However, technical data and, to some extent, depot plant equipment are generally specific to a weapon system. Thus, without knowing the extent to which the excess workload from one service would represent maintenance that could be performed by another service with a shortfall of work, cross-service analyses of workload within the same equipment/technology category would not be meaningful.

During our review, we found some errors and inconsistencies in the services' implementation of the biennial core determination process. Moreover, DOD did not have effective internal controls to prevent these errors and deficiencies in the core process. First, most of the services did not accurately identify the weapon systems required to support the 2007 core requirements. According to DOD's core guidance, the starting point for calculating core requirements is to identify weapon systems and equipment that are included in the JCS contingency scenarios.
The guidance states that when beginning to compute core requirements, the services should consider all scenario-tasked weapon systems that require depot maintenance, regardless of whether maintenance for particular systems is currently being accomplished in the public or private sector. The Marine Corps excluded some JCS-tasked systems, such as the Medium Tactical Vehicle Replacement system, from its core computation. Although systems repaired both in the public and private sectors should have been included in its core computation, Marine Corps officials stated that they did not include systems unless they had previously been repaired in military depots. Thus, they erroneously excluded some JCS systems from the starting point for calculating core requirements. The Marine Corps official who performed the analysis said he asked for guidance from Marine Corps Headquarters on what systems should be included, but did not get a list of systems. In addition, according to a May 2007 Army Audit Agency report, Army officials were unable to verify that all JCS-tasked systems were included in the service's core reviews. Army officials said that they relied heavily on the program executive office to conduct accurate and thorough reviews, but could not prove that all weapon systems were assessed during the review. Because the Army could not show whether all systems were included in the biennial core process, the Army lacks assurance that core capabilities were identified for all required weapon systems.

Second, we found that the Navy and the Marine Corps omitted software maintenance workloads from their 2007 biennial core requirements computations, while the Army and the Air Force included software maintenance in their core computations. The Naval Air Systems Command's rationale for not including software maintenance in its calculation was that the Navy does not consider software maintenance to be maintenance in the usual sense of returning an item to its original condition. Also, the Navy does not perform software maintenance in facilities that are traditionally considered depots. Nonetheless, cognizant OSD officials told us that because DOD's biennial core guidance defines depot maintenance to include all aspects of software maintenance, the Navy and Marine Corps should be including software maintenance in their core analysis inputs. Given the services' differing methodologies in computing their respective core requirements, DOD cannot logically compute the composite core capability requirements for the department as a whole, as required by its guidance. Most importantly, DOD increasingly relies on software to introduce or enhance the performance of weapon systems, and making software adjustments is a key component of maintaining systems to prepare for emergency conditions. Thus, it is important to comprehensively identify the software core maintenance requirements.

Third, the Air Force, as part of its adjustment for redundant or duplicate capability, reduced its requirements based on private sector maintenance workload. Duplicate or redundant capabilities occur when multiple systems are similar and share a common or complementary base of repair processes, technologies, and capabilities, or when a large quantity of single-platform requirements necessitates duplicate capabilities. DOD core guidance requires that, as part of the core assessment, the services adjust for duplicate maintenance work.
According to DOD officials, the intent of this provision in the guidance was that the redundancy adjustment should consider only DOD depot workload—not private sector workload. However, the Air Force considered private sector workload in making these adjustments. For example, for most airframes, engines, and other major end items, the Air Force reduced its workload based on private sector workload. According to Air Force officials, they included private sector workload in the redundancy adjustment because they believed they had the flexibility to include both public and private sector workloads. Cognizant OSD officials said they were unaware of the Air Force's approach of including private sector workload. When we informed them of the Air Force's practice, they agreed that this adjustment was not appropriate. They noted that since the purpose of the core capability requirements determination process is to identify public sector depot maintenance capability, reducing the depot maintenance direct labor hours because of workload that exists in the private sector is not what was intended by DOD core guidance. By including private sector capability in its redundancy adjustment, the Air Force is misstating its workload and limiting its flexibility to support critical systems using military depot capability. When the methodology used by the other services was applied, the Air Force was the only service that did not show shortfalls. However, if it had not used private sector capability to adjust for redundancy, it is likely that some categories would have had shortfalls. For example, at least 10 equipment categories had core requirements equal to the planned core workloads assigned. Thus, adding back the private sector adjustment that the Air Force made would mathematically result in a shortfall.

Fourth, as noted above, the Air Force used a methodology for calculating core capability shortfalls that differed from the methodology used by the other services. Although DOD core guidance provides instructions for determining core requirements and associated workload, it does not specify how to calculate shortfalls based on the worksheets developed by each service. In our discussions with service and OSD officials, they agreed that the correct method is to subtract core requirements from planned workload; if the difference is a negative number, that indicates a core shortfall. The Army, Navy, and Marine Corps all used this method in calculating the results of the core determination process, and we also applied this method in calculating shortfalls in specific equipment/technology categories. However, the Air Force used a different method of calculating shortfalls, which showed shortfalls in specific equipment/technology categories that did not materialize using the other services' methodology. For example, unlike the other services, the Air Force adjusted its core requirements before computing shortfalls. According to Air Force officials, this method has been a long-standing practice within the Air Force. Compounding this inconsistency between the Air Force and the other services, OSD's internal memorandum, which summarized core results across the department, reported adjusted core requirements for the Air Force of 19.9 million direct labor hours, based on the Air Force's methodology. If OSD had reported the Air Force's total core requirements using the same methodology as the other services, the Air Force's total core requirement would have been 18.7 million direct labor hours.
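The following Python sketch, using invented numbers, illustrates the shortfall arithmetic that service and OSD officials agreed is correct (planned workload minus core requirement, with a negative difference indicating a shortfall) and shows how aggregating across categories and services can mask category-level shortfalls of the kind discussed above.

    # Invented category-level data, in direct labor hours, for two notional
    # services; each entry is (core requirement, planned workload).
    data = {
        "Service A": {"avionics/electronics": (240_000, 180_000),
                      "tactical wheeled vehicles": (100_000, 200_000)},
        "Service B": {"avionics/electronics": (80_000, 70_000),
                      "tactical wheeled vehicles": (50_000, 30_000)},
    }

    total_requirement = total_workload = 0
    for service, categories in data.items():
        for category, (requirement, workload) in categories.items():
            total_requirement += requirement
            total_workload += workload
            difference = workload - requirement  # negative indicates a shortfall
            if difference < 0:
                print(f"{service} shortfall in {category}: {-difference:,} hours")

    # Aggregation masks the category-level shortfalls whenever total planned
    # workload exceeds total requirements, as in DOD's 2007 internal reporting.
    print(f"Aggregate: requirement {total_requirement:,} hours vs. "
          f"workload {total_workload:,} hours")

In this invented example, Service A's excess tactical wheeled vehicle workload would appear to offset Service B's shortfall in the same category, but, as discussed above, whether workload could actually transfer between services depends on the services having the same or very similar systems.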
Table 3 shows the shortfalls the Air Force identified for 2007 using its methodology. As shown in table 3, the analysis identified core capability shortfalls for airframes, software, and components, with the component shortfall representing 82 percent of the total. The Air Force, in presenting these shortfalls, also considered depot maintenance workload for new and emerging systems that could mitigate the shortfalls. For instance, the Air Force cited requirements for the F-22A, Joint Strike Fighter, and CV-22 aircraft and the Predator and Global Hawk unmanned systems.

DOD's core process lacks a mechanism for ensuring that corrective actions are taken to resolve core capability shortfalls for fielded systems. At the time the services prepared their 2007 biennial core calculations, they were not required to, and therefore did not, develop plans to specifically address capability shortfalls at the equipment/technology category level for fielded systems. Further, some Army officials told us that the core process is an exercise in futility in that the services are required to conduct the core analysis, but nothing comes out of it to address shortfalls. Thus, the services compute their biennial requirements, workloads, and shortfalls, as required by DOD's core guidance, but the results are put on the shelf and little is done until the next biennial process. As shown in table 4, our analysis of the Army's biennial core data found that shortfalls for some equipment/technology categories substantially increased from 2005 to 2007, while shortfalls in other categories were eliminated.

Unlike the core guidance that was in effect for the 2007 core process, the guidance for the ongoing 2009 core process requires the services to include in their biennial reports plans to rectify capability shortfalls (if any), including a description of planned capital investment, timing, and planned workarounds until new capability is available. Although the new guidance is a step in the right direction, it falls short of establishing an effective mechanism to ensure that shortfalls are corrected. Further, the new guidance does not require that mitigation plans address shortfalls at the equipment/technology category level. Finally, since the core computations occur every 2 years, DOD would not know whether progress was being made in the interim period.

Because there is no requirement to do so, DOD does not provide Congress information on the results of the biennial core determination process for fielded systems. Thus, Congress does not have readily available and routine visibility of core capability requirements, associated workloads, and shortfalls, if any exist. As a result, Congress is not in the best position to make oversight decisions, and DOD is not held accountable regarding the extent to which the military possesses the core logistics capabilities specified in Section 2464 of Title 10, U.S. Code. Conversely, DOD is required to report annually to Congress the percentage of depot-level maintenance and repair dollars spent in the public and private sectors—also known as the 50/50 reporting requirement. Under Section 2466(a) of Title 10, not more than 50 percent of the funds made available in a fiscal year to a military department or defense agency for depot-level maintenance and repair may be used to contract for the performance by nonfederal government personnel of such workload for the military departments and agencies.
Further, the Secretary of Defense must submit an annual report to Congress showing the percentages of funds expended for public and private sector depot maintenance. Although we and some service audit agencies have cited shortcomings in this reporting process, service officials told us that the 50/50 reporting process has influenced the services to consider shifting maintenance work to military depots to improve their 50/50 posture. For example, because of the visibility associated with a breach of the 50/50 reporting requirement, DOD has required the services to prepare a get-well plan when they come within 2 percent of the 50 percent private sector depot maintenance funding ceiling. As a result, we previously reported that the visibility of the 50/50 reporting requirement helps to ensure that the percentage maximum is not exceeded. In contrast, there are no external reporting requirements associated with Section 2464 of Title 10.

DOD has neither identified nor established core capabilities in a timely manner to prepare military depots to support future core requirements for some new and modified systems included in our review. As older systems phase out of the inventory and new or modified systems phase in, it is essential that the acquisition process ensure that program offices take the actions necessary to establish core depot maintenance capability in military depots. Two key actions must occur: first, the identification of any core depot maintenance capability requirements associated with the new system and, second, if there are no existing organic capabilities, the establishment of depot maintenance capabilities through the acquisition of all resources necessary to achieve those capabilities. Our review of the acquisition process demonstrated that program offices are not taking these actions in a timely manner. We identified shortcomings in the acquisition process that contributed to the lack of timely identification and establishment of core capabilities.

DOD did not identify core capabilities for some new and modified systems in the acquisition process in a timely manner. Although DOD acquisition guidance requires that core logistics capabilities be identified no later than Milestone B, or by Milestone C if there is no Milestone B, the identification of core requirements did not occur until later for most of the systems we reviewed. Specifically, for 20 of the 52 systems we reviewed, core capability was not identified until either the production and deployment or the operations and support phase of the acquisition process (after Milestone C), which could be years after the identification was supposed to occur. For example, core analyses for the Army's Stryker Family of Vehicles and the Air Force's Mobile Approach Control System were not completed until after Milestone C, by which time the systems were already in the production and deployment or sustainment phases. Our analysis also identified additional systems that should have had a core analysis completed by Milestone B, but for which analyses were not completed until Milestone C. Figure 1 shows the number of systems from our non-probability sample of 52 systems for which a core logistics analysis was prepared in each phase of the acquisition process. The U.S. Army Audit Agency's 2007 report identified similar delays in the identification of core capability for some Army weapon systems that had achieved initial operational capability but had not been subjected to the core capabilities analyses required by Army and DOD guidance.
These systems included the Secure Mobile Anti-Jam Reliable Tactical Terminal (December 1999 initial operational capability), the Advanced Field Artillery Tactical Data System (fiscal year 1996 initial operational capability), and the Bradley Fighting Vehicle System A3 Upgrades (fiscal year 2001 initial operational capability). The Army Audit Agency report also stated that Army officials who did not perform the required analyses may not be assured that they made the best decisions for the Army regarding the use of organic or contractor support. According to DOD officials and based on the results of our analyses, if core capability requirements for new and modified systems have not been identified early in the acquisition process, and if there is no existing DOD capability for a particular system, it is unlikely that core capability can be established at military depots within 4 years of initial operational capability, as required by DOD guidance. Also, delays in making maintenance decisions can significantly limit DOD's sustainment concept options because, as programs progress in the acquisition timeline, program decisions already made—such as not making provisions for the acquisition of technical data (or access to it), depot plant equipment, and other resources required to establish military depot maintenance capability—may limit the practicability of establishing core capability in a military depot.

In addition to not identifying core capability requirements in a timely manner for new and modified systems in the acquisition process, program offices are also not taking the actions needed to establish required core capabilities in a timely manner. Although DOD Directive 4151.18 states that the capabilities to support identified depot maintenance core requirements shall be established not later than 4 years after initial operational capability for DOD materiel directly supporting the department's strategic and contingency plans, this is not always occurring. Specifically, 24 of the 30 programs we reviewed with identified core requirements either had not established any core capability or had achieved only a partial core capability within 4 years of their initial operational capability. Table 5 summarizes our analysis of the 30 systems in our review. According to program officials, 6 of the 30 systems we reviewed were fielded and had the required core capability established in military depots. For 11 of the 30 programs we reviewed, however, DOD had not established any of the required core capability to maintain and repair the systems. Another 13 of the programs had established some but not all core capability through either performance-based logistics arrangements (11 programs) or public-private partnerships (2 programs) with contractors. According to DOD officials, these arrangements and partnerships with contractors were intended to support the core workload.

The following discussion illustrates cases among the 11 programs in our review where the required core capabilities were identified but had not been established in the military depot system within 4 years of initial operational capability, as required by DOD guidance. The Navy's Air Launched Expendable-50 (ALE-50) System, which provides an electronic countermeasure against anti-aircraft missile threats, reached initial operational capability in 2002. However, no core capability to maintain and repair this system exists in a military depot, even though it should have been established by 2006.
The Navy determined that the ALE-50 had core requirements, and the Navy and the Joint Depot Maintenance Activities Group agreed that organic capability should be established at Naval Aviation Depot, Jacksonville, Florida—now known as Fleet Readiness Center Southeast, Jacksonville. Interim commercial support was established until the organic capability could be put in place. Currently, the Jacksonville depot has only the capability to troubleshoot and to make limited repairs to certain components; the contractor performs most depot maintenance on this system. According to Navy and program officials, while the Naval Air Systems Command has requested funds to establish core capability, funds from the Office of the Chief of Naval Operations have not been made available to stand up the required depot capability. Thus, 7 years after the system reached initial operational capability, the depot still does not have full capability to repair the ALE-50.

The Navy's Mission Computer Upgrade, which reached initial operational capability in 2002, is the primary computing device on the E-2C aircraft. The electrical cabinet component of this unit, which was designated as core, is used to house the computer's circuit cards. This unit provides digital data signal interface and power to the circuit card assemblies and routes all external sensor and operator control inputs to the applicable circuit card assemblies. Eleven depot-level repairable items on the computer have core requirements and, therefore, a capability to maintain and repair these items should exist in military depots. Currently, no DOD depot has the capability to maintain and repair the mission computer cabinet. Although the program office has identified and requested the funding that would be required to purchase the technical data and depot plant equipment needed to establish this capability, the funding from the Office of the Chief of Naval Operations has never been made available. Thus, the candidate repair depot—Fleet Readiness Center Southwest, North Island—did not have repair capability 7 years after initial operational capability, and the original equipment manufacturer is still repairing the equipment.

The Navy's Advanced Tactical Air Reconnaissance System (ATARS), which reached initial operational capability in 2000, is a reconnaissance avionics subsystem consisting of a sensor suite providing image acquisition, data storage, image manipulation, and reconnaissance system control functions. Reconnaissance system control functions include the capability to record radar sensor data and control a data-link subsystem for real-time and near-real-time transmission. Although ATARS was determined to have core requirements, program officials indicated that the manufacturer was determined to be the only cost-effective source of repair due to the limited number of systems and the unique tools needed for the complex repairs. Program officials have requested funding to establish core capabilities for ATARS; however, funds from the Office of the Chief of Naval Operations have not been made available 9 years after initial operational capability. While ATARS was fielded before DOD issued its guidance requiring core capability to be established within 4 years of initial operational capability, the guidance does not exempt systems that were already fielded or those for which establishing capability is costly.
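A simple way to see the 4-year standard at work is shown in the Python sketch below, which flags programs whose core capability was not in place within 4 years of initial operational capability (IOC). The entries are illustrative stand-ins patterned loosely on the Navy examples above; they are not data from our review.

    # Illustrative compliance check against the 4-year standard in DOD
    # Directive 4151.18; the entries are invented stand-ins, not review data.
    programs = [
        {"name": "ALE-50", "ioc_year": 2002, "established_year": None},
        {"name": "Mission Computer Upgrade", "ioc_year": 2002, "established_year": None},
        {"name": "Example System X", "ioc_year": 2001, "established_year": 2004},
    ]

    REVIEW_YEAR = 2009

    for p in programs:
        deadline = p["ioc_year"] + 4  # capability due within 4 years of IOC
        established = p["established_year"]
        if established is None:
            print(f"{p['name']}: no core capability established; due by {deadline}, "
                  f"{REVIEW_YEAR - deadline} year(s) overdue as of {REVIEW_YEAR}")
        elif established > deadline:
            print(f"{p['name']}: established late ({established} vs. due {deadline})")
        else:
            print(f"{p['name']}: established within the 4-year standard")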
The following discussion illustrates cases among the 13 programs we reviewed where a partial, but not full, core capability had been established in the military depot system through the implementation of performance-based logistics arrangements or public-private partnerships. Performance-based logistics involves the purchase of performance outcomes (such as the availability of functioning weapon systems) through long-term contractor support arrangements rather than the purchase of individual elements of support, such as parts, repairs, and engineering support. Public-private partnerships for depot-level maintenance are cooperative arrangements between a depot-level maintenance activity and one or more private sector entities to perform DOD or defense-related work, to utilize DOD depot facilities and equipment, or both.

The Air Force's Large Aircraft Infrared Countermeasures (LAIRCM) System, which reached initial operational capability in 2004, provides fast and accurate threat detection, processing, tracking, and countermeasures to defeat current and future generation infrared missile threats. The LAIRCM has been maintained by the manufacturer because the Air Force did not acquire the technical data or depot plant equipment needed to establish a core capability at the depot. According to program officials, these resources were not a high enough program priority to be funded. The Air Force and the Joint Depot Maintenance Activities Group agreed that organic capability should be established at the Warner Robins Air Logistics Center through a public-private partnership, but currently no work is being performed at the depot. According to program officials, the depot will receive some workload in 2009 and is expected to be fully capable of maintaining the system in 2010, but it is unclear whether this milestone will be achieved.

The Army's AN/MPQ-64 Sentinel Radar System achieved initial operational capability in 1997. While the Sentinel's core depot assessment, completed in May 2004, determined the system to have core requirements, core capability has currently been established for only 11 of the 29 depot-level reparables, and these are the components that are common to the Firefinder radar system, the precursor to the Sentinel. By the third quarter of fiscal year 2009, core capability will be established to repair two additional depot-level reparable items, increasing the reparable capability to 13 depot-level items. The Sentinel is supported through a performance-based logistics arrangement with Thales Raytheon Corporation, which was supposed to partner with the Tobyhanna Army Depot to establish core capability. Additionally, the Tobyhanna Army Depot does not have full capability to test either the original Sentinel system or the improved version, to which 62 of the Army's 143 systems have been upgraded. According to Army officials, funding is not available to establish full core capability. Thus, 12 years after initial operational capability was achieved, the Army still had not established the required capability in the military depot system.

We identified several shortcomings in the acquisition process that contributed to the lack of timely identification and establishment of core capability for new and modified systems.
More specifically, (1) acquisition guidance provides little to no information on how to identify and plan for the establishment of core capability, (2) acquisition strategies do not fully address core requirements, and (3) some program offices are not procuring the technical data required to establish core capabilities.

While DOD requires the identification and establishment of core capability for new and modified systems, we found that DOD acquisition guidance does not explain how the required core analysis should be performed or provide specific information on the actions needed to establish core capability. As discussed earlier, DOD Instruction 5000.02 requires that a core logistics analysis be included as part of the acquisition strategy by Milestone B or by Milestone C, if there is no Milestone B. However, this guidance resides in a table of statutory and regulatory information requirements, which deemphasizes this requirement compared to guidance provided in the main text of the instruction. Further, the instruction provides no specifics about the elements—such as resource requirements and time frames—needed to effectively plan for the establishment of a core capability if a core requirement is identified through the core logistics analysis. In December 2008, DOD updated Instruction 5000.02 with a change that requires the core logistics analysis and source of repair analysis to be addressed in the life-cycle sustainment plan for Milestone B and the life-cycle sustainment plan to be included in the acquisition strategy document. However, while this guidance places more emphasis on the sustainment phase, it still does not require specific plans, including resource requirements and time frames, for establishing core capability.

Other DOD acquisition guidance also lacks specific information on the elements necessary to effectively identify and establish core capabilities within the required time frames. The Defense Acquisition Guidebook, which was last updated in December 2004, is a resource for program managers to use as a reference guide supporting their management responsibilities. The guidebook does not establish mandatory requirements, but provides program managers with discretionary best practices. Regarding core logistics analysis, the Defense Acquisition Guidebook states only that program managers shall ensure that maintenance source of support selection complies with requirements identified in DOD Instruction 5000.2. It provides no further specific direction for identifying and establishing core capability.

Moreover, DOD guidance overall places less emphasis on core capability than on sourcing sustainment activities, including maintenance, through performance-based logistics arrangements. DOD has identified performance-based logistics as the "preferred" support approach for DOD systems. This emphasis on performance-based logistics contributes to the lack of emphasis by program offices on integrating core capabilities into the acquisition process. Some program officials cited what they perceive as a conflict between the department's emphasis on outsourcing logistics activities through private contractors and the guidance to establish core logistics capability in military depots. One official provided us with a copy of a 1997 training guide for acquisition officials that emphasized an outsourcing strategy for supporting weapon systems.
The guide stated that under DOD's outsourcing strategy, support concepts for new and modified systems maximize the use of contractor-provided, long-term, total life-cycle logistics support that combines depot-level maintenance with wholesale and selected retail materiel management functions. While this training guide is no longer in use, it illustrates the emphasis that has been placed on using sustainment approaches other than military depots.

Recognizing that more guidance was needed on how to perform a core logistics analysis, the Army's Communications-Electronics Life Cycle Management Command joined with the Program Executive Offices for Command, Control, Communications-Tactical and for Intelligence, Electronic Warfare and Sensors in 2002 to develop standard operating procedures that document the steps needed to successfully complete this analysis, along with a corresponding source of repair analysis. The standard operating procedures address elements that should be part of the core logistics analysis at both the system and component levels. For example, the procedures address the need for a program manager to ensure that the component-level core logistics analysis incorporates a plan for obtaining rights to, or access to, technical data. Because of a backlog of legacy systems for which a core analysis had not been performed, these Army offices also created a streamlined standard operating procedure applicable to legacy systems.

Another shortcoming in DOD's acquisition guidance is that it does not make specific reference to the 4-year time frame for establishing core capability after a system reaches its initial operational capability. While this 4-year time frame is established within a DOD directive (4151.18) that is applicable across the department, this directive is generally used more by DOD's logistics support community than by the acquisition community. DOD Instruction 5000.02 is silent on the required time frame for establishing core capability.

Recognizing the importance of early identification of core requirements to the establishment of core capability, the Air Force since 2006 has required program offices to conduct an initial core assessment when the core analysis required by DOD guidance cannot be accomplished. The intent of this requirement is to allow for an earlier evaluation of the system's sustainment concept. According to the Air Force, source of repair and core decisions traditionally have not been accomplished until later in the acquisition process—at least partially because sufficient data about the system were not available to accomplish a source of repair and core analysis. These delays led to decisions that ultimately limited the government's sustainment concept options. To address this concern, the Air Force in December 2006 added a requirement to its acquisition guidance that a strategic source of repair determination be conducted for systems when a depot source of repair determination cannot be accomplished for program initiation approval (for example, by Milestone B), in order to allow for an earlier assessment of the sustainment concept. Air Force officials told us that, under this new policy, the strategic source of repair determination should be conducted for new systems prior to Milestone B. Air Force officials pointed out that programs must still conduct a core logistics analysis as required under DOD Instruction 5000.02.
The officials noted that from October 2007 to July 2008, the strategic source of repair determination was used on three weapon system acquisition programs: the F-15E Active Electronically Scanned Array Radar System, the KC-135 Replacement Tanker Aircraft, and the C-27 Joint Cargo Aircraft. The strategic source of repair determination for these three systems resulted in their being identified as having core capability requirements. According to the Air Force, the strategic source of repair determination will allow acquisition programs to identify anticipated sources of repair early enough in the acquisition process so that defense acquisition planning and programming documents, as well as resulting contracts, contain the appropriate sustainment elements needed to support the acquisition strategy. This initial core assessment appears to be a promising practice to support the timely establishment of core capability. Our review of acquisition strategies for 11 major acquisition programs prepared from April 2001 to February 2008 determined that this key acquisition documentation did not fully address core capability requirements for the systems or how the programs would establish required core capabilities. The acquisition strategy is a business and technical management approach designed to achieve program objectives within the resource constraints imposed. It is the framework for planning, directing, contracting for, and managing a program. However, as noted earlier, DOD acquisition guidance does not explain how to incorporate specific plans for establishing core capability in the acquisition strategy. We looked at 11 ACAT I system acquisition strategies to determine the extent to which these strategies addressed core capability requirements in the absence of specific DOD guidance. For example, we determined whether the strategies (1) stated that the systems were designated to support core capability requirements, (2) identified a possible depot source of repair, and (3) provided a plan for funding and establishing core capability. The acquisition strategies we reviewed did not include these types of information, even though the systems had been determined to have core requirements. Specifically, we found that the information most frequently provided in the acquisition strategies was simply a statement of the need to address core requirements under Section 2464 of Title 10. Further, none of the acquisition strategies we reviewed laid out a plan to establish core capabilities at military depots within 4 years of initial operational capability. Without adequate consideration of core capability requirements in pertinent decision documents, such as the acquisition strategies, program offices are unlikely to plan adequately for acquiring the resources needed to establish core capability at military depots. A key impediment to the establishment of core capability is that some program offices have not been procuring necessary technical data during the system acquisition process. For some of the new and modified systems in our review for which a core capability was not established within required time frames, program officials cited the unavailability of technical data as a key contributing factor.
Also, as discussed earlier, acquisition approaches that planned on long-term use of contractor support resulted in not acquiring technical data or access to technical data rights, which are essential for establishing depot maintenance capability in military depots. In 2006, we reported that the lack of technical data rights limited the services' flexibility to make changes to sustainment plans that are aimed at achieving cost savings and meeting legislative requirements for depot maintenance capabilities. Specifically, we reported on seven Army and Air Force weapon system programs where the services encountered limitations in implementing revisions to sustainment plans. The programs were the C-17 aircraft, F-22 aircraft, C-130J aircraft, Up-armored High-Mobility Multipurpose Wheeled Vehicle, Stryker family of vehicles, Airborne Warning and Control System aircraft, and M4 carbine rifle. Although circumstances surrounding each case were unique, earlier decisions made on technical data rights during system acquisition were cited as a primary reason for the limitations subsequently encountered. We noted that delaying action in acquiring technical data rights can make these data cost prohibitive or difficult to obtain later in the weapon system life cycle. For example, the Air Force did not acquire technical data during the acquisition process for the C-17, F-22, and C-130J aircraft. In these cases, the Air Force attempted to obtain the needed technical data but found that the equipment manufacturer, among others, either declined to provide the data or offered them at prohibitive cost. Without arrangements to provide the depots the technical data they need, the Air Force cannot develop comprehensive core maintenance capability for these aircraft. Officials at the Warner Robins Air Logistics Center told us that while establishing partnerships is sometimes seen as a way to work around technical data issues, the depot has been challenged to establish viable agreements with the subcontractors for various C-17 systems and components that were identified as core requirements. Similarly, the Army Materiel Command designated Anniston Army Depot as the Army's depot maintenance facility for the Stryker family of vehicles under a performance-based logistics arrangement with General Dynamics Land Systems. According to Army officials, the contract with General Dynamics Land Systems will provide Anniston with instructions for the repair of the Stryker. However, Anniston has been unable to obtain sufficient technical data rights, which limits its ability to perform maintenance, even though the depot participated in the assembly of Stryker vehicles under a partnership arrangement. Recent changes in law and DOD guidance have addressed the acquisition of technical data. Section 802(a) of the National Defense Authorization Act for Fiscal Year 2007 directed the Secretary of Defense to require program managers for major weapon systems and subsystems of major weapon systems to assess the long-term technical data needs of their systems and to establish corresponding acquisition strategies that provide for technical data rights needed to sustain the systems over their life cycle. In July 2007, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed that program managers for ACAT I and II programs assess their long-term technical data needs.
Further, the December 2008 update of DOD Instruction 5000.02 requires that a program's long-term technical data needs be reflected in a data management strategy. The data management strategy is to be approved in the context of the acquisition strategy prior to issuing a contract solicitation. It is too soon to determine the impact that this data management initiative may have on the availability of technical data and the establishment of core capabilities. Under the biennial core determination process, DOD lacks assurance that it possesses the core capabilities needed to maintain and repair the weapon systems and other military equipment identified as necessary to enable the armed forces to fulfill the strategic and contingency plans prepared by the Chairman, JCS. DOD's core determination process for 2007 did not provide a complete and accurate assessment of core capabilities at military depots. Although DOD reported that more than enough capability existed DOD-wide to support core requirements for fielded systems, the services' data showed that capability shortfalls existed for several equipment/technology categories. An accurate and comprehensive identification of shortfalls is a necessary first step to managing them and taking corrective actions. Further, DOD lacked internal controls to prevent errors and inconsistencies in the military services' implementation of the core determination process, with the result that shortfalls were probably greater than the numbers computed by the military services (the sketch below illustrates the category-level arithmetic at issue). In addition, because DOD lacks an effective mechanism for ensuring that corrective actions are taken to manage and reduce core shortfalls for fielded systems, shortfalls in capability can remain unresolved and grow over time. DOD could address most of the shortcomings in the biennial core process by improving its core determination guidance, ensuring service compliance with the guidance, expanding on the internal reporting of core results, and instituting a mechanism to ensure corrective actions are taken when shortfalls in core capability are identified. In addition, visibility and oversight of the core determination process could be enhanced by submitting to Congress the results of the core process, as well as planned corrective actions to address shortfalls. Shortcomings in DOD's acquisition guidance and its implementation have resulted in DOD program managers not identifying and establishing required core capability at military depots in a timely manner—capability that will be needed to support future maintenance requirements for new and modified systems. As older fielded systems phase out of the inventory and newer ones are phased in, shortfalls in core capability to support these systems could grow unless DOD acquisition programs change their practices of delaying the identification and establishment of core capability. Since acquisition guidance provides little or no information on how to identify and plan for the establishment of core capability and relatively greater emphasis is placed on using contractor support arrangements, such as performance-based logistics, program managers may continue to focus their sustainment strategies on the use of contractors. For example, the practice of not acquiring or obtaining access to technical data during the weapon system acquisition process has impeded DOD's ability to establish core capabilities at military depots.
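To make the worksheet arithmetic at issue concrete, the sketch below illustrates the kind of category-level comparison the core determination process entails. It is a simplified, hypothetical illustration only: the equipment categories and direct labor hours are invented, and it does not reproduce the services' actual worksheets or DOD's adjustment rules (such as the adjustment for redundancy noted in the recommendations that follow).

```python
# Hypothetical illustration of the category-level core shortfall arithmetic
# described above. The equipment categories and direct labor hours are
# invented; this does not reproduce DOD's actual worksheet methodology.

core_requirements = {                # required direct labor hours, by category
    "Rotary wing aircraft": 1_200_000,
    "Ground combat vehicles": 950_000,
    "Communications/electronics": 700_000,
}

planned_organic_workload = {         # hours planned at military (public) depots
    "Rotary wing aircraft": 1_500_000,
    "Ground combat vehicles": 800_000,
    "Communications/electronics": 700_000,
}

# A shortfall exists in any category where planned organic workload falls
# below the core requirement.
shortfalls = {
    category: required - planned_organic_workload.get(category, 0)
    for category, required in core_requirements.items()
    if planned_organic_workload.get(category, 0) < required
}

total_required = sum(core_requirements.values())
total_planned = sum(planned_organic_workload.values())

# Comparing only the totals masks the category-level gap: here the aggregate
# workload exceeds the aggregate requirement even though one category is
# 150,000 hours short.
print(f"Total planned: {total_planned:,} hours; total required: {total_required:,} hours")
for category, gap in shortfalls.items():
    print(f"  Shortfall in {category}: {gap:,} hours")
```

Run as written, the aggregate comparison shows more than enough planned workload overall, yet a 150,000-hour shortfall remains in one category. This mirrors the pattern described above, in which a DOD-wide total can appear sufficient while the services' category-level data show shortfalls.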
DOD could improve its acquisition process to provide better assurance that program offices identify and establish core depot maintenance capabilities for new and modified systems in a timely manner. If DOD's acquisition process is not improved and current practices continue, as fielded systems are phased out of the inventory, DOD depots may not be able to provide the ready and controlled source of technical competence needed to ensure an effective, timely response to future national defense emergencies. To improve DOD's ability to assess core logistics capabilities with respect to fielded systems and correct any identified shortfalls in core capability, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics take the following four actions to revise DOD's biennial core instruction:
Require DOD to compile and report the services' core capability requirements, planned organic workloads, and any shortfalls by equipment/technology category (work breakdown structure).
Require DOD to implement internal controls to prevent errors and inconsistencies in the services' core calculations. At a minimum, internal controls should address errors and inconsistencies identified in our review on the need to include (1) all JCS-scenario-tasked systems, (2) software maintenance requirements, and (3) only public depot maintenance workload when adjusting for redundancy.
Explicitly state the mathematical calculations, based on the services' core determination worksheets, that the services should use to determine core capability requirements, associated workloads, and shortfalls, if any.
Require DOD to establish a mechanism to ensure that corrective actions are taken to resolve identified core shortfalls. For example, DOD should institute, in the alternative years of the biennial core process, a status report on the actions taken to resolve shortfalls identified in the previous year.
To provide better assurance that program offices identify and establish core depot maintenance capabilities for new and modified systems in a timely manner, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics take the following four actions:
Provide program managers with standard operating procedures for performing a core logistics analysis as required in DOD guidance. These standard operating procedures should also ensure that core requirements are considered in conjunction with other sustainment approaches.
Modify DOD Instruction 5000.02 to incorporate the 4-year time frame for establishing core capability from initial operational capability, as currently required in DOD Directive 4151.18.
Require that the acquisition strategy for each new and modified system include either a statement that core capability requirements were not identified for the system or, if core requirements were identified, a plan for establishing core capability within 4 years of initial operational capability, including obtaining the required resources.
Require an initial core assessment early in the acquisition process (preferably prior to Milestone B).
Because DOD has recently updated its guidance to require that a program's long-term technical data needs be reflected in a data management strategy, we are not making a recommendation on this matter. Congress should consider requiring DOD to report on the status of its effort to maintain a core logistics capability consistent with Section 2464 of Title 10, U.S. Code.
In doing so, Congress may wish to require that DOD report biennially on the results of its core determination process, actions taken to correct any identified shortfalls in core capability, and efforts to identify and establish core capability for new and modified systems in a timely manner, consistent with DOD guidance. In written comments on a draft of this report, DOD concurred with eight of our recommendations. DOD partially concurred with one recommendation in the draft report, and we have replaced this recommendation with a matter for congressional consideration. DOD’s comments are reprinted in appendix II. The department stated that DOD Instruction 4151.20, published subsequent to the 2007 core determination process, satisfies many of the recommendations contained in the draft report. We obtained and analyzed the instruction as part of our review, compared it with prior guidance that existed for the 2007 process, and considered it when formulating our findings, conclusions, and recommendations. As noted in the report, the instruction did not depart substantially from the earlier guidance. Therefore, we disagree with DOD that it satisfies many of the recommendations in our report and continue to believe that DOD should take additional actions to implement these recommendations, as discussed further below. DOD concurred with our recommendations to improve DOD’s ability to assess core logistics capabilities with respect to fielded systems and correct any identified shortfalls. Regarding our recommendation that DOD improve its approach to compiling and reporting on core capability requirements, workloads, and shortfalls by equipment/technology category (work breakdown structure), DOD stated that it already conducts an analysis of core requirements and sustaining workloads at the work breakdown structure level. DOD also stated that it tasked the services to provide plans for eliminating shortfalls identified during the 2009 core determination process. We believe that DOD is misconstruing the intent of our recommendation, which was to improve DOD’s approach to compiling the service-specific results into a departmentwide assessment. As stated in our report, DOD’s internal report on the results of the 2007 process aggregated the services’ analyses and did not provide a complete and accurate assessment of core capabilities at military depots, including shortfalls that had been identified in specific equipment/technology categories. Therefore, we continue to believe that DOD should improve its approach to compiling and reporting a departmentwide assessment with the aim of providing greater detail on the results of the core determination process. Regarding our recommendations on the services’ submissions and mathematical calculations used in the core determination process, DOD stated that DOD Instruction 4151.20 provides a consistent format and process for the services to follow in developing their core requirements and sustaining workloads. As part of the next data call, DOD plans to reiterate and incorporate our recommendation to prevent errors and inconsistencies in the services’ core calculations. DOD further stated that it will provide explicit guidance that the services follow and complete the core calculation worksheets in their entirety. DOD’s planned actions should focus more attention on the need to ensure accurate and consistent core submissions and calculations across the services. 
However, our report notes that the services took different approaches in implementing DOD's core guidance, and DOD Instruction 4151.20 does not substantially change this guidance. Therefore, DOD should take the additional steps that we recommended, such as instituting internal controls, for ensuring service compliance with its core determination guidance. Regarding our recommendation to ensure corrective actions are taken to resolve core capability shortfalls, DOD stated that it (1) has tasked the services with providing plans for eliminating shortfalls identified during the 2009 core determination process and (2) will identify shortfalls as a semiannual agenda item for a senior-level maintenance steering committee until they are resolved. As noted in our report, we believe DOD's tasking to the services is a step in the right direction, but falls short of establishing an effective mechanism to ensure that shortfalls are corrected. In addition, on the basis of DOD's comments and information we subsequently obtained about the charter for the steering committee, it is unclear to what extent this entity will provide an effective mechanism to resolve shortfalls. Therefore, while DOD's planned actions are positive steps, DOD may need to take additional actions to fully meet the intent of this recommendation. DOD partially concurred with a recommendation in our draft report aimed at enhancing the visibility and oversight of the core process. DOD stated that the department will continue to provide Congress with information it requests for oversight. The department also stated that it plans to make the results of the core determination process available on a DOD Web site. However, DOD opposed generating reports that Congress has not requested. As we state in the report, Congress does not have readily available and routine visibility of the status of DOD's core capability, including core requirements, associated workloads, and shortfalls, if any exist. As a result, Congress is not in the best position to make oversight decisions, and DOD is not held accountable for the extent to which the military possesses the core logistics capability specified in Section 2464 of Title 10, U.S. Code. Therefore, we have replaced this recommendation with a matter for congressional consideration that DOD be required to report on the status of its effort to maintain a core logistics capability consistent with Section 2464. DOD concurred with our recommendations to provide better assurance that program offices identify and establish core depot maintenance capabilities for new and modified systems in a timely manner. DOD stated that it will revise guidance in DOD Instruction 5000.02, DOD Directive 4151.18, and the Defense Acquisition Guidebook to provide more specificity on how to identify and establish core capability during the acquisition process. DOD also plans to revise its guidance on the core determination process (DOD Instruction 4151.20) to provide specific core analysis guidance and procedures for systems being acquired. Additionally, DOD will issue interim policy until the applicable guidance has been revised. These actions, if implemented, should meet the intent of our recommendations. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions regarding this report, please contact me at (202) 512-8365 or [email protected]. Key contributors to the report are listed in appendix III. To evaluate the extent to which DOD has accurately assessed whether it has the required core capabilities in military depots to support fielded systems, we reviewed DOD's core determination process for 2007. We obtained and reviewed the Office of the Secretary of Defense (OSD) guidance in effect for the 2007 core process, as well as subsequent guidance issued for the 2009 biennial review. We reviewed the military services' implementation of the core determination methodology. We obtained and analyzed their core worksheets showing core capability requirements and associated planned workloads. We determined the extent to which the services followed the methodology, and we identified any errors or inconsistencies. Where we identified errors or inconsistencies, however, we did not recalculate core requirements. We obtained OSD's internal report summarizing the results of the 2007 core process and compared it with the worksheet data submitted by the services. In addition, we discussed the core determination process and the results of our data analyses with OSD and service officials. Although our review focused on the 2007 core determination process, we obtained limited information from DOD officials on the 2009 core process, which was ongoing at the time of our review. We also compared the results of the 2007 core process with those of the 2005 process to identify any trends and to determine how identified shortfalls in core capability were being addressed and resolved. One limitation in our methodology was that we did not assess DOD's decisions on the weapon systems that were identified in the Joint Chiefs of Staff (JCS) scenarios. Inaccurate tasking of weapon systems could either overstate or understate core capability requirements for fielded systems. We also reviewed our prior reports on core capability and depot maintenance issues, as well as related reports issued by service audit agencies and research organizations. To determine the extent to which DOD is preparing to support future core requirements for new and modified systems in military depots, we examined pertinent DOD guidance, including acquisition guidance in DOD's 5000 series of directives and instructions, DOD guidance for managing military materiel, and service acquisition policies. To obtain information on the identification of core requirements for new and modified systems, we asked the services to identify systems that were in the acquisition process during 2006. Due to DOD data limitations, we could not verify that the services included all systems meeting our criteria. We then surveyed program managers about whether they had conducted a core analysis for their system. We received responses from 112 program managers, including 52 who responded that they had performed a core analysis or source of repair analysis; we conducted additional follow-up audit work with these 52 program managers. We did not assess the rationale for the decisions made on identifying the systems' core requirements.
However, we collected comments from OSD and service officials and examined service documents on the factors that complicate program managers' decisions to identify core requirements during the acquisition process. To determine whether the services established core logistics capabilities for new and modified systems for which a core requirement had been identified, we reviewed systems that had completed the acquisition process and were in operation between 1998 and 2003. From a total population of 662 systems that met these criteria, we randomly selected 53 systems and judgmentally selected another 20 systems. From this list of 73 systems, we subsequently excluded 43 weapon systems for various reasons, as shown in table 6, leaving a total of 30 systems. Because the selected systems do not represent a statistical sample, the results cannot be used to make inferences about the population. We further reviewed the 30 systems to determine whether core capabilities were established at military depots within 4 years of their initial operational capability date. We reviewed various program documents, including source-of-repair decisions and maintenance plans, and interviewed program officials about the characteristics of the systems and maintenance sustainment decisions. Further, we examined Defense Acquisition Board documents for some of the selected weapon systems to determine if core capabilities were recorded when future sustainment agreements were discussed in acquisition reviews. We assessed the reliability of the data from the services' databases that we used to conduct our review and determined that the DOD data were sufficiently reliable for the purposes of our analysis and findings. While the results of these reviews cannot be generalized to all weapon systems in the acquisition process, deficiencies in the way core capability is identified or established for these systems indicate the existence of more widespread problems. Further, we did not look at the larger question of whether DOD fulfilled the warfighter's requirements as part of our review. In conducting work for both objectives, we interviewed officials and obtained documentation, when applicable, at the following locations:
Office of the Secretary of Defense, Washington, D.C.
Joint Chiefs of Staff, Washington, D.C.
Air Force Headquarters, Washington, D.C.
Air Force Materiel Command, Ohio
Oklahoma City Air Logistics Center, Oklahoma
Warner Robins Air Logistics Center, Georgia
Army Headquarters, Washington, D.C.
Army Materiel Command, Virginia
Anniston Army Depot, Alabama
Corpus Christi Army Depot, Texas
Tobyhanna Army Depot, Pennsylvania
U.S. Army Aviation and Missile Command, Alabama
U.S. Army Communications-Electronics Command, New Jersey
TACOM Life Cycle Management Command, Michigan
Marine Corps Systems Command, Virginia
Marine Corps Logistics Command, Georgia
Navy Headquarters, Washington, D.C.
Naval Air Systems Command, Maryland
Fleet Readiness Center East, North Carolina
Naval Sea Systems Command, District of Columbia
Norfolk Naval Shipyard, Virginia
We conducted this performance audit from June 2007 through March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Julia Denman and Tom Gosling, Assistant Directors; Carleen Bennett; Grace Coleman; Susan Ditto; David Epstein; Chanee Gaskin; Dawn Godfrey; Katherine Lenane; Shawnda Lindsey; Randy Neice; Geoffrey Peck; Terry Richardson; and John Trubey made key contributions to this report.
The Department of Defense (DOD) is required, by law, to maintain a core logistics capability that is government owned and government operated to meet contingency and other emergency requirements. Military depots play a key role in maintaining this "core capability," although in recent years DOD has significantly increased its use of contractors. At the subcommittee's request, GAO examined the extent to which (1) DOD has accurately assessed whether it has the required core capabilities in military depots and (2) DOD is preparing to support future core requirements for new and modified systems. GAO reviewed DOD's biennial process for determining core capability requirements and the associated workloads for fielded systems. GAO also reviewed whether DOD had identified and established core capability in a timely manner for new and modified systems. DOD, through its biennial core process, has not comprehensively and accurately assessed whether it has the required core capability to support fielded systems in military depots. Although DOD internally reported that its maintenance workload of 92.7 million hours in 2007 was "well over" the minimum of 70.5 million hours needed to fulfill core requirements at military depots and that the services were complying with their core capability requirements, this assessment did not show capability shortfalls identified by the services in their core computations. GAO's analysis of the services' 2007 core capabilities data determined that the Army, Navy, and Marine Corps had shortfalls for some equipment categories or technologies. For example, the Army identified core shortfalls of 1.4 million hours for 10 equipment categories. Several factors contributed to the deficiencies in the core process. Current guidance does not address how DOD is to consolidate the services' results into a meaningful departmentwide assessment. Also, there were errors and inconsistencies in the services' core calculations, making the full extent of the shortfalls unclear, and DOD did not have effective internal controls in place to identify and resolve these errors and inconsistencies. Further, DOD's core process does not have an effective mechanism for ensuring that corrective actions are taken to resolve shortfalls for fielded systems. As a result of shortcomings in the core process, DOD does not know the extent to which the military depots will have the capability to repair weapon systems to support future military operations. Finally, since DOD is not required to provide Congress information on its core process, the results of the process are not readily and routinely visible for purposes of congressional oversight. DOD is not adequately preparing military depots to support future core requirements through its acquisition process. Specifically, for the new and modified systems included in GAO's review, the department had neither identified nor established core capabilities for certain systems in a timely manner. DOD acquisition guidance requires that an analysis of core requirements for new and modified systems be conducted early in the acquisition phase (no later than Milestone B, or no later than Milestone C if there is no Milestone B). However, GAO found that program offices managing 20 of the 52 systems GAO reviewed did not identify core requirements by Milestone C. DOD is also not establishing core capabilities for new and modified systems in a timely manner--that is, within 4 years of the system's achieving its initial operational capability, as required under DOD guidance.
Shortcomings in the acquisition process include the following: (1) acquisition guidance provides little or no information on how to identify and plan for the establishment of core capability, (2) program acquisition strategies do not fully address core requirements, and (3) some program offices are not procuring the technical data necessary to establish a core capability. As a result, DOD has little assurance that military depots are being prepared to meet future national defense contingencies.
DHS has increased its global outreach efforts. Historically, DHS and its components, working with State, have coordinated with foreign partners on an ongoing basis to promote aviation security enhancements through ICAO and other multilateral and bilateral outreach efforts. For example, DHS and TSA have coordinated through multilateral groups such as the European Commission and the Quadrilateral Group—comprising the United States, the EU, Canada, and Australia—to establish agreements to develop commensurate air cargo security systems. On a bilateral basis, the United States has participated in various working groups to facilitate coordination on aviation security issues with several nations, such as those that make up the EU, Canada, and Japan. The United States has also established bilateral cooperative agreements to share information on security technology with the United Kingdom, Germany, France, and Israel, among others. In addition, TSA has finalized agreements to provide ICAO with technical expertise and assistance in the areas of capacity building and security audits, and serves as the United States' technical representative on ICAO's Aviation Security Panel and the panel's various Working Groups. In the wake of the December 2009 incident, DHS increased its outreach efforts. For example, to address security gaps highlighted by the December incident, DHS has coordinated with Nigeria to deploy Federal Air Marshals on flights operated by U.S. carriers bound for the United States from Nigeria. Further, in early 2010, the Secretary of Homeland Security participated in five regional summits—covering Africa, the Asia/Pacific region, Europe, the Middle East, and the Western Hemisphere—with the Secretary General of ICAO, foreign ministers and aviation officials, and international industry representatives to discuss current aviation security threats and develop an international consensus on the steps needed to address remaining gaps in the international aviation security system. Each of these summits resulted in a Joint Declaration on Aviation Security in which, generally, the parties committed to work through ICAO and on an individual basis to enhance aviation security. Subsequently, during the September 2010 ICAO Assembly, the 190 member states adopted a Declaration on Aviation Security, which encompassed the principles of the Joint Declarations produced by the five regional summits.
Through the declaration, member states recognized the need to strengthen aviation security worldwide and agreed to take nine actions to enhance international cooperation to counter threats to civil aviation, which include, among other things: strengthening and promoting the effective application of ICAO Standards and Recommended Practices, with particular focus on Annex 17, and developing strategies to address current and emerging threats; strengthening security screening procedures, enhancing human factors, and utilizing modern technologies to detect prohibited articles and support research and development of technology for the detection of explosives, weapons, and prohibited articles in order to prevent acts of unlawful interference; developing and implementing strengthened and harmonized measures and best practices for air cargo security, taking into account the need to protect the entire air cargo supply chain; and providing technical assistance to states in need, including funding, capacity building, and technology transfer to effectively address security threats to civil aviation, in cooperation with other states, international organizations, and industry partners. TSA has increased coordination with foreign partners to enhance security standards and practices. In response to the August 2006 plot to detonate liquid explosives on board commercial air carriers bound for the United States, TSA initially banned all liquids, gels, and aerosols from being carried through the checkpoint and, in September 2006, began allowing passengers to carry on small, travel-size liquids and gels (3 fluid ounces or less) using a single quart-size, clear plastic, zip-top bag. In November 2006, in an effort to harmonize its liquid-screening standards with those of other countries, TSA revised its procedures to match those of other select nations. Specifically, TSA began allowing up to 3.4 fluid ounces of liquids, gels, and aerosols onboard aircraft—equivalent to the 100 milliliters permitted by the EU and other countries such as Canada and Australia. This harmonization effort was perceived to be a success, and ICAO later adopted the liquids, gels, and aerosols screening standards and procedures implemented by TSA and other nations as a recommended practice. TSA has also worked with foreign governments to draft international air cargo security standards. According to TSA officials, the agency has worked with foreign counterparts over the last 3 years to draft Amendment 12 to ICAO's Annex 17, and to generate support for its adoption by ICAO members. The amendment, which was adopted by the ICAO Council in November 2010, will set forth new standards related to air cargo, such as requiring members to establish a system to secure the air cargo supply chain (the flow of goods from manufacturers to retailers). TSA has also supported the International Air Transport Association's (IATA) efforts to establish a secure supply chain approach to screening cargo for its member airlines and to have these standards recognized internationally. Moreover, following the October 2010 bomb attempt in cargo originating in Yemen, DHS and TSA, among other things, reached out to international partners, IATA, and the international shipping industry to emphasize the global nature of transportation security threats and the need to strengthen air cargo security through enhanced screening and preventative measures.
TSA also deployed a team of security inspectors to Yemen to provide that country's government with assistance and guidance on its air cargo screening procedures. In addition, TSA has focused on harmonizing air cargo security standards and practices in support of its statutory mandate to establish a system to physically screen 100 percent of cargo on passenger aircraft—including the domestic and inbound flights of United States and foreign passenger operations—by August 2010. In June 2010, we reported that TSA has made progress in meeting this mandate as it applies to domestic cargo, but faces several challenges in meeting the screening mandate as it applies to inbound cargo, related, in part, to TSA's limited ability to regulate foreign entities. As a result, TSA officials stated that the agency would not be able to meet the mandate as it applies to inbound cargo by the August 2010 deadline. We recommended that TSA develop a plan, with milestones, for how and when the agency intends to meet the mandate as it applies to inbound cargo. TSA concurred with this recommendation and, in June 2010, stated that agency officials were drafting milestones as part of a plan that would generally require air carriers to conduct 100 percent screening by a specific date. At a November 2010 hearing before the Senate Committee on Commerce, Science, and Transportation, the TSA Administrator testified that TSA aims to meet the 100 percent screening mandate as it applies to inbound air cargo by 2013. In November 2010, TSA officials stated that the agency is coordinating with foreign countries to evaluate the comparability of their air cargo security requirements with those of the United States, including the mandated screening requirements for inbound air cargo on passenger aircraft. According to TSA officials, the agency has begun to develop a program that would recognize the air cargo security programs of foreign countries if TSA deems that those programs provide a level of security commensurate with TSA's programs. In total, TSA plans to coordinate with about 20 countries, which, according to TSA officials, were selected in part because they export about 90 percent of the air cargo transported to the United States on passenger aircraft. According to officials, TSA has completed a 6-month review of France's air cargo security program and is evaluating the comparability of France's requirements with those of the United States. TSA officials also said that, as of November 2010, the agency had begun to evaluate the comparability of air cargo security programs for the United Kingdom, Israel, Japan, Singapore, New Zealand, and Australia, and plans to work with Canada and several EU countries in early 2011. TSA expects to work with the remaining countries through 2013. TSA is working with foreign governments to encourage the development and deployment of enhanced screening technologies. For example, TSA has coordinated with foreign governments to develop screening technologies that will detect explosive materials on passengers. According to TSA officials, the agency frequently exchanges information with its international partners on progress in testing and evaluating various screening technologies, such as bottled-liquid scanner systems and advanced imaging technology (AIT). In response to the December 2009 incident, the Secretary of Homeland Security has emphasized through outreach efforts the need for nations to develop and deploy enhanced security technologies.
Following TSA’s decision to accelerate the deployment of AIT in the United States, the Secretary has encouraged other nations to consider using AIT units to enhance the effectiveness of passenger screening globally. As a result, several nations, including Australia, Canada, Finland, France, the Netherlands, Nigeria, Germany, Poland, Japan, Ukraine, Russia, Republic of Korea, and the UK, have begun to test or deploy AIT units or have committed to deploying AITs at their airports. For example, the Australian Government has committed to introducing AIT at international terminals in 2011. Other nations, such as Argentina, Chile, Fiji, Hong Kong, India, Israel, Kenya, New Zealand, Singapore, and Spain are considering deploying AIT units at their airports. In addition, TSA hosted an international summit in November 2010 that brought together approximately 30 countries that are deploying or considering deploying AITs at their airports to discuss AIT policy, protocols, best practices, as well as safety and privacy concerns. However, as discussed in our March 2010 testimony, TSA’s use of AIT has highlighted several challenges relating to privacy, costs, and effectiveness that remain to be addressed. For example, because the AIT presents a full-body image of a person during the screening process, concerns have been expressed that the image is an invasion of privacy. Furthermore, as noted in our March 2010 testimony, it remains unclear whether the AIT would have been able to detect the weapon used in the December 2009 incident based on the preliminary TSA information we have received. We will continue to explore these issues as part of our ongoing review of TSA’s AIT deployment, and expect the final report to be issued in the summer of 2011. TSA conducts foreign airport assessments. TSA efforts to assess security at foreign airports—airports served by U.S. aircraft operators and those from which foreign air carriers operate service to the United States—also serve to strengthen international aviation security. Through TSA’s foreign airport assessment program, TSA utilizes select ICAO standards to assess the security measures used at foreign airports to determine if they maintain and carry out effective security practices. TSA also uses the foreign airport assessment program to help identify the need for, and secure, aviation security training and technical assistance for foreign countries. In addition, during assessments, TSA provides on-site consultations and makes recommendations to airport officials or the host government to immediately address identified deficiencies. In our 2007 review of TSA’s foreign airport assessment program, we reported that of the 128 foreign airports that TSA assessed during fiscal year 2005, TSA found that 46 (about 36 percent) complied with all ICAO standards, whereas 82 (about 64 percent) did not meet at least one ICAO standard. In our 2007 review we also reported that TSA had not yet conducted its own analysis of its foreign airport assessment results, and that additional controls would help strengthen TSA’s oversight of the program. Moreover, we reported, among other things, that TSA did not have controls in place to track the status of scheduled foreign airport assessments, which could make it difficult for TSA to ensure that scheduled assessments are completed. We also reported that TSA did not consistently track and document host government progress in addressing security deficiencies identified during TSA airport assessments. 
As such, we made several recommendations to help TSA strengthen oversight of its foreign airport assessment program, including, among other things, that TSA develop controls to track the status of foreign airport assessments from initiation through completion and develop a standard process for tracking and documenting host governments' progress in addressing security deficiencies identified during TSA assessments. TSA agreed with our recommendations and provided plans to address them. Near the end of our 2007 review, TSA had begun work on developing an automated database to track airport assessment results. In September 2010, TSA officials told us that they are now exploring ways to streamline and standardize that automated database, but will continue to use it until a more effective tracking mechanism can be developed and deployed. We plan to further evaluate TSA's implementation of our 2007 recommendations during our ongoing review of TSA's foreign airport assessment program, which we plan to issue in the fall of 2011. A number of key challenges, many of which are outside of DHS's control, could impede its ability to enhance international aviation security standards and practices. Agency officials, foreign country representatives, and international association stakeholders we interviewed said that these challenges include, among other things, the voluntary nature of nations' participation in harmonization efforts, differing views on aviation security threats, varying global resources, and legal and cultural barriers. According to DHS and TSA officials, these are long-standing global challenges that are inherent in diplomatic processes such as harmonization, and addressing them will require substantial and continuous dialogue with international partners. As a result, according to these officials, the enhancements that are made will likely occur incrementally, over time. Harmonization depends on voluntary participation. The framework for developing and adhering to international aviation standards is based on voluntary efforts from individual states. While TSA may require that foreign air carriers with operations to, from, or within the United States comply with any applicable U.S. emergency amendments to air carrier security programs, foreign countries, as sovereign nations, generally cannot be compelled to implement specific aviation security standards or mutually accept other countries' security measures. International representatives have noted that national sovereignty concerns limit the influence the United States and its foreign partners can have in persuading any country to participate in international harmonization efforts. As we reported in 2007 and 2010, participation in ICAO is voluntary. Each nation must initiate its own involvement in harmonization, and the United States may have limited influence over its international partners. Countries view aviation security threats differently. As we reported in 2007 and 2010, some foreign governments do not share the United States government's position that terrorism is an immediate threat to the security of their aviation systems, and therefore may not view international aviation security as a priority. For example, TSA identified the primary threats to inbound air cargo as the introduction of an explosive device in cargo loaded on a passenger aircraft and the hijacking of an all-cargo aircraft for use as a weapon to inflict mass destruction.
However, not all foreign governments agree that these are the primary threats to air cargo or believe that there should be a distinction between the threats to passenger air carriers and those to all-cargo carriers. According to a prominent industry association as well as foreign government representatives with whom we spoke, some countries view aviation security enhancement efforts differently because they have not been a target of previous aviation-based terrorist incidents, or for other reasons, such as overseeing a different airport infrastructure with fewer airports and less air traffic. Resource availability affects security enhancement efforts. In contrast to more developed countries, many less developed countries do not have the infrastructure or the financial or human resources necessary to enhance their aviation security programs. For example, according to DHS and TSA officials, such countries may find the cost of purchasing and implementing new aviation security enhancements, such as technology, to be prohibitive. Additionally, some countries implementing new policies, practices, and technologies may lack the human resources—for example, trained staff—to implement enhanced security measures and oversee new aviation security practices. Some foreign airports may also lack the infrastructure to support new screening technologies, which can take up a large amount of space. Such limitations make it difficult for these countries to implement and sustain enhanced aviation security measures. With regard to air cargo, TSA officials also cautioned that if TSA were to impose strict cargo screening standards on all inbound cargo, it is likely that many nations would be unable to meet the standards in the near term. Imposing such screening standards in the near future could result in increased costs for international passenger travel and for imported goods, and possible reductions in passenger traffic and foreign imports. According to TSA officials, strict standards could also undermine TSA's ongoing cooperative efforts to develop commensurate security systems with international partners. To help address the resource deficit and build management capacity in other nations, the United States provides aviation security assistance—such as training and technical assistance—to other countries. TSA, for example, works in various ways with State and international organizations to provide aviation security assistance to foreign partners. In one such effort, TSA uses information from its foreign airport assessments to identify a nation's aviation security training needs and provide support. In addition, TSA's Aviation Security Sustainable International Standards Team (ASSIST), composed of security experts, conducts an assessment of a country's aviation security program at both the national and airport level and, based on the results, suggests action items in collaboration with the host nation. State also provides aviation security assistance to other countries, in coordination with TSA and foreign partners, through its Anti-Terrorism Assistance (ATA) program. Through this program, State uses a needs assessment—a snapshot of a country's antiterrorism capability—to evaluate prospective program participants and provide needed training, equipment, and technology in support of aviation security, among other areas.
State and TSA officials have acknowledged the need to develop joint coordination procedures and criteria to facilitate identification of global priorities and program recipients. We will further explore TSA and State efforts to develop mechanisms to facilitate interagency coordination on capacity building through our ongoing work. Legal and cultural factors can also affect harmonization. Legal and cultural differences among nations may hamper DHS's efforts to harmonize aviation security standards. For example, some nations, including the United States, limit, or even prohibit, the sharing of sensitive or classified information on aviation security procedures with other countries. Canada's Charter of Rights and Freedoms, which limits the data the Canadian government can collect and share with other nations, is one such impediment to harmonization. According to TSA officials, the United States has established agreements to share sensitive and classified information with some countries; however, without such agreements, TSA is limited in its ability to share information with its foreign partners. Additionally, the European Commission reports that several European countries, by law, limit the exposure of persons to radiation other than for medical purposes, a potential barrier to acquiring some passenger screening technologies, such as AIT. Cultural differences also pose a challenge to harmonization because aviation security standards and practices that are acceptable in one country may not be acceptable in another. For example, international aviation officials explained that the nature of aviation security oversight varies by country—some countries rely more on trust and established working relationships to facilitate security standard compliance than on direct government enforcement. Another example of a cultural difference is the extent to which countries accept the images AIT units produce. AIT units produce a full-body image of a person during the screening process; to varying degrees, governments and citizens of some countries, including the United States, have expressed concern that these images raise privacy issues. TSA is working to address this issue by evaluating possible display options that would include a "stick figure" or "cartoon-like" form to provide enhanced privacy protection to the individual being screened while still allowing the unit operator or automated detection algorithms to detect possible threats. Other nations, such as the Netherlands, are also testing the effectiveness of this technology. Although DHS has made progress in its efforts to harmonize international aviation security standards and practices in key areas such as passenger and air cargo screening, officials we interviewed said that there remain areas in which security measures vary across nations and would benefit from harmonization efforts. For example, as we reported in 2007, the United States requires all passengers on international flights who transfer to connecting flights at United States airports to be rescreened prior to boarding their connecting flight. In comparison, according to EU and ICAO officials, the EU has implemented "one-stop security," allowing passengers arriving from EU and select European airports to transfer to connecting flights without being rescreened.
Officials and representatives told us that although there has been ongoing international discussion on how to more closely align security measures in these and other areas, additional dialogue is needed for countries to better understand each other's perspectives. According to the DHS officials and foreign representatives with whom we spoke, these and other issues that could benefit from harmonization efforts will continue to be explored through ongoing coordination with ICAO and through other multilateral and bilateral outreach efforts. Our 2007 review of TSA's foreign airport assessment program identified challenges TSA experienced in assessing security at foreign airports against ICAO standards and recommended practices, including a lack of available inspector resources and host government concerns, both of which may affect the agency's ability to schedule and conduct assessments for some foreign airports. We reported that TSA deferred 30 percent of its scheduled foreign airport visits in 2005 because of the lack of available inspectors, among other reasons. TSA officials said that in such situations they sometimes used domestic inspectors to conduct scheduled foreign airport visits, but also stated that the use of domestic inspectors was undesirable because these inspectors lacked experience conducting assessments in the international environment. In September 2010, TSA officials told us that they continue to use domestic inspectors to assist in conducting foreign airport assessments and air carrier inspections—approximately 50 domestic inspectors have been trained to augment the efforts of international inspectors. We also previously reported that representatives of some foreign governments consider TSA's foreign airport assessment program an infringement of their authority to regulate airports and air carriers within their borders. Consequently, some foreign countries have withheld certain types of information or denied TSA access to areas within an airport, limiting the scope of TSA's assessments. We plan to further assess this issue, as well as other potential challenges, as part of our ongoing review of TSA's foreign airport assessment program, which we plan to issue in the fall of 2011. Mr. Chairman, this completes my prepared statement. I look forward to responding to any questions you or other members of the committee may have at this time. For additional information about this statement, please contact Stephen M. Lord at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, staff who made key contributions to this statement were Steve D. Morris, Assistant Director; Carissa D. Bryant; Christopher E. Ferencik; Amy M. Frazier; Barbara A. Guffy; Wendy C. Johnson; Stanley J. Kostyla; Thomas F. Lombardi; Linda S. Miller; Matthew M. Pahl; Lisa A. Reijula; Rebecca Kuhlmann Taylor; and Margaret A. Ullengren. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The attempted December 25, 2009, terrorist attack and the October 2010 bomb attempt involving air cargo originating in Yemen highlight the ongoing threat to aviation and the need to coordinate security standards and practices with foreign partners, a process known as harmonization. This testimony discusses the Department of Homeland Security’s (DHS) progress and challenges in harmonizing international aviation security standards and practices and facilitating compliance with international standards. This testimony is based on reports GAO issued from April 2007 through June 2010 and ongoing work examining foreign airport assessments. For this work, GAO obtained information from DHS and the Transportation Security Administration (TSA) and interviewed TSA program officials, foreign aviation officials, and representatives from international organizations, such as the International Civil Aviation Organization (ICAO), and industry associations about ongoing harmonization and TSA airport assessment efforts and challenges. In the wake of the December 2009 terrorist incident, DHS and TSA have strived to enhance ongoing efforts to harmonize international security standards and practices through increased global outreach, coordination of standards and practices, use of enhanced technology, and assessments of foreign airports. For example, in 2010 the Secretary of Homeland Security participated in five regional summits aimed at developing an international consensus to enhance aviation security. In addition, DHS and TSA have coordinated with foreign governments to harmonize air cargo security practices to address the statutory mandate to screen 100 percent of air cargo transported on U.S.-bound passenger aircraft by August 2010, a mandate TSA aims to meet by 2013. Further, in the wake of the December 2009 incident, the Secretary of Homeland Security has encouraged other nations to consider using advanced imaging technology (AIT), which produces an image of a passenger’s body that screeners use to look for anomalies such as explosives. As a result, several nations have begun to test and deploy AIT or have committed to deploying AIT units at their airports. Moreover, following the October 2010 cargo bomb attempt, TSA implemented additional security requirements to enhance air cargo security. To facilitate compliance with international security standards, TSA assesses the security efforts of foreign airports against ICAO international aviation security standards. In 2007, GAO reported, among other things, that TSA did not always consistently track and document host government progress in addressing security deficiencies identified during foreign airport assessments and recommended that TSA track and document progress in this area. DHS and TSA have made progress in their efforts to enhance international aviation security through these harmonization efforts and related foreign airport assessments; however, a number of key challenges, many of which are beyond DHS’s control, exist. For example, harmonization depends on the willingness of sovereign nations to voluntarily coordinate their aviation security standards and practices. In addition, foreign governments may view aviation security threats differently and therefore may not consider international aviation security a high priority. Resource availability, which is a particular concern for developing countries, as well as legal and cultural factors, may also affect nations’ security enhancement and harmonization efforts.
In addition to challenges facing DHS's harmonization efforts, in 2007 GAO reported that TSA experienced challenges in assessing foreign airport security against international standards and practices, such as a lack of available international inspectors and concerns host governments had about being assessed by TSA, both of which may affect the agency's ability to schedule and conduct assessments for some foreign airports. GAO is exploring these issues as part of an ongoing review of TSA's foreign airport assessment program, which GAO plans to issue in the fall of 2011. In response to prior GAO recommendations that TSA, among other things, track the status of foreign airport assessments, DHS concurred and is working to address the recommendations. TSA provided technical comments on a draft of the information contained in this statement, which GAO incorporated as appropriate.
Prior to the 1930s, securities markets were overseen by various state securities regulatory bodies and the securities exchanges themselves. In the aftermath of the stock market crash of 1929, the Securities Exchange Act of 1934 (SEA) created SEC as a new federal agency and gave it authority to register and oversee securities broker-dealers, as well as securities exchanges, to strengthen securities oversight and address inconsistent state securities rules. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. In addition to regulation by SEC and state agencies, securities markets and the broker-dealers that accept and execute customer orders in these markets continue to be regulated by self-regulatory organizations (SRO), including the Financial Industry Regulatory Authority, that are funded by the participants in the industry. Among other things, these SROs establish rules and conduct examinations related to market integrity and investor protection. SEC also registers and oversees investment companies and advisers, approves rules for the industry, and conducts examinations of broker-dealers and mutual funds. State securities regulators are generally responsible for registering certain securities products and, along with SEC, investigating securities fraud. SEC is also responsible for overseeing the financial reporting and disclosures that companies issuing securities must make under U.S. securities laws. Oversight of the trading of futures contracts has changed over the years in response to changes in the marketplace. Under the Grain Futures Act of 1922, the trading of futures contracts was overseen by the Grain Futures Administration, an office within the Department of Agriculture, reflecting the agricultural nature of the products for which futures contracts were then traded. However, futures contracts were later created for nonagricultural commodities, such as energy products like oil and natural gas, metals such as gold and silver, and financial products such as Treasury bonds and foreign currencies. In 1974, through amendments to the Commodity Exchange Act (CEA), Congress created CFTC as a new independent federal agency to oversee the trading of futures contracts. CFTC’s mission is to protect market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity and financial futures and options, and to foster open, competitive, and financially sound futures markets. Like SEC, CFTC oversees the registration of intermediaries, including futures commission merchants (FCM), and relies on SROs, including the futures exchanges and the National Futures Association, to establish and enforce rules governing member behavior. The Commodity Futures Modernization Act of 2000 (CFMA) established a principles-based structure for the regulation of futures exchanges and derivatives clearing organizations, and clarified that some off-exchange derivatives trading—and in particular trading on facilities accessible only to large, sophisticated traders—was permitted and would be largely unregulated or exempt from regulation. In recent decades, CFTC and SEC have sought ways to resolve jurisdictional disputes and address other emerging areas of overlap in their respective oversight of futures and securities markets. For example, in 1981, CFTC and SEC reached an agreement, called the Shad-Johnson Jurisdictional Accord, to clarify their respective jurisdictions over securities-based options and futures.
The accord was enacted into law in January 1983 and, among other things, confirmed SEC’s jurisdiction over securities-based options, including options on stocks and stock indexes; provided CFTC with jurisdiction over futures (and options thereon) on certain securities and securities indexes; and prohibited futures trading on single stocks, as well as on securities indexes that did not meet specific requirements. In 2000, CFMA lifted the ban on futures on single stocks and narrow-based securities indexes, allowing them to be traded on securities or futures exchanges but subject to the joint regulation of CFTC and SEC. Pursuant to the CFMA, the two agencies worked together to jointly create margin requirements for single stock futures. Exchanges that list and trade security futures are subject to the jurisdiction of both CFTC and SEC; this is one example of how the securities and futures markets have overlapped in terms of regulated entities. In addition, financial intermediaries must register with both CFTC and SEC if they serve investors trading in instruments subject to the jurisdiction of the two agencies. According to the joint report, approximately 45 percent of futures commission merchants are also registered with SEC as broker-dealers. The joint report provides additional examples of the agencies’ efforts to collaborate in various areas. For example, in March 2008, the two agencies entered into a memorandum of understanding with the goal of creating a closer relationship between the agencies on a broad range of issues affecting their jurisdictions. The agreement identified points of contact for coordination, outlined a protocol for addressing novel derivative products, and generally contemplated enhanced information sharing between the two agencies on areas of mutual concern and interest. Despite efforts by the agencies to define their respective regulatory jurisdictions, jurisdictional disputes have periodically delayed the introduction of novel derivative products to the marketplace. The joint report notes that the governing statutes do not definitively address the fundamental question of whether certain derivative instruments qualify as futures contracts or options. In one recent example, in January 2005 the Chicago Board Options Exchange (CBOE) filed a proposal with SEC to list and trade a new option on an exchange-traded fund holding investments involving gold, but introduction of this product was delayed by over 3 years because CFTC and SEC could not reach agreement on jurisdiction. In another instance, according to the Chief Executive Officer of CBOE, a proposal to list an option on a credit default product was placed on hold for 7 months, while a European derivatives exchange introduced a similar product within weeks of the proposal’s announcement. These examples illustrate the potential for such delays to create domestic and international competitive disadvantages for U.S. exchanges and clearinghouses attempting to introduce novel products. In its June 2009 white paper on financial regulatory reform, Treasury noted that the broad public policy objectives of futures and securities regulation are the same and that many of the differences in the regulation of the markets are no longer justified. Specifically, Treasury expressed the following concerns: Economically equivalent instruments may be regulated in different manners, depending on which agency has jurisdiction.
For example, many futures products and financial options regulated as securities are similar, and the returns to one can often be replicated with the other. Jurisdictional disputes consume significant agency resources, and uncertainty about the outcome of such disputes may impede innovation. Jurisdictional distinctions may have unnecessarily limited competition between markets and exchanges. Under existing law, financial instruments with similar characteristics may be forced to trade on different exchanges that are subject to different regulatory regimes. The agencies follow different approaches to the regulation of exchanges, clearing organizations, and intermediaries. Pursuant to the CEA, CFTC employs a more principles-based approach to regulation, under which market participants can have greater flexibility in complying with regulatory requirements than under a more rules-based approach. Treasury suggested that the two agencies seek agreement on principles of regulation that are significantly more precise than the CEA’s current “core principles.” As noted earlier, Treasury recommended that the agencies make recommendations to address differences in statutes and regulations that are not justified by the agencies’ policy objectives. In the joint report, the agencies note that broad differences in futures and securities regulation reflect, in part, fundamental differences in the roles played by the two markets. Because of the role of certain securities markets in capital formation, for example, securities regulation is more concerned with disclosure than commodities regulation is. For example, securities with returns that depend on the issuer’s financial performance—such as stocks issued by institutions to raise capital—require more detailed disclosure to protect investors than futures products with returns that depend on changes in the price of a physical commodity. The primary purpose of the futures markets is to facilitate the management and transfer of risk, and certain securities markets, such as securities options and other securities derivatives markets, also facilitate the management and transfer of risk. As noted above, Treasury expressed concern that certain securities options and futures products are subject to different regulatory requirements although they serve similar purposes. To respond to Treasury’s recommendation, CFTC and SEC obtained public input and conducted independent and joint analyses to identify and assess significant differences in their statutes and rules. In July and August 2009, the agencies collaborated to prioritize and categorize issues on which to solicit public input. Through joint public meetings held in early September 2009 and a request for public comments, CFTC and SEC collected views on harmonization opportunities from a range of market participants and experts. The agencies worked together to analyze the information collected, develop recommendations, and draft the joint report. On the basis of their analysis of the public input, CFTC and SEC grouped issues of regulatory conflict into eight areas, and in the joint report made at least one recommendation in all but one of these categories. The agencies also made five recommendations intended to enhance operational coordination between them. 
The joint report focuses on differences in the agencies’ existing authorities and does not cover issues related to gaps in the agencies’ authorities to oversee over-the-counter derivatives, which were the subject of congressional deliberation at the time of their analysis. Given the tight time frame—Treasury recommended in June 2009 that the agencies report to Congress by the end of September 2009—agency staff said they focused on significant areas of difference and relied to a large extent on public input to help identify significant regulatory differences and, in turn, harmonization opportunities. As a first step, the agencies worked separately and together in July and August 2009 to analyze differences between them regarding their statutes and regulations. For example, CFTC and SEC staff completed a side-by-side analysis of the agencies’ respective statutes and rules in nine areas: (1) exchanges and markets, (2) clearance and settlement, (3) trading practices, (4) intermediaries, (5) Securities Act of 1933 and applicable provisions of the Exchange Act, (6) financial responsibility rules, (7) enforcement, (8) investment companies, and (9) investment advisers. According to CFTC and SEC staff, the agencies used this analysis to identify significant statutory and regulatory differences and to prioritize and categorize issues on which to solicit public input. Following these independent and joint analyses, CFTC and SEC sought input from the public in two ways. First, the agencies jointly arranged and hosted public meetings on September 2 and 3, 2009. For the joint public meetings, CFTC and SEC invited members of the investor community, academics, industry experts, and futures and securities market participants to participate in a series of panel discussions and provide their views on regulatory differences and harmonization opportunities. The agencies organized the meetings into five panel discussions, with each panel focused on one of five broad categories: (1) exchanges and markets, (2) intermediaries, (3) clearance and settlement, (4) enforcement, and (5) investment funds. Including the participation of all nine CFTC and SEC Commissioners and 30 panelists, these joint public meetings were unprecedented in the history of the two agencies, according to the joint report. Second, CFTC and SEC provided an opportunity for public comment from August 19 to September 14, 2009, on the issues to be discussed at the joint public meetings. In addition to the statements submitted by individuals who participated as panelists, the agencies received over a dozen statements offering the views of individuals or organizations not represented on the panels. According to CFTC and SEC staff, the agencies worked together to analyze the collected information, develop their findings and recommendations, and draft the joint report. On the basis of their analysis of comments obtained from the joint public meetings and public comment request, the agencies focused the joint report’s analysis on eight subject areas covering issues the agencies believe emerged as the most relevant to harmonizing their statutory and regulatory regimes: (1) product listing and approval, (2) exchange/clearinghouse rule changes, (3) risk-based portfolio margining and bankruptcy/insolvency regimes, (4) market structure, (5) price manipulation and insider trading, (6) customer protection standards applicable to financial advisers, (7) regulatory compliance by dual registrants, and (8) cross-border regulatory matters.
For each of the eight areas, the joint report includes discussion of statutes and regulations relevant to SEC oversight, followed by discussion of statutes and regulations relevant to CFTC oversight. For each area, the joint report also includes an analysis section in which the two agencies analyze the differences between their regulatory approaches. Each agency took responsibility for drafting the sections on its regulations and the statutes relevant to its authority. The agencies divided initial drafting responsibility for the analysis and recommendation sections, and CFTC and SEC staff said that the agencies shared their drafts with each other and incorporated each other’s comments. In the analysis sections, the agencies also incorporated public input obtained through the joint public meetings and the public comment period. CFTC and SEC jointly issued their report in October 2009 and made 15 recommendations that cover harmonization opportunities in all but one of the eight areas—market structure. Table 1 summarizes the joint report’s recommendations for statutory change and agency action in these seven areas. The recommendations for statutory change cover changes CFTC and SEC believe require legislative action to amend one or both of the agencies’ statutes, while the recommendations for agency action cover changes the agencies believe they can implement without action from Congress. In the joint report, the agencies note that market participants and other experts offered mixed views about whether differences in the futures and securities market structures are justified by the agencies’ policy objectives. Later in this report, we discuss opposing views on whether Congress should legislate changes to the structure of the futures industry to introduce features of the securities market structure. In addition, the agencies made five recommendations to enhance operational coordination between them: create a Joint Advisory Committee to be tasked with considering and developing solutions to emerging and ongoing issues of common interest in the futures and securities markets; create a Joint Agency Enforcement Task Force to share market surveillance data, improve market oversight, enhance enforcement, and relieve duplicative regulatory burdens; establish a joint cross-agency training program for staff; develop a program for the regular sharing of staff through detail assignments; and create a Joint Information Technology Task Force to pursue linking information on CFTC- and SEC-regulated persons and other information the agencies jointly find useful. The joint report’s recommendation for the creation of a Joint Advisory Committee included a request that Congress authorize CFTC and SEC to form, fund, and operate this committee. The other four recommendations for operational coordination did not identify a need for legislative action prior to implementation. The joint report does not cover issues related to gaps in the agencies’ regulatory authority with respect to over-the-counter derivatives. The executive summary of the joint report notes that these gaps were discussed in the Treasury white paper and were the subject of deliberation before Congress at the time of the agencies’ harmonization study. Consistent with Treasury’s request that the agencies identify existing conflicts in their rules and statutes, CFTC and SEC staff said that they chose to focus on their existing authorities in the report.
The joint report’s recommendations for statutory changes have yet to be enacted, and the recommendations for agency action remain in the planning stages. Congress authorized funding for the Joint Advisory Committee, as requested in the joint report, and legislation has been proposed that includes provisions addressing several recommended statutory changes. CFTC and SEC staff told us they expect to have the Joint Advisory Committee functioning by early summer 2010. The agencies have not yet established time frames for implementing the joint report’s other recommendations that do not require legislative action. According to CFTC and SEC staff, since issuing the joint report in October 2009, the agencies have been focused on working with Congress on drafting legislation to address statutory changes recommended in the joint report. To date, Congress has acted on a request in one of the agencies’ recommendations to enhance operational coordination: the Consolidated Appropriations Act, 2010, authorized CFTC and SEC to fund the Joint Advisory Committee. The joint report’s recommendations for changes to one or both of the agencies’ statutes have yet to be enacted. H.R. 4173, as passed by the House of Representatives, would address statutory changes recommended by the report in five areas, if enacted (see table 2). First, H.R. 4173 includes provisions that would enhance CFTC’s authority over exchange and clearinghouse compliance with the CEA, as recommended by the joint report. Second, by amending the Securities Investor Protection Act (SIPA) to extend SIPA protection to margin related to futures positions held in a securities portfolio margining account, H.R. 4173 would address one of the statutory changes recommended to facilitate portfolio margining. H.R. 4173 also includes provisions that address recommended enhancements to specific enforcement authorities of CFTC or SEC. For example, Sections 7207 and 7208 would grant SEC specific statutory authority for aiding and abetting under the Securities Act and the Investment Company Act. As noted in table 2, several of the H.R. 4173 provisions would represent only partial implementation of the joint report’s recommendations. For example, with respect to enhancing CFTC’s authority over exchange and clearinghouse rules, H.R. 4173 would not amend the CEA to allow CFTC to reject proposed rule changes if it cannot make a finding that the change is consistent with the CEA and regulations. In addition, H.R. 4173 provisions regarding fiduciary duty and whistleblower protections would implement recommended statutory changes with respect to securities market participants, but not futures market participants. Finally, the H.R. 4173 provision related to cross-border access would not empower CFTC to require certain foreign boards of trade to register with CFTC, as recommended in the joint report. According to CFTC and SEC staff, the joint report’s recommendations for action by one or both agencies generally are in the initiation or planning stage. As noted above, only one of the recommendations for enhanced interagency coordination included a request for legislative action, and Congress acted on this request to authorize funding for the Joint Advisory Committee. The agencies have drafted a charter for the Joint Advisory Committee, and CFTC and SEC staff told us they were working together to finalize the charter and consider selection of individuals to sit on the committee.
The report’s other recommendations requiring agency action include the other operational coordination recommendations and recommendations for the agencies to align certain requirements and study certain issues, such as portfolio margining and SEC’s approach to cross-border access. Agency staff said they expect to have the Joint Advisory Committee functioning by late spring or early summer 2010 but have not set firm time frames for implementing the joint report’s other recommendations requiring agency action. While the joint report’s recommendations would reduce or eliminate certain inconsistencies in the two agencies’ regulatory approaches, additional harmonization opportunities exist and the agencies’ future harmonization efforts could benefit from clearer goals and accountability requirements. The agencies acknowledge that the recommendations do not address all differences that may not be justified by their policy objectives, and market participants and other experts identified areas they believe could benefit from additional harmonization efforts. Importantly, some remaining differences in the agencies’ regulatory approaches could create opportunities for regulatory arbitrage. CFTC and SEC staff told us they may use the Joint Advisory Committee to further Treasury’s recommendation on harmonization, but the agencies have not established clear goals for harmonization or requirements to report and evaluate progress toward such goals. Without a clear vision for future harmonization efforts, the agencies may not be strategically positioned to implement the joint report’s recommendations and assess remaining opportunities for harmonization. Given time and resource constraints, agency staff said they could not address all differences through the joint report’s recommendations. As noted earlier, CFTC and SEC relied heavily on public input to identify areas of focus for the joint report. Although public input generally indicated support for harmonization in several areas, on some issues, significant disagreement existed at the joint public meetings as to whether or how to achieve harmonization, presenting challenges to reaching agreement in a short time. The joint report’s recommendations acknowledge a need for further study in certain areas, including risk-based portfolio margining and SEC’s approach to cross-border access. However, with respect to certain other issues where disagreement existed, such as the structure of the U.S. futures markets and SEC’s process for reviewing and approving exchange and clearinghouse rules, the agencies did not make any recommendations. Moreover, CFTC and SEC acknowledge that some potential harmonization opportunities not covered in the report, such as harmonizing the agencies’ investor definitions, merit consideration by the agencies. At the joint public meetings, the CFTC and SEC Chairmen both cited reducing regulatory arbitrage as an objective of the harmonization effort. Importantly, some remaining statutory and regulatory differences may create opportunities for regulatory arbitrage—that is, the potential for market participants to use a particular market or product instead of a competing market or product to exploit regulatory differences.
In its white paper, Treasury expressed concern that economically equivalent instruments may be regulated in different manners, depending on which agency has jurisdiction, and consistent with this concern, we have endorsed the goal of consistent regulation of similar products and institutions to help minimize negative competitive outcomes. However, the joint report’s recommendations do not address all inconsistencies in oversight of similar products and institutions. For example, the joint report’s recommendations do not explicitly address the potential for different margin requirements for certain economically equivalent instruments when used for similar purposes. In a joint comment letter submitted to the agencies following the joint public meetings, several securities options exchanges and the Options Clearing Corporation said that differences between the agencies’ approaches to regulating margin can result in significantly different margin requirements for comparable securities options and futures products, creating a competitive disadvantage for certain options regulated as securities. The joint report notes that CFTC, unlike SEC, generally does not have authority to set margin levels for futures contracts or options on futures, but does not recommend a statutory change to harmonize the agencies’ authority over margin requirements. In addition, SEC staff noted that all securities transactions are subject to a small fee under the SEA and that there is no comparable fee for futures transactions. The joint report did not include a discussion of this difference, and according to SEC staff, a statutory change would be required to achieve harmonization on this matter. As discussed below, market participants identified other areas where remaining differences could create the potential for regulatory arbitrage, including differences in market structure and investor definitions. CFTC staff said it is important to recognize that issues related to regulatory arbitrage are often complicated, both because many factors, including statutory goals, can drive differences in the rules applicable to similar products and activities and because judgments about which regulatory approach is more appropriate can be difficult. Moreover, regulatory differences with respect to similar products or institutions do not necessarily indicate that either futures or securities market requirements provide insufficient investor protection or impose excessive burdens on market participants. Nevertheless, when such differences exist, it is important to consider whether they can create incentives for market participants to engage in economically costly activities in order to take advantage of more favorable regulations. As part of our review, we contacted the 30 panelists who participated in the joint public meetings to ask them about their views on the joint report and its recommendations. We also requested input from four other individuals, based on suggestions from CFTC and SEC. In their written comments, respondents identified areas they believe could benefit from additional harmonization efforts. These areas include (1) legal certainty for new products, (2) oversight of exchange and clearinghouse rules, (3) portfolio margining, (4) market structure, and (5) investor definitions. Respondents provided other comments on the joint report and its recommendations, but we focused on remaining areas for harmonization most emphasized by respondents.
Greater legal certainty for new products: Many respondents supported the joint report’s recommendation to have the U.S. Court of Appeals expeditiously resolve disputes between CFTC and SEC over jurisdiction over a new product in cases where the agencies do not reach agreement within a prescribed time frame. However, several expressed concern that implementation of this recommendation would not fully resolve concerns related to establishing greater legal certainty for new products. First, some respondents suggested that an administrative body be created to resolve jurisdictional disputes, but agency staff expressed concern that an administrative body, depending on its composition, could be subject to political influence. Second, two futures market participants supported changes that would allow exchanges to choose whether to list a product as a future or a security, but CFTC and SEC staff said that agency review is needed to ensure that new products fit within the legal definitions of the regime—futures or securities—under which they are regulated. Recent example of the resolution process for jurisdictional conflict: On January 25, 2005, CBOE filed a proposed rule change with SEC to list and trade options on shares in a trust holding investments in gold. In 2004, SEC had approved a securities exchange’s proposal to list and trade the gold trust shares underlying the proposed option product, but CFTC staff took the view that the gold trust shares should be treated as a commodity transaction (rather than securities) and that, as such, CFTC should have exclusive jurisdiction over the options on the gold trust shares. As a result of this difference in views, SEC deferred action on the proposed listing of the options on gold trust shares for over 3 years. In the interim, CBOE submitted amendments to its proposed rule change, and four other exchanges submitted proposals to list and trade options on gold trust shares. In addition, in October 2007, OneChicago, a security futures exchange, submitted a proposal to CFTC to list and trade futures on gold trust shares. In March 2008, pursuant to a memorandum of understanding between the agencies and discussions between CFTC and SEC staff, SEC published the amended CBOE proposal for comment in the Federal Register. In March and April 2008, CFTC published notices seeking public comment on exemptions from CFTC’s exclusive jurisdiction for the OneChicago product and the CBOE product. The finalization of these exemptions permitted the OneChicago product to be traded and cleared as a security future subject to the joint jurisdiction of CFTC and SEC and the gold trust options to be traded and cleared as securities options subject to exclusive SEC jurisdiction. On May 29, 2008, SEC granted approval to CBOE to list and trade the gold trust options. Oversight of exchange and clearinghouse rules: Although the joint report recommends legislation to enhance CFTC’s authority over exchange and clearinghouse compliance with the CEA, it does not include a recommendation for SEC in this area. Echoing views expressed at the joint public meetings and discussed in the joint report, some respondents recommended that SEC adopt or consider adopting a process similar to CFTC’s more rapid process for reviewing and approving exchange and clearinghouse rules, under which most proposed rules are immediately effective upon self-certification by the exchange or clearinghouse that the rule complies with the CEA. Exchanges noted that the self-certification process is competitively important because it allows them to implement rule changes quickly. A few respondents also urged the two agencies to reach agreement on an overarching set of principles to govern their oversight of exchange and clearinghouse rules. This view also was reflected in the joint public meetings and the joint report.
As noted in the joint report, SEC recently approved a new process for streamlining review of rule changes, and SEC staff noted that about two-thirds of rule changes proposed by securities exchanges are effective immediately upon filing. SEC staff acknowledged that despite the recent streamlining, differences remain between the two agencies’ rule approval processes. Under the SEA, for example, rule changes that are not effective under self-certification, in contrast to the approach under the CEA, must be approved by SEC before they are effective. In addition, all proposed rule changes on the securities side are published for comment. SEC staff noted that differences in the agencies’ rule approval processes in part reflect differences in the structures of the futures and securities markets. For example, in the securities markets, multiple exchanges compete to provide a trading venue for products that are fungible across the exchanges; thus proposed securities exchange rules can have implications for competition among the exchanges. Portfolio margining and insolvency regimes: The joint report’s recommendation to facilitate portfolio margining neither explicitly addresses differences in the portfolio margining methods used for futures and securities portfolio margining accounts nor fully addresses issues related to the insolvency of an intermediary that is dually registered as a broker-dealer and a futures commission merchant. Two respondents suggested that the agencies adopt a uniform portfolio margining regime. Currently, the portfolio margining method approved by SEC for securities portfolio margining accounts is different from the method for futures portfolio margining accounts. Agency staff said these differences could result in different margin requirements for similar, or economically equivalent, instruments when used for similar purposes. SEC staff said they are aware of the potential for regulatory arbitrage as a result of these different methods. CFTC and SEC staff agreed that there are issues related to portfolio margining that merit further consideration. In addition, a few market participants recommended that CFTC and SEC work with Congress to harmonize the bankruptcy and customer protection rules applicable to joint broker-dealer/FCMs. These respondents noted that harmonization of these rules is needed to help ensure the orderly unwinding of customer positions in the event of a joint broker-dealer/FCM bankruptcy. One respondent observed that while addressing these insolvency issues cannot be characterized as a “quick win,” CFTC and SEC should begin the process soon, considering its importance and the volatility of today’s markets. Market structure: At the joint public meetings, panelists presented mixed views on the need to resolve differences in the futures and securities market structures, and the joint report discusses these views. Noting the absence of a joint report recommendation, a few respondents recommended actions to promote greater competition in the U.S. futures markets. Two respondents told us that Congress and CFTC should take steps to introduce features of the securities market structure to the futures markets to improve competition and lower costs for investors in these markets.
For example, one securities market participant recommended that CFTC encourage listing of fungible products to allow trading of products on multiple exchanges and mandate interoperability of clearing organizations to permit market participants to clear trades at a clearinghouse regardless of the facility on which the trade was executed. Another respondent suggested that regulators take a more aggressive stance in using their antitrust authorities to ensure that futures exchanges and clearinghouses and their rules are not anticompetitive. In written comments provided in response to our questions, one futures market participant opposed mandated interoperability among futures clearinghouses, citing the potential for interoperability to inhibit innovation, eliminate competition among clearinghouses, and contribute to greater systemic risk by linking and exposing futures clearinghouses to one another’s risks. The joint report states that securities options exchanges have been both competitive and innovative in developing new products, notwithstanding the use of central clearing. Although the joint report did not include a recommendation related to market structure, it noted that the agencies have supported provisions for nondiscriminatory access to clearing organizations for the over-the-counter derivatives market. Moreover, in 2007, in response to Treasury’s request for comments on the regulatory structure associated with financial institutions, the Department of Justice expressed support for a review of exchange-controlled clearing of financial futures, the regulatory structure that underlies it, and its alternatives. The joint report notes that the Futures Industry Association, in its comment letter to the agencies, stated that it would welcome a comprehensive study of how best to improve competition in the market structures for both futures and listed options markets. Investor definitions: Some market participants recommended that CFTC and SEC harmonize their respective customer categories and definitions with respect to oversight of intermediaries to help ensure greater consistency in the application of customer protection rules. One dually registered broker-dealer/FCM said that because essentially the same entities transact business across asset classes, the agencies could simplify definitions to include fewer categories based on net worth (rather than financial assets) and investment experience. For example, this respondent suggested that the agencies agree on the definition of “retail” investor. SEC and CFTC staff said the agencies did not cover this issue for the purposes of the joint report and that it merits further consideration by the agencies. CFTC and SEC staff told us that the agencies may use the Joint Advisory Committee to coordinate their efforts to address harmonization issues involving differences between the two agencies’ approaches to regulation. In prior work, we have identified practices that can help enhance and sustain collaboration among federal agencies. These practices include defining and articulating a common outcome; developing mechanisms to monitor, evaluate, and report on results; and reinforcing agency accountability for collaborative efforts through agency plans and reports.
Although the draft charter for the Joint Advisory Committee includes furtherance of Treasury’s recommendation on harmonization as one possible activity of the committee, the agencies have not established clear goals for harmonization or requirements for the agencies to report and evaluate progress toward such goals. For example, the agencies have not created a plan for implementing the joint report’s recommendations or established clearly defined objectives for addressing remaining harmonization opportunities. Consistent financial oversight of similar products and institutions—one of nine principles we have identified for financial regulatory reform—could be used to guide the agencies’ efforts to define objectives that would allow them to readily determine which issues fall within or outside the scope of harmonization. Without clear goals and accountability requirements to guide future coordination efforts, the agencies may not be strategically positioned to implement the joint report’s recommendations and address remaining harmonization opportunities. The October 2009 joint report of CFTC and SEC on harmonization represents a substantial positive step toward reducing and eliminating inconsistencies in the agencies’ regulatory approaches. The two agencies’ efforts to identify and assess harmonization opportunities are notable for the unprecedented dialogue held at the joint public meetings and the agencies’ development of 20 recommendations in just over 3 months. However, the agencies could not address all harmonization opportunities through this time-constrained study, and additional areas for harmonization may emerge as the markets continue to evolve. With the joint report completed, sustained coordination between CFTC and SEC is crucial as the agencies work to implement the report’s recommendations and to assess remaining harmonization opportunities. Indeed, several of the report’s recommendations direct the agencies to create a joint body or program to facilitate operational coordination. Although agency staff told us that they plan to use the Joint Advisory Committee to coordinate future harmonization efforts, CFTC and SEC have not yet established goals with respect to harmonization or developed requirements to report and evaluate their progress toward these goals. With regard to the status of the joint report’s recommendations, the agencies expect to have the Joint Advisory Committee functioning within months, but have not yet set time frames for implementing the report’s other recommendations for agency action, which generally remain in the planning stages. We recognize that relatively little time has passed since the joint report was issued and that other agency priorities, such as working with Congress on drafting legislation, may delay action toward implementing these recommendations. As the agencies continue to work toward implementation, setting appropriate goals, including time frames, and reporting progress toward these goals could help to ensure that the agencies take timely actions to address these recommendations. Moreover, the agencies have not established a formal plan for identifying and assessing remaining harmonization opportunities as well as additional areas for harmonization that may emerge as a result of regulatory reform and market developments. Such a plan could establish clear objectives for assessing remaining harmonization opportunities, such as eliminating inconsistencies and gaps in oversight of similar products and entities. 
Without such a plan, ongoing harmonization efforts may become stalled and the agencies may not continue the process of determining which issues fall within or outside the scope of harmonization and what actions are needed to address them. To help ensure that CFTC and SEC are strategically positioned to implement the joint report’s recommendations and address remaining harmonization opportunities, we recommend that as CFTC and SEC continue to develop the charter for the Joint Advisory Committee, the Chairmen of CFTC and SEC take steps to establish, with associated time frames, clearer goals for future harmonization efforts and requirements for reporting and evaluating progress toward these goals. Specifically, the agencies could benefit from formalizing a plan to assess implementation of the joint report’s recommendations and harmonization opportunities that may not have been fully addressed by the joint report, such as differences in market structure and investor definitions. Such a plan could include goals for future harmonization efforts, such as time frames for implementing the recommendations; assessment of whether remaining differences in statutes and regulations result in inconsistent regulation of similar products and entities that could lead to opportunities for regulatory arbitrage; and periodic reports to Congress on their progress, including the implementation and impact of the recommendations. We provided the Chairmen of CFTC and SEC with a draft of this report for their review and comment. CFTC and SEC provided us with written comments, which appear in appendixes III and IV. In their comments, both agencies agreed to take steps to implement our recommendation. CFTC stated that, consistent with this recommendation, the charter for the Joint Advisory Committee now provides that “[t]he committee shall work to develop clear and specific goals toward identifying and addressing emerging regulatory risks, protecting investors and customers, and furthering regulatory harmonization, and to recommend processes and procedures for achieving and reporting on those goals.” SEC agreed that the agencies should work to define specific goals for harmonization, including setting time frames for implementing the joint report’s recommendations and developing periodic reports to evaluate their progress in this area. SEC also agreed that developing a formal plan for identifying and assessing remaining and emerging harmonization opportunities would be beneficial to furthering the agencies’ efforts. Both agencies noted their appreciation of our recognition of the joint report as a substantial positive step and commented that they are continuing to work toward implementing the joint report’s recommendations. Finally, we received technical comments from CFTC and SEC that we have incorporated into the report, as appropriate. We are sending a copy of this report to the Chairman and the Ranking Member of the Subcommittee on Agriculture, Rural Development, Food and Drug Administration, and Related Agencies of the House Committee on Appropriations. We are also sending copies to the Chairman of the Commodity Futures Trading Commission, the Chairman of the Securities and Exchange Commission, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To describe how the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC) identified and assessed significant differences in their statutes and rules, we reviewed and analyzed the joint report of CFTC and SEC on harmonization (joint report); transcripts of the panel discussions held at the joint public meetings hosted by the agencies on September 2 and 3, 2009; statements submitted to CFTC and SEC in response to the agencies’ request for public comment on opportunities for harmonization; CFTC and SEC analyses of relevant differences in their statutes and regulations; and other agency documentation related to the joint report. We also interviewed CFTC and SEC staff who participated in the agencies’ efforts to collect public input and draft the joint report. To describe the status of the agencies’ efforts to implement the joint report’s recommendations, we reviewed and analyzed relevant provisions of proposed and enacted legislation that address legislative actions, including statutory changes, recommended in the joint report. Specifically, we analyzed and summarized provisions of H.R. 4173, as passed by the House of Representatives, that would address, at least in part, recommendations in the joint report. We reviewed the provision of the Consolidated Appropriations Act, 2010, that authorized funding for the Joint Advisory Committee as well as the agencies’ draft charter for this committee. Finally, we spoke with CFTC and SEC staff about the status of statutory changes and agency actions recommended in the joint report. To identify additional steps CFTC and SEC could take to harmonize their regulatory approaches, we interviewed CFTC and SEC staff and obtained and analyzed written comments on the joint report from representatives of securities and futures market participants, the investor community, and other experts who participated in the joint public meetings. Specifically, in January and February 2010, we developed and implemented a brief e-mail questionnaire to collect feedback on the joint report and its recommendations from market participants and other experts. On the basis of our review of the list of panelists who participated in the joint public meetings and our discussions with CFTC and SEC about how these panelists were selected, we determined that the 30 individuals who served as panelists were an appropriate group of respondents for this questionnaire. We also e-mailed this questionnaire to four other individuals, based on suggestions from CFTC and SEC. These individuals included former CFTC Commissioners and a representative of the Securities Industry and Financial Markets Association who did not participate in the joint public meetings but submitted comments to the agencies on harmonization. In January 2010, we e-mailed our questionnaire to the 34 individuals and requested written comments by early February 2010. We received 22 responses (a response rate of about 65 percent) and analyzed these responses to identify areas that respondents believed could benefit from additional harmonization. Finally, we reviewed our prior work on futures and securities markets regulation, financial regulatory reform, and practices that can enhance and sustain collaboration among federal agencies.
We conducted this performance audit from January 2010 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Our review addressed the following questions: 1. How did the Commodity Futures Trading Commission (CFTC) and the Securities and Exchange Commission (SEC) identify and assess conflicts between their laws and regulations and develop recommendations to address such conflicts? 2. What progress have CFTC and SEC made toward implementation of the joint report’s recommendations? 3. What additional steps, if any, could CFTC and SEC take to eliminate or reduce inconsistencies in oversight and enhance regulatory efficiency and effectiveness, as well as market transparency? To address these questions, we reviewed and analyzed the SEC/CFTC harmonization report, documentation of public input obtained by CFTC and SEC through joint public meetings and a public comment period, and preliminary analyses conducted by CFTC and SEC on relevant differences in their statutes and regulations; interviewed CFTC and SEC officials about how they identified and assessed harmonization opportunities and developed recommendations, progress made on the report’s recommendations, and additional harmonization opportunities that may exist; reviewed provisions in proposed legislation that may address SEC/CFTC recommendations for statutory changes; obtained the views of market participants and other experts on the report’s recommendations and additional opportunities for harmonization; and reviewed prior GAO work and other relevant studies. CFTC was created in 1974 with the mandate to regulate commodity futures and commodity options markets. CFTC’s mission is to protect market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity and financial futures and options, and to foster open, competitive, and financially sound futures and options markets. Futures markets provide a means for risk management and price discovery. SEC was created in 1934 to oversee the securities markets. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. While both CFTC and SEC seek to promote market integrity and transparency, securities markets are also concerned with capital formation. Certain securities markets, such as securities options and other securities derivatives markets, also facilitate the transfer of risk. Although CFTC and SEC generally oversee separate markets, their jurisdiction has overlapped in several areas, including futures on single stocks and the Shad-Johnson Accord; innovative products that have features of both futures and securities; and dually registered broker-dealers and futures commission merchants. In its June 2009 white paper on financial regulatory reform, the Department of the Treasury (Treasury) noted that the broad public policy objectives of futures and securities regulation are the same and that many differences in regulation between the markets are no longer justified.
Treasury expressed the following concerns: economically equivalent instruments may be regulated in a different manner, depending on which agency has jurisdiction; jurisdictional disputes consume significant agency resources, and uncertainty about the outcome of such disputes may impede innovation; and the agencies follow different approaches to regulation of exchanges, clearing organizations, and intermediaries. Treasury recommended that CFTC and SEC provide Congress with a joint report on harmonization of their regulatory approaches. CFTC and SEC conducted joint analyses and obtained public input to identify significant conflicts in statutes and regulations. In July and August 2009, CFTC and SEC worked together on a preliminary side-by-side analysis of their statutes and regulations. The agencies sought input from market participants and other experts by hosting joint public meetings in early September 2009 and providing an opportunity for public comment from August 19 to September 14, 2009. At the joint public meetings, CFTC and SEC held panel discussions to address differences in five broad categories: (1) regulation of exchanges and markets; (2) regulation of intermediaries; (3) regulation of clearance and settlement; (4) enforcement; and (5) regulation of investment funds. CFTC and SEC officials said that given the short timeline, they focused on identifying significant areas of difference and developing actionable recommendations. CFTC and SEC worked together to analyze the public input and to develop the report’s recommendations. In drafting the joint report, the agencies grouped issues into eight potential areas for harmonization and made at least one recommendation in all but one of these areas. The agencies also made several recommendations to enhance operational coordination: create a Joint Advisory Committee to be tasked with considering and developing solutions to emerging and ongoing issues of common interest in the futures and securities markets; create a Joint Agency Enforcement Task Force; establish a cross-agency training program; develop a program for sharing staff through detail assignments; and create a Joint Information Technology Task Force. SEC and CFTC officials said that since the report was issued in October 2009, they have focused on assisting Congress with drafting language for statutory changes. Congress has authorized funding for the Joint Advisory Committee, but other recommended statutory changes have not been enacted. Provisions in H.R. 4173, as passed by the House of Representatives, would address some of these recommendations. Section 3114 would partially implement the recommendation to enhance CFTC’s authority over exchange and clearinghouse compliance with the CEA by expanding the time period allowed for CFTC review of new rules and by repealing certain procedural requirements for CFTC to file an enforcement action for violation of core principles. Sections 3103 and 3111 include amended core principles for clearinghouses and contract markets, respectively, clarifying CFTC’s rulemaking authority to determine the appropriate manner of compliance with the CEA. H.R. 4173 would not amend the CEA to provide for agency approval of proposed rule changes based on a finding that the change is consistent with the CEA and regulations. Section 7509 would partially implement the recommendation to facilitate portfolio margining by amending the Securities Investor Protection Act to extend insurance protection to futures held in a securities portfolio margin account.
Section 7103 would partially implement a recommendation by amending the Securities Exchange Act (SEA) and the Investment Advisers Act to create a fiduciary duty for brokers, dealers, and investment advisers. Section 3108 would authorize CFTC to require futures commission merchants and introducing brokers to implement conflict of interest procedures separating research and analysis from trading and clearing activities. Section 7203 would partially implement a recommendation by amending the SEA to enhance whistleblower protections. Section 3118 would amend the CEA to expand CFTC's authority over certain disruptive trading practices. The joint report also recommended that Congress grant SEC specific statutory authority for aiding and abetting under the Securities Act and the Investment Company Act, and empower CFTC to require certain foreign boards of trade to register with CFTC and to meet certain standards that enhance transparency and market integrity. Sections 7207 and 7208 would grant SEC specific statutory authority for aiding and abetting under the Securities Act and the Investment Company Act. Section 3115 would amend the CEA to authorize CFTC to require foreign boards of trade seeking to provide direct access to persons in the United States to meet certain standards for transparency and market integrity with respect to contracts where the price is linked to a contract trading on a U.S. exchange, but it would not require registration in the United States.

CFTC and SEC have drafted a charter for the Joint Advisory Committee. The report's other recommendations for agency action generally are in the planning or initiation stages. The agencies plan to have the Joint Advisory Committee functioning by late spring or early summer 2010, but have not set timelines for implementing the other recommendations that do not require statutory changes.

Given the few months available to complete the report, CFTC and SEC officials said they could not address all areas of difference through the joint report's recommendations. CFTC and SEC officials said they did not explicitly define the term "harmonization" and focused on jurisdictional disputes and broad differences in regulation, which agency officials viewed as encompassing differences in the regulation of economically equivalent products. The report does not recommend changes to address some differences that may create incentives for regulatory arbitrage—that is, some remaining differences in rules and statutes may influence market participants' incentives to invest in a particular product or have a product regulated as a security or future. In written comments provided to GAO, market participants and other experts identified areas they believe could benefit from additional harmonization efforts:

Greater legal certainty for new products: Some market participants called for an administrative body to resolve disputes, but agency officials cited precedents and pending legislation in which courts serve as venues for deciding questions concerning the legal definitions of securities and futures.

Oversight of exchange and clearinghouse rules: Some market participants recommended that SEC move towards greater self-certification of new exchange rules. SEC officials noted that recent changes streamlined SEC's process, but acknowledged that differences remain.

Portfolio margining: Agency officials agreed that there are issues related to portfolio margining that merit further consideration.

Market structure: Market participants had mixed views on the need to resolve differences in market structure.

Harmonizing investor definitions: Agency officials agreed that this is an area for potential harmonization.
The agencies plan to use the Joint Advisory Committee to identify and address harmonization issues involving conflicts between the two agencies' approaches to regulation. However, the draft charter for the Joint Advisory Committee does not establish clear goals for harmonization and requirements for reporting and evaluating progress towards such goals. GAO has identified practices that can help enhance and sustain coordination among federal agencies. These practices include: defining and articulating a common outcome; developing mechanisms to monitor, evaluate, and report on results; and reinforcing agency accountability for collaborative efforts through agency plans and reports. (GAO-06-15) Consistent financial oversight of similar products and institutions – one of GAO's nine principles for financial regulatory reform – could be used to guide CFTC/SEC efforts to define and articulate a common outcome. (GAO-09-216) Without clear goals and accountability requirements to guide future coordination efforts, the agencies may not be strategically positioned to address remaining harmonization opportunities. Furthermore, without clearly defined objectives for harmonization, CFTC and SEC cannot readily determine which issues fall within or outside the scope of harmonization.

To better position the agencies to implement the joint report's recommendations and address remaining harmonization opportunities, we recommend that as CFTC and SEC continue to develop the charter for the Joint Advisory Committee, they take steps to establish clearer goals for future harmonization efforts and requirements for reporting and evaluating progress towards these goals. Specifically, the agencies could benefit from formalizing a plan to assess implementation of the joint report's recommendations and harmonization opportunities that may not have been fully addressed by the joint report, such as differences in market structure and investor definitions. Such a plan could include assessment of whether remaining differences in statutes and regulations result in inconsistent regulation of similar products and entities that could lead to opportunities for regulatory arbitrage, and periodic reports to Congress on their progress, including the implementation and impact of the recommendations.

In addition to the contact named above, Karen Tremba (Assistant Director), John Fisher, Matt McDonald, Omyra Ramsingh, Jennifer Schwartz, Andrew Stavisky, and Richard Tsuhara made significant contributions to this report.

Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: January 8, 2009.
Commodity Futures Trading Commission: Trends in Energy Derivatives Markets Raise Questions about CFTC's Oversight. GAO-08-25. Washington, D.C.: October 19, 2007.
Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure. GAO-08-32. Washington, D.C.: October 12, 2007.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004.
CFTC and SEC: Issues Related to the Shad-Johnson Jurisdictional Accord. GAO/GGD-00-89. Washington, D.C.: April 6, 2000.
The Commodity Exchange Act: Issues Related to the Commodity Futures Trading Commission's Reauthorization. GAO/GGD-99-74. Washington, D.C.: May 5, 1999.
The Commodity Exchange Act: Legal and Regulatory Issues Remain.
GAO/GGD-97-50. Washington, D.C.: April 7, 1997.
Financial Market Regulation: Benefits and Risks of Merging SEC and CFTC. GAO/T-GGD-95-153. Washington, D.C.: May 3, 1995.
The conference report accompanying the Consolidated Appropriations Act, 2010, directed GAO to assess the joint report of the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) on harmonization of their regulatory approaches. In October 2009, CFTC and SEC issued the joint report in response to the Department of the Treasury's recommendation that the two agencies assess conflicts in their rules and statutes with respect to similar financial instruments. GAO's objectives were to review (1) how CFTC and SEC identified and assessed harmonization opportunities, (2) the agencies' progress toward implementing the joint report's recommendations, and (3) additional steps the agencies could take to reduce inconsistencies and overlap in their oversight. To meet these objectives, GAO reviewed the joint report and related documentation, interviewed agency officials, and obtained and analyzed written comments on the report from market participants. CFTC and SEC conducted joint analyses and sought public input to inform their efforts to identify and assess significant differences in their rules and statutes and develop recommendations to address such differences. The agencies obtained public input through joint public meetings and a public comment period and worked together to analyze this input. In drafting the joint report on harmonization of their regulatory approaches, CFTC and SEC focused their analysis on eight potential areas for harmonization and made at least one recommendation in all but one of these areas. The joint report also includes several recommendations to enhance coordination between the agencies. For example, the report recommended the creation of a Joint Advisory Committee to be tasked with considering and developing solutions to issues of common interest in the futures and securities markets. The joint report did not cover gaps in the agencies' authorities to oversee over-the-counter derivatives, which were the subject of congressional deliberation at the time of the study. The joint report's recommendations for statutory changes have yet to be enacted, and the recommendations for agency action remain in the planning stages. According to agency staff, since issuing the joint report in October 2009, the agencies have been focused on working with Congress on drafting legislation to address recommended statutory changes. Congress authorized CFTC and SEC to fund the Joint Advisory Committee, as requested in the joint report, and proposed legislation includes provisions that would partially address recommended statutory changes in areas including oversight of exchange rules and enforcement. CFTC and SEC have drafted a charter for the Joint Advisory Committee and expect to have this committee functioning by early summer 2010. Agency staff said the agencies have not set firm timelines for the implementation of the other recommendations for agency action. Additional harmonization opportunities exist beyond those addressed by the joint report's recommendations, and future efforts by CFTC and SEC to assess these opportunities could benefit from clearer goals and accountability requirements. With only a few months to complete their report, agency staff said the agencies could not address all differences in their rules and statutes through the joint report's recommendations. Market participants identified several areas they believe could benefit from additional harmonization efforts, including portfolio margining and investor definitions and categories.
The agencies plan to coordinate future harmonization efforts through the Joint Advisory Committee, but they have not yet developed clear goals for harmonization or requirements for evaluating and reporting their progress toward meeting such goals. Without a clearer vision to guide future harmonization efforts and mechanisms to ensure accountability for these efforts, CFTC and SEC may not be strategically positioned to implement the joint report's recommendations and address remaining harmonization opportunities.
We found that Indian Affairs does not have complete and accurate information on safety and health conditions at all BIE schools because of key weaknesses in its inspection program. In particular, Indian Affairs does not inspect all BIE schools annually as required by Indian Affairs’ policy, limiting information on school safety and health. We found that 69 out of 180 BIE school locations were not inspected in fiscal year 2015, an increase from 55 locations in fiscal year 2012 (see fig. 2). Further, we determined that 54 school locations received no inspections during the past 4 fiscal years. At the regional level, Indian Affairs did not conduct any annual school safety and health inspections in 4 of BIA’s 10 regions with school facility responsibilities—the Northwest, Southern Plains, Southwest, and Western regions—in fiscal year 2015, accounting for 52 of the 180 school locations (see fig. 3). Further, the same four regions did not conduct any school inspections during the previous 3 fiscal years. In the Western region, we found three schools that had not been inspected since fiscal year 2008 and three more that had not been inspected since fiscal year 2009. Indian Affairs’ safety office considers the lack of inspections a key risk to its safety and health program. BIA regional safety officers that we spoke with cited three key factors affecting their ability to conduct required annual safety and health inspections: (1) extended vacancies among BIA regional safety staff, (2) uneven workload distribution among BIA regions, and (3) limited travel budgets. Officials told us that one BIA region’s only safety position was vacant for about 10 years due to funding constraints. As an example of uneven workload distribution, one BIA region had two schools with one safety inspector position, while another region had 32 schools with one safety inspector position. Currently, Indian Affairs has not taken actions to ensure all schools are annually inspected. Without conducting annual inspections at all school locations, Indian Affairs does not have complete information on the frequency and severity of safety and health deficiencies at all BIE school locations and cannot ensure these facilities are safe for students and staff and currently meet safety and health requirements. We also found that Indian Affairs does not have complete and accurate information for the two-thirds of schools that it did inspect in fiscal year 2015 because it has not provided BIA inspectors with updated and comprehensive inspection guidance and tools. In particular, we found that Indian Affairs’ inspection guidance lacks comprehensive procedures on how inspections should be conducted, which Indian Affairs’ safety office acknowledged. For example, BIA’s Safety and Health Handbook—last updated in 2004—provides an overview of the safety and health inspection program but does not specify the steps inspectors should take to conduct an inspection. Further, according to some regional safety staff, Indian Affairs does not compile and provide inspectors with a reference guide for all relevant current safety and health standards. At the same time, BIA inspectors use inconsistent inspection practices, which may limit the completeness and accuracy of Indian Affairs’ information on school safety and health. For example, at one school we visited, school officials told us that the regional safety inspector conducted an inspection from his car and did not inspect the interior of the school’s facilities, which include 34 buildings. 
The inspector's report comprised a single page and identified no deficiencies inside buildings. Concerned about the lack of completeness of the inspection, school officials said they arranged with the Indian Health Service (IHS) within the Department of Health and Human Services to inspect their facilities. IHS identified multiple serious safety and health problems, including electrical shock hazards, emergency lighting and fire alarms that did not work, and fire doors that were difficult to open or close. Currently, Indian Affairs does not systematically evaluate the thoroughness of school safety and health inspections or monitor the extent to which inspection procedures vary within and across regions. According to federal internal control standards, internal control monitoring should be ongoing and assess program performance, among other aspects of an agency's operations. Without monitoring whether safety inspectors across BIA regions are consistently following inspection procedures and guidance, inspections in different regions may continue to vary in completeness and miss important safety and health deficiencies at schools that could pose dangers to students and staff. To support the collection of complete and accurate safety and health information on the condition of BIE school facilities nationally, we recommended that Interior (1) ensure all BIE schools are annually inspected for safety and health, as required by its policy, and that inspection information is complete and accurate and (2) revise its inspection guidance and tools, require that regional safety inspectors use them, and monitor safety inspectors' use of procedures and tools across regions to ensure they are consistently adopted. Interior agreed with these recommendations.

We also found that Indian Affairs is not providing schools with needed support in addressing deficiencies or consistently monitoring whether they have established safety committees, which are required by Indian Affairs. In particular, according to Indian Affairs information, one-third or fewer of the 113 schools inspected in fiscal year 2014 had abatement plans in place as of June 2015. Interior requires that schools put in place such plans for any deficiencies inspectors identify. Because such plans are required to include time frames, steps, and priorities for abatement, they are an initial step in demonstrating how schools will address deficiencies identified in both annual safety and health and boiler inspection reports. Among the 16 schools we visited, several had not abated high-risk deficiencies within the time frames required by Indian Affairs. Indian Affairs requires schools to abate high-risk deficiencies within 1 to 15 days, but we found that inspections of some schools identified serious unabated deficiencies that recurred from one year to the next. For example, we reviewed inspection documents for two schools and found numerous examples of serious "repeat" deficiencies—those that were identified in the prior year's inspection and should have been corrected soon afterward but were not. One school's report identified 12 repeat deficiencies that were assigned Interior's highest risk assessment category, which represents an immediate threat to the safety and health of students and staff and requires correction within a day. Examples include fire doors that did not close properly; fire alarm systems that were turned off; and obstructions that hindered access/egress to building corridors, exits, and elevators.
Another school's inspection report showed over 160 serious hazards that should have been corrected within 15 days, including missing fire extinguishers, and exit signs and emergency lights that did not work. Besides these repeat deficiencies, we also found that some schools we visited took significantly longer than Indian Affairs' required time frames to abate high-risk deficiencies. For example, at one school, 7 of the school's 11 boilers failed inspection in 2015 due to various high-risk deficiencies, including elevated levels of carbon monoxide and a natural gas leak (see fig. 4). Four of the 7 boilers that failed inspection were located in a student dormitory. The inspection report designated most of these boiler deficiencies as critical hazards that posed an imminent danger to life and health, which required the school to address them within a day. School officials told us they continued to operate the boilers and use the dormitory after the inspection because there was no backup system or other building available to house the students. Despite the serious risks to students and staff, most repairs were not completed until about 8 months after the boiler inspection. Indian Affairs and school officials could not provide an explanation for why repairs took significantly longer than Indian Affairs' required time frames.

Limited capacity among school staff, challenges recording abatement information in the data system, and limited funding have hindered schools' development and implementation of abatement plans, according to school and Indian Affairs officials. Additionally, Indian Affairs has not taken needed steps to build the capacity of school staff to abate safety and health deficiencies, such as by offering basic training for staff in how to maintain and repair school facilities. While some regional officials told us that they may provide limited assistance to schools when asked, such ad hoc assistance is not likely to build schools' capacity to abate deficiencies because it does not address the larger challenges faced by schools. Several officials at Indian Affairs' safety office and BIA regional offices acknowledged they do not have a plan to build schools' capacity to address safety and health deficiencies. Absent such a plan, schools will continue to face difficulties in addressing unsafe and unhealthy conditions in school buildings.

Finally, we found that Indian Affairs has not consistently monitored whether schools have established safety committees, despite policy requirements for BIA regions to ensure all schools do so. Safety committees, which are composed of school staff and students, are vital in preventing injuries and eliminating hazards, according to Indian Affairs guidance. Examples of committee activities may include reviewing inspection reports or identifying problems and making recommendations to abate unhealthy or unsafe conditions. However, BIA safety officials we interviewed in three regions estimated that about half or fewer of BIE schools in their respective regions had created safety committees, though they were unable to confirm this because they do not actively track safety committees. Without more systematic monitoring, Indian Affairs is not in a position to know whether schools have fulfilled this important requirement.
To ensure that all BIE schools are positioned to address safety and health problems with their facilities and provide student environments that are free from hazards, we recommended that Interior (1) develop a plan to build schools' capacity to promptly address safety and health problems with facilities and (2) consistently monitor whether schools have established required safety committees. Interior agreed with these recommendations.

In conclusion, because Indian Affairs has neither conducted required annual inspections for BIE schools nationwide nor provided updated guidance and tools to its safety inspectors, it lacks complete and accurate safety and health information on school facilities. As a result, Indian Affairs cannot effectively determine the magnitude and severity of safety and health deficiencies at schools and is thus unable to prioritize deficiencies that pose the greatest danger to students and staff. Further, Indian Affairs has not developed a plan to build schools' capacity to promptly address deficiencies or consistently monitored whether schools have established required safety committees. Without taking steps to improve oversight and support for BIE schools in these key areas, Indian Affairs cannot ensure that the learning and work environments at BIE schools are safe, and it risks causing harm to the very children that it is charged with educating and protecting. Interior agreed with our recommendations to address these issues and noted several actions it plans to take.

Chairman Calvert, Ranking Member McCollum, and Members of the Subcommittee, this concludes my prepared remarks. I will be happy to answer any questions you may have. If you or your staff have any questions about this testimony or the related report, please contact Melissa Emrey-Arras at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement and the related report include Elizabeth Sirois (Assistant Director), Edward Bodine (Analyst-in-Charge), Lara Laufer, Jon Melhus, Liam O'Laughlin, Matthew Saradjian, and Ashanta Williams.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's March 2016 report, entitled Indian Affairs: Key Actions Needed to Ensure Safety and Health at Indian School Facilities, GAO-16-313. The Department of the Interior's (Interior) Office of the Assistant Secretary-Indian Affairs (Indian Affairs) lacks sound information on safety and health conditions of all Bureau of Indian Education (BIE) school facilities. Specifically, GAO found that Indian Affairs' national information on safety and health deficiencies at schools is not complete and accurate because of key weaknesses in its inspection program, which prevented GAO from conducting a broader analysis of schools' safety and health conditions. Indian Affairs' policy requires its regional safety inspectors to conduct inspections of all BIE schools annually to identify facility deficiencies that may pose a threat to the safety and health of students and staff. However, GAO found that 69 out of 180 BIE school locations were not inspected in fiscal year 2015, an increase from 55 locations in fiscal year 2012. Agency officials told GAO that vacancies among regional staff contributed to this trend. As a result, Indian Affairs lacks complete information on the frequency and severity of health and safety deficiencies at BIE schools nationwide and cannot be certain all school facilities are currently meeting safety requirements.

(Figure: Number of Bureau of Indian Education School Locations That Were Inspected for Safety and Health, Fiscal Years 2012-2015)

Indian Affairs is responsible for assisting schools on safety issues, but it is not taking needed steps to support schools in addressing safety and health deficiencies. While national information is not available, officials at several schools GAO visited said they faced significant difficulties addressing deficiencies identified in annual safety and health and boiler inspections. Inspection documents for two schools GAO visited showed numerous high-risk safety and health deficiencies—such as missing fire extinguishers—that were identified in the prior year's inspection report, but had not been addressed. At another school, four aging boilers in a dormitory failed inspection due to elevated levels of carbon monoxide, which can cause poisoning where there is exposure, and a natural gas leak, which can pose an explosion hazard. Interior's policy in this case calls for action within days of the inspection to protect students and staff, but the school continued to use the dormitory, and repairs were not made for about 8 months. Indian Affairs and school officials across several regions said that limited staff capacity, among other factors, impedes schools' ability to address safety deficiencies. Interior issued an order in 2014 that emphasizes building tribes' capacity to operate schools. However, it has not developed a plan to build BIE school staff capacity to promptly address deficiencies. Without Indian Affairs' support of BIE schools to address these deficiencies, unsafe conditions at schools will persist and may endanger students and staff.
The Navy operates thousands of shore activities worldwide and employs over 1 million military and civilian personnel. Hundreds of Navy Fund Administering Activities at the field level provide budgeting and accounting for more than 9,000 Navy cost centers. A cost center is the lowest level in the Navy’s financial management chain of command and may be a ship, an aircraft squadron, a staff office, or a department or division of a shore activity where identification of costs is required. DFAS, which organizationally reports to the Under Secretary of Defense (Comptroller), was created in January 1991 to streamline and standardize DOD’s finance and accounting procedures, systems, and operations while reducing the costs of such services. Ownership of many DOD financial management systems, including some Navy systems, was transferred to DFAS. At the same time, many mixed financial management systems, such as logistics systems which provide inventory and property financial data, remained under the control of the services or other DOD components. Policy guidance and direction related to accounting system development and operations is performed by the DOD Comptroller’s Office. In the past, Navy activities reported financial information to Defense Accounting Offices maintained by DFAS which, in turn, generated monthly reports to DFAS Cleveland Center. Beginning in fiscal year 1995, DFAS began consolidating its Navy Defense Accounting Offices into six sites called operating locations. Information from these operating locations will continue to be reported to DFAS Cleveland Center. Navy financial management systems have many long-standing problems that prior system development efforts have not fixed. For example, according to the Naval Audit Service, during the 1980s, Navy spent over $260 million on two failed attempts to consolidate and standardize its accounting systems. Both the Standard Automated Financial System and the Integrated Disbursing and Accounting Financial Management System were terminated in 1989 due to design problems, excessive cost escalations, and slippages in completion time. Navy’s financial management system problems continue even though many of Navy’s systems have been transferred to DFAS. For example, DOD’s fiscal year 1995 Federal Managers’ Financial Integrity Act (FMFIA) report stated that a majority of DOD’s financial systems did not comply with Office of Management and Budget (OMB) Circular A-127 and that many perform similar functions, resulting in inefficiencies and disparate business practices. Navy’s Statement of Assurance, used to prepare the DOD Federal Managers’ Financial Integrity Act report, also stated that its systems did not comply with Circular A-127’s integration, accounting classification codes, and general ledger requirements. In August 1991, DFAS began to review DOD’s accounting systems and to develop a plan to (1) decrease the number of financial systems and (2) correct systems deficiencies. In December 1993, DFAS developed an Interim Migratory Strategy, consisting of the following two phases. First, DFAS planned to reduce DOD’s 91 general fund systems to 11 interim migratory systems. The interim migratory systems were to be selected from the military services’ existing financial systems that were considered to be the least deficient. In addition, DFAS planned to identify and correct the interim migratory systems deficiencies. 
In particular, DFAS plans called for enhancing the systems to comply with DOD’s standard general ledger, key accounting requirements, and standard budget and accounting classification code by October 1, 1997. Second, DFAS planned to eventually select the best interim migratory system(s) to implement DOD-wide as its target system(s). DFAS has not established time frames for this effort. In June 1994, DFAS Cleveland Center selected STARS Field Level to serve as Navy’s interim migratory system for field-level general fund accounting. The Assistant Secretary of the Navy for Financial Management concurred with STARS’ selection. DFAS Cleveland and Navy personnel deemed STARS the newest, least deficient, and most advanced of Navy’s 25 existing general fund accounting systems. Through April 30, 1996, the largest of the former field-level systems had been converted to STARS Field Level at 923 activities, and 122 activities remained to be converted to STARS Field Level. In addition to STARS Field Level, the STARS umbrella includes other components—an on-line bill paying component and components that provide financial data to Navy’s major claimants. DFAS Cleveland Center is also in the process of developing a STARS financial reporting module at the departmental level. The STARS enhancement effort is intended to ensure that all STARS components comply with DOD’s key accounting requirements and standard general ledger as well as to implement other improvements. In April 1996, DFAS created the Defense Accounting System Program Management Office to centrally manage the consolidation and modernization of DFAS accounting systems. After the period of our review, DFAS selected STARS as one of three DFAS target general fund accounting systems, under the oversight of this Program Management Office. To assess Navy’s efforts to reduce the number of accounting systems and implement and enhance STARS, we examined DOD, DFAS, and Navy documents and conducted interviews with appropriate officials. We also reviewed STARS system documentation and compared it to the financial management system architecture guidance in the Joint Financial Management Improvement Program’s Framework for Federal Financial Management Systems. We also interviewed the Director of the STARS Project Office on this issue. To evaluate DFAS Cleveland Center’s plans to enhance STARS, we examined STARS project planning documentation, such as its April 18, 1996, Plan of Action and Milestones and software project plans. We also reviewed analyses related to implementing various key accounting requirements in STARS prepared by an outside contractor. We interviewed officials from this contractor, the STARS Project Office, DFAS Cleveland Center, and Navy’s Fleet Material Support Office (FMSO)—which serves as the primary STARS Central Design Agency. To review the implementation of STARS at the field level, we judgmentally selected 18 Navy shore activities that converted to STARS Field Level between July 1993 and March 1995, from a universe of east and west coast activities. We visited each of these sites, examined documents related to various aspects of Navy’s financial management operations, and identified areas applicable to or interfacing with field-level accounting and reporting. We obtained sample financial reports with explanations of their purpose and use, conducted interviews with field-level financial managers, and reviewed supporting accounting and reporting documentation. 
During this review, we noted any problems concerning accuracy, timeliness, and usefulness and examined their cause and resultant effect on field-level financial operations. We performed our work at the Office of the DOD Comptroller, DFAS Headquarters, DFAS Cleveland Center’s STARS Project Office, Navy’s FMSO (Mechanicsburg, PA), and the 18 Navy shore activities listed in appendix I. The Department of Defense provided written comments on a draft of this report. These comments are presented and evaluated in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix II. Our work was performed from April 1995 through early August 1996 in accordance with generally accepted government auditing standards. We believe that savings will accrue as a result of eliminating the duplication and inefficiencies of supporting and maintaining Navy’s 25 existing accounting systems, although we did not attempt to quantify such savings. In October 1994, a contractor to the STARS Project Office completed an economic analysis that compared the costs of developing and implementing STARS to continuing with the existing systems. This economic analysis estimated that converting five of Navy’s existing general fund accounting systems to STARS (primarily at the field level) would save $162 million in the first 5 years. This initial estimated savings, primarily in maintenance and operating costs, was reduced by the estimated STARS Field Level system development, maintenance, operating, and training costs of $145 million, for a net savings of $17 million. Over 15 years, net savings were projected to total $293 million. We did not assess the reliability of these estimates. However, we note that achieving the projected level of net savings could be diminished by the need to provide additional training and technical support to field activities in using STARS Field Level. These issues are discussed later in this report. In addition, the October 1994 economic analysis was not a total life-cycle economic analysis of all STARS components. For example, it did not include costs to enhance all of the STARS components to bring them into compliance with DOD’s standard general ledger, key accounting requirements, and the standard budget and accounting classification code. The Major Automated Information Systems Review Council, which is reviewing the STARS project, directed the STARS Project Office to conduct a total life-cycle economic analysis of all STARS components. This analysis is expected to be completed by December 31, 1996. In addition, as discussed in the next section, DFAS has not developed a complete system architecture for STARS—basically a blueprint for what the system will do and how it will operate. As a result, any estimate of STARS total costs will be incomplete. We also found that, in less than 2 years, actual obligations to enhance STARS in certain areas were significantly higher than budgeted. The STARS Project Office estimated that in fiscal years 1995 and 1996, STARS software development costs would total $35.6 million. Of this amount, $18.8 million was budgeted for projects pertaining to the key accounting requirements, the budget and accounting classification code, and the consolidation efforts. As of July 23, 1996, actual obligations for the software projects related to these three areas were $24.5 million, or 30 percent, above the budget estimate, although total STARS software development obligations were close to what was estimated. 
In addition, some software development projects have already exceeded their total budget. For example, the STARS Project Office estimated that for fiscal years 1995-1997 (1) property and inventory accounting and (2) cash and accounts payable key accounting requirement enhancements would each cost $500,000. However, as of July 23, 1996, obligations for these enhancement efforts were $1.1 million and $1.2 million, respectively. Neither of these projects is scheduled to be completed in fiscal year 1996, although, according to DFAS, an accounts payable function was implemented for one STARS component—STARS Field Level (but not for the STARS claimant module). In addition, STARS budget estimates were incomplete and lacked supporting documentation. For example, these budget estimates did not include DFAS’ internal costs, such as the STARS Project Office. In fiscal year 1996, the STARS Project Office personnel costs alone were estimated at $1.4 million. Moreover, the STARS Project Office could not find documentation for much of the July 1995 budget estimate and instead provided us with a written rationale on the methodology it used to estimate the STARS software development costs. According to this rationale, the STARS Project Office consulted with FMSO and, based on these discussions, used prior projects of similar scope and size as a basis for the estimates. However, the STARS Project Office used projects related to only one STARS component to estimate the cost of enhancing all of the STARS components. Each STARS component would require different levels of effort to modify since the components do not have the same program attributes. The STARS enhancement effort is not guided by a target system architecture. A target systems architecture is a composite of all interrelated functions, information, data, and applications associated with a system. Specifically, such a systems architecture is an evolving description of an approach to achieving a desired mission. It describes (1) all functional activities to be performed to achieve the desired mission, (2) the system elements needed to perform the functions, (3) the designation of performance levels of those system elements, and (4) the technologies, interfaces, and locations of functions. The lack of a target STARS architecture increases the likelihood of project failure, additional development and maintenance costs, and a system that does not operate efficiently or effectively. Moreover, the information contained in a complete architecture provides the opportunity to perform a thorough alternatives analysis for the selection of the most effective system at the least cost. According to a February 1994 DFAS memorandum, the decision to choose STARS Field Level was “an intuitive one based on the collective experience of the capitalized Navy field general fund accounting network....” We have found that successful organizations manage information systems projects as investments and use a disciplined process—based on explicit decision criteria and quantifiable measures assessing mission benefits, risks, and cost—to select information system projects. 
In addition, recent OMB guidance recommends that agencies select information technology projects based on rigorous technical evaluations in conjunction with executive management business knowledge, direction, and priorities. Further, the Congress and the administration recognized the value of treating information system projects as investments by enacting the Information Technology Management Reform Act of 1996 (Public Law 104-106, Division E), which calls for agency heads, under the supervision of the Director of OMB, to design and implement a process for maximizing the value of their information technology acquisitions and assessing and managing the associated risks, including establishing minimum criteria on whether to undertake an investment in information systems. Managers can use the detailed information found in the systems architecture to enhance their analysis of these critical issues.

Although STARS was selected without the benefit of an established architecture, such an architecture can provide needed structure and discipline as the STARS enhancement projects move forward. For example, it is unlikely that Navy and DFAS could ever achieve the requirement set forth in the Chief Financial Officers (CFO) Act and OMB Circular A-127 that agencies implement an integrated financial management system without the structure provided by a STARS financial management systems architecture. In order to implement a single, integrated financial management system, Circular A-127 specifies that agencies should plan and manage their financial management systems in a unified manner with common data elements and transaction processing. A critical step in accomplishing this is the development of a financial management systems architecture. According to the Joint Financial Management Improvement Program Framework for Federal Financial Management Systems, a financial management systems architecture provides a blueprint for the logical combination of financial and mixed systems to provide the budgetary and financial management support for program and financial managers. Preparing a financial management system architecture is also consistent with the best practices we found in leading organizations, which established and managed a comprehensive architecture to ensure the integration of mission-critical systems through common standards. In addition, DOD Directive 7740.2, Automated Information System Strategic Planning, states that automated information system strategic processes shall be supported by information architectures that address the information requirements, flows, and system interfaces throughout the organization, including headquarters, major commands, and separate operating agencies.

Although the decision to enhance STARS was made over 2 years ago, DFAS Cleveland Center has not yet developed a target STARS system architecture, which would include a definition of the system's expected functions, features, and attributes, including internal and external interfaces and data flows. An architecture is particularly critical since several of the STARS enhancements are intended not only to correct existing system problems but also to add new functions to STARS (either programmed as part of STARS or through interfaces with other systems) at considerable cost. For example, the STARS Project Office plans to add an accounts payable function to the STARS claimant module and plans to interface STARS with a property system. (STARS does not currently collect property data.)
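To make the idea concrete, the following minimal sketch records the four kinds of information a target architecture describes in structured form. All entries are hypothetical placeholders of our own devising, not actual STARS or DFAS design data:

```python
# Hypothetical, simplified record of the four elements a target systems
# architecture describes, per the definition above. Entries are illustrative
# placeholders only.
target_architecture = {
    "mission": "Navy general fund field-level accounting",
    # (1) functional activities performed to achieve the mission
    "functional_activities": ["general ledger control", "accounts payable",
                              "property accounting"],
    # (2) system elements needed to perform those functions
    "system_elements": {"accounts payable": "STARS Field Level module",
                        "property accounting": "interface to a property system"},
    # (3) designated performance levels of those system elements
    "performance_levels": {"monthly reporting": "data to DFAS Cleveland Center "
                                                "within 5 business days of close"},
    # (4) technologies, interfaces, and locations of functions
    "interfaces_and_locations": ["Navy-owned feeder logistics systems",
                                 "DFAS operating locations"],
}

# With such a record, a planner could verify that every functional activity
# has an assigned system element before funding an enhancement.
unassigned = [f for f in target_architecture["functional_activities"]
              if f not in target_architecture["system_elements"]]
print("Activities with no assigned system element:", unassigned)
```

The point of the sketch is not the data format but the discipline: an enhancement that adds a function with no assigned system element, interface, or performance level would be visible as a gap before money is obligated.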
In addition, without an architecture, DFAS is not in a position to reasonably estimate the total cost to enhance STARS. For example, as previously mentioned, actual obligations to enhance STARS to comply with the cash and accounts payable and property and inventory key accounting requirements were already double the total $500,000 budget estimate for fiscal years 1995-1997, even though these projects were still in the planning stage. Further, complex development efforts such as these pose a greater technical risk of failure, which can be mitigated by developing a target architecture. The Director of the STARS Project Office agreed that a STARS architecture should be developed. He stated that he plans to develop a STARS architecture that would include identifying the data sources of systems that interface with STARS. Although STARS is a DFAS system, many of the systems that interface with STARS are controlled by Navy. As a result, a STARS architecture cannot be developed without the direct involvement of Navy and the identification of all feeder systems, interfaces, and supportive detailed data elements. Therefore, it is imperative that DFAS and the Assistant Secretary of the Navy (Financial Management and Comptroller) work cooperatively to develop a STARS target architecture to ensure that STARS will meet the needs of its primary user in an effective manner.

Our analysis noted instances of incomplete planning and missing or slipped milestones that strongly suggest that STARS enhancements will not meet DOD requirements in the near future. We believe that these problems are symptomatic of the lack of a STARS system architecture. As one of the first steps in any systems development effort, the development of an architecture would guide the enhancement efforts and set the appropriate time frames for the completion of major tasks. One example that highlights STARS architecture and planning issues is DFAS' evaluation of how another system could provide property accounting data to STARS. A June 1996 contractor analysis of this property system found that differences between the STARS and the property system's lines of accounting would have to be resolved before an interface is developed. Moreover, as of July 1, 1996, the property system had been implemented at only one Navy site, and only eight additional sites had been scheduled to implement it. As a result, even if the interface issues between STARS and the property system were resolved, only a very limited amount of Navy property data could be transmitted. Additionally, a STARS contractor was directed not to work on certain problem areas related to the cash procedures and accounts payable key accounting requirement. According to the STARS Project Office Director, the necessary analysis that the contractor was to complete will be done internally, although no specific plans existed as of early August 1996.

We also found several instances of milestones that were date-driven rather than based on an analysis of the tasks to be completed. DFAS' September 1995 Chief Financial Officers Financial Management 5-Year Plan stated that STARS key accounting requirement deficiencies would be corrected by September 30, 1997. However, the April 18, 1996, STARS Plan of Action and Milestones stated that STARS enhancements would comply with DOD's key accounting requirements by October 1, 1996. The plan included no reason for the accelerated time frame.
The STARS Project Office Director stated that the October 1, 1996, milestone for completing the programming, testing, and data conversion for modifying STARS to comply with the key accounting requirements was not derived from an assessment of the scope of these projects. Rather, the implementation date was established to coincide with the Navy's requirement to prepare and have audited financial statements. In addition, we reviewed the April 18, 1996, STARS Plan of Action and Milestones and found that it did not include several key analysis tasks that are needed to successfully implement the DOD standard general ledger and some of the key accounting requirements. The Plan of Action and Milestones also indicates missing and slipped milestones. For example, we found that the plan:

did not address how one of the STARS modules, which currently has its own general ledger account structure, will be brought into compliance with DOD's standard general ledger;

did not address how STARS will be enhanced to comply with the audit trail key accounting requirement, which states that all transactions be traceable to individual source records maintained in the system;

did not specify how the systems analysis for enhancing the STARS field-level and headquarters claimant modules to meet the key accounting requirement for budgetary accounting will be performed and by whom, and did not provide for analyzing and documenting the current environment and identifying needed changes, which is the approach planned for most other key accounting requirement analyses;

did not provide for identifying needed changes and solutions to control weaknesses as part of the analysis of the current environment related to the system control function of the STARS field-level and headquarters claimant modules; and

in several cases, provided milestones for completing the analyses of the current STARS environment and planning for future STARS enhancements that were dated several months before a contractor was scheduled to provide them.

We also found that the lack of an overall plan or architecture contributed to the limited participation of one of Navy's key systems development offices. Specifically, although Navy's FMSO is the primary Central Design Agency for STARS, it has had a limited role in the STARS enhancement project. For example, in December 1994, the STARS Project Office tasked FMSO with completing, by December 1995, functional descriptions and/or system specifications to enhance STARS to comply with six key accounting requirements, including those related to accounts receivable and accounts payable. On September 14, 1995, FMSO was also tasked with completing, by March 31, 1996, an expanded functional requirement analysis and detailed system specifications for the key accounting requirements related to general ledger control and financial reporting for the STARS claimant module. A project status report dated May 30, 1996, showed that FMSO had not completed these tasks and had (1) spent little time on the analyses required for the accounts receivable, cash procedures and accounts payable, and general ledger requirements and (2) spent no time on the other key accounting requirements analyses. According to FMSO officials, the STARS Project Office Director instructed them to work on other priorities.
Additionally, FMSO officials stated that they did not know whether the milestones and costs for modifying the STARS components to comply with the key accounting requirements were reasonable because the scope of the modifications to be made was not known.

Our review of STARS Field Level implementation at 18 Navy shore activities disclosed problems related to training and technical support. Specifically, field staff received limited training. Representatives of over half of the activities told us that the STARS Field Level training (1) did not focus on areas specifically related to their daily jobs, (2) was provided by instructors, often contractors, who had STARS Field Level knowledge but did not have a working knowledge of Navy accounting and/or the activity's existing accounting system, and (3) did not include follow-up training in most cases. Further, only about one-half of these activities had received training on available software features that would allow them to use the system more effectively and efficiently. After we brought these training deficiencies to the attention of the Director of the STARS Project Office, he agreed that training needed to be improved. According to the Director, the STARS Project Office has collected information on the field activities' training needs and plans to develop a set of training requirements. However, he stated that additional STARS training will be contingent on available funding.

We also found that DFAS provided insufficient STARS Field Level technical support. For example, representatives at six activities cited DFAS' failure to provide a central focal point or people with sufficient knowledge to provide timely answers to questions and responses to problems. The Director of the STARS Project Office agreed that STARS technical support was a concern. He stated that he planned to consider options to address this concern and that better training would also reduce the number of user problems.

Because the DFAS STARS enhancement project was not guided by a target systems architecture—a critical step in any systems development effort—DFAS' efforts to enhance STARS and correct numerous shortcomings have not been adequately planned, in conjunction with Navy, the system's primary user, to mitigate technical and economic risks. This is particularly true for planned new STARS functions, such as property accounting, which would entail the greatest risk. As a result, the likelihood is increased that the large investment already made and planned for this project will not yield a reliable, fully integrated Navy general fund accounting system. In addition, STARS implementation has been hampered by limited training and insufficient technical support, which will have to be addressed as the enhancement project moves forward.

To increase the likelihood that the STARS enhancement project will result in an efficient, effective, and integrated Navy general fund accounting system, we recommend that the Under Secretary of Defense (Comptroller), in conjunction with the Assistant Secretary of the Navy (Financial Management and Comptroller), expeditiously develop a target STARS architecture. As part of this process, the Comptroller should (1) identify the economic and technical risks associated with the implementation of STARS enhancements, (2) develop a plan to avoid or mitigate these risks, and (3) obtain the Major Automated Information Systems Review Council's assessment and approval.
Until this architecture is complete, the Comptroller should cease the funding of enhancements to STARS components that add new functions to STARS. Also, once a target STARS architecture has been developed and approved, we recommend that the Director, DFAS, enhance its Plan of Action and Milestones to ensure that it contains (1) the steps that will have to be taken to achieve this architecture, including key analysis tasks that relate to how STARS modules will meet the key accounting requirements, (2) the parties responsible for performing these steps, and (3) realistic milestones. In addition, to improve STARS Field Level's day-to-day operations at the field level, we recommend that the Director, DFAS, provide additional user training, particularly in functions that allow users to use the system more effectively and efficiently, and provide a central focal point for enhanced technical support through such means as establishing a "hot line" staffed by knowledgeable personnel.

In providing written comments on a draft of this report, DOD generally agreed with our findings but did not concur with our overall recommendation that it cease funding of STARS enhancements until the target architecture is completed. The full text of DOD's comments is provided in appendix II. DOD's response stated that since 1991, DOD has made substantial functional and technical improvements, compliance improvements, and significant financial reporting refinements to STARS. While DFAS has implemented some STARS improvements, STARS does not yet fully comply with DOD's key accounting requirements or standard general ledger, which is why the enhancement effort was started.

With respect to our recommendations, DOD agreed that a STARS target architecture must be completed, including the identification of source data in the target system for all interfaced systems. However, DOD stated that STARS is a fully operational system with a documented architecture of current interfaces, processes, and procedures, except for Navy-owned logistics systems. According to DOD, as new enhancements are added to STARS, they will be added to the target architecture. We disagree that STARS has a current documented architecture. DFAS was unable to provide us with an architecture. In addition, DOD did not concur with our recommendation to stop funding enhancements that add functions to STARS until the target architecture is complete. DOD's comments indicated that STARS enhancements must continue so that the migratory strategy can be completed as soon as possible because (1) Navy's funding has been either curtailed or terminated beginning in fiscal year 1997 in anticipation of completing the enhancements, (2) key accounting provisions are needed in the current system to establish needed controls and meet CFO reporting requirements, and (3) a "learning curve" situation would be created because personnel resources would have to be removed and later returned to the initial staffing level.

Continuing to develop STARS enhancements without the benefit of a completed target architecture runs counter to the basic purpose of developing such an architecture—to provide structure and discipline to a system enhancement effort before changes are made, ensuring that the best decisions are made in terms of operational effectiveness, flexibility, maintenance, and cost. Although Navy has funds available now to work on the enhancements, spending them without a proper planning effort has not proven in the past to be an effective use of resources.
Without a target architecture, DOD runs a high risk of spending millions of dollars enhancing STARS and implementing a system that still will neither meet the CFO Act financial reporting requirements nor be developed in a timely and cost-effective manner. Indeed, as we discussed in the report, the STARS enhancement project has already experienced incomplete planning, missed milestones, and budget overruns. In regard to DOD’s point that personnel resources would have to be removed and later returned to the initial staffing level, creating a “learning curve” situation, we believe that any personnel currently assigned to the enhancement efforts could be reassigned to the architecture development effort. This would allow them to use the expertise they have gained from working on the enhancements to efficiently produce an accurate and complete target architecture. Once the architecture is completed, these personnel could then continue to use their expertise on the systems development efforts that DFAS and Navy decide to pursue in light of the architecture results. DOD concurred with our remaining recommendations. In regard to the establishment of a “hot line,” DOD’s response noted that it had established a “hot line” to address technical system problems at the DFAS Cleveland Center and the Defense Mega Center in Mechanicsburg. DOD’s response also stated that, by December 31, 1996, DFAS will perform a follow-on review to determine the feasibility of expanding the “hot line” service to DFAS operating locations. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the House and Senate Committees on Appropriations. We are also sending copies to the Secretary of Defense, the Secretary of the Navy, and the Director of the Office of Management and Budget. We will also make copies available to others on request. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days of the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9095. Major contributors to this report are listed in appendix III. The following are GAO’s comments on the Department of Defense’s letter dated September 26, 1996. 1. See the “Agency Comments and Our Evaluation” section of this report. Pat L. Seaton, Senior Evaluator; Julianne H. Hartman, Senior Evaluator.
GAO reviewed the Defense Finance and Accounting Service's (DFAS) efforts to reduce the number of Navy accounting systems and to enhance and implement a Navy-wide system to account for general fund operations. GAO found that: (1) DFAS selected the Navy's Standard Accounting and Reporting System (STARS) to serve as the Navy's system for general fund accounting; (2) although believed to be the newest, least deficient, and most advanced of the Navy's 25 existing general fund accounting systems, STARS still has serious shortcomings; (3) savings will accrue as a result of eliminating duplicate and inefficient accounting systems, but could be diminished by the need to provide additional training and technical support; (4) STARS was selected without the benefit of an established architecture, and the lack of a target systems architecture will make it difficult to guide STARS enhancement efforts, estimate enhancement costs, and evaluate alternatives that may be more effective or less expensive; (5) the STARS Plan of Action and Milestones did not include several key analysis tasks and accounting requirements, and several instances of incomplete planning and slipped milestones strongly suggest that STARS enhancements will not meet Department of Defense (DOD) requirements; (6) this piecemeal approach to STARS enhancement could result in costly and time-consuming redesign efforts; and (7) implementation of STARS at field-level activities was not completely successful, since DFAS provided limited training and insufficient technical support.
State and local governments that receive grants from HHS must follow the uniform administrative requirements set forth in federal regulations. When procuring property and services, these regulations require that states follow the same policies and procedures they use for procurements supported with nonfederal funds. Under HHS’s regulations, states must also ensure that contracts include any clauses required by federal statutes and executive orders. Grantees other than states and subgrantees, such as local governments, rely on their own procurement procedures, provided that they conform to applicable federal laws and the standards identified in the regulations, including standards of conduct, requirements for full and open competition in contracting, procedures for different types of procurements, and bid protest procedures to handle and resolve disputes relating to their procurements. Grantees and subgrantees must maintain a contract administration system that ensures that contractors perform in accordance with the terms, conditions, and specifications of their contracts. The procurement of contracts typically follows a process that comprises several phases, including bid solicitation and contract award. The bid solicitation process typically begins with the development of a work plan by the contracting agency that outlines the objectives contractors will be expected to achieve and the manner in which they will be expected to achieve them. The state or locality then issues a request-for-proposals to inform potential bidders of the government’s interest in obtaining contractors for the work specified. A request-for-proposals is a publicly advertised document that outlines the information necessary to enable prospective contractors to prepare proposals properly. After these activities are completed, the contract award process begins. Once proposals have been submitted, they are evaluated to assess their relative merit. Several key criteria are almost always considered in evaluating proposals, including price/cost, staffing, experience, and technical and/or other resources. The environment for administering social services such as TANF has been affected by changes to the nation’s workforce system. Through the Workforce Investment Act (WIA) of 1998 (P.L. 105-220), the Congress sought to replace the nation’s fragmented employment and training system. WIA requires state and local entities that carry out specified federal programs to participate in local one-stop centers—centers offering job placement assistance for workers and opportunities for employers to find workers—by making employment and training-related services available. While TANF is not a mandatory partner at one-stop centers, some states are using one-stop centers to serve TANF recipients. WIA called for the development of workforce investment boards to oversee WIA implementation at the state and local levels. WIA listed the types of members that should participate on the workforce boards, such as representatives of business, education, labor, and other segments of the workforce investment community, but did not specify a minimum or maximum number of members. Local workforce boards can contract for services delivered through one-stop centers. PRWORA broadened both the types of TANF services that could be contracted out and the types of organizations that could serve as TANF contractors.
The act authorized states to contract out the administration and provision of TANF services, including determining program eligibility. Under the prior AFDC program, the determination of program eligibility could not be contracted out to nongovernmental agencies. In addition, under the PRWORA provision commonly referred to as charitable choice, states are authorized to contract with faith-based organizations to provide TANF services on the same basis as any other nongovernmental provider, without impairing the religious character of such organizations. Such changes in the welfare environment have affected the involvement of for-profit organizations in TANF contracting. Prior to PRWORA, contracting in the welfare arena was mainly for direct service delivery such as job training, job search instruction, and child care provision. While some for-profit companies provided services, service providers were mostly nonprofit. Large for-profit companies were mainly involved as contractors that designed automated data systems. In the broader area of social services, large for-profits were also involved in providing various services for child support enforcement. Now that government agencies can contract out their entire welfare systems under PRWORA, large for-profit companies have increasingly sought out welfare contracts, in some cases on a large scale that includes determining eligibility and providing employment and social services. Federal and state funds are used to serve TANF recipients. For federal fiscal years 1997 to 2002, states received federal TANF block grants totaling $16.5 billion annually. With respect to state funding, PRWORA includes a maintenance-of-effort provision, which requires states to provide 75 to 80 percent of their historic level of funding. States that meet federally mandated minimum participation rates must provide at least 75 percent of their historic level of funding, and states that do not meet these rates must provide at least 80 percent. The federally mandated participation rates specify the percentages of states’ TANF caseloads that must be participating in work or work-related activities each year. HHS oversees states’ TANF programs. In accordance with PRWORA and federal regulations, HHS has broad responsibility to oversee the proper state expenditure of TANF funds and the achievement of related program goals. While TANF legislation prohibits HHS from regulating states in areas that it is not legislatively authorized to regulate, it must still oversee state compliance with program requirements, such as mandated work participation rates. Nearly all states and the District of Columbia contract with nongovernmental entities for the provision of TANF-funded services at the state level, local level, or both levels of government. In 2001, state and local governments spent more than $1.5 billion on contracts with nongovernmental entities, or at least 13 percent of all federal TANF and state maintenance-of-effort expenditures (excluding those for cash assistance). The majority of these contracts are with nonprofit organizations. Although TANF contractors provide a wide array of services, the most commonly contracted services reported by our survey respondents include employment and training services, job placement services, and support services to promote job entry or retention.
In addition, eligibility determination for cash assistance under TANF or other TANF-funded services has been contracted out in one or more locations in some states. Most state TANF contracting agencies pay contractors a fixed overall price or reimburse them for their costs rather than base contract payments on achieving program objectives for TANF recipients. Contracting for TANF-funded services occurs in the District of Columbia and every state except South Dakota. However, the level of government at which contracting occurs varies, which complicates efforts to provide comprehensive information on TANF-funded contracts. Contracting occurs only at the state level in 24 states, only at the local level in 5 states, at both levels in the remaining 20 states, and in the District of Columbia. Moreover, contracting at the local level encompasses contracting by agencies such as county departments of social or human services as well as by workforce development boards whose jurisdiction may include several counties. Our national survey of TANF contracting provides comprehensive information on contracting at the state level but incomplete and nonrepresentative information on local contracting. In 2001, state and local governments expended at least $1.5 billion in TANF funds for contracted services. With respect to state-level contracting, contracts with nonprofit organizations accounted for 87 percent of TANF funds, while contracts with for-profit organizations accounted for 13 percent (see fig. 1). Seventy-three percent of state-level contracts are with nonprofit organizations and 27 percent are with for-profit organizations. Under PRWORA’s charitable choice provision, some states have established initiatives to promote the use of faith-based organizations. Contracts with faith-based organizations constitute a smaller proportion of all contracted TANF funds than contracts with secular nonprofit organizations and for-profit organizations. As shown in figure 1, contracts with faith-based organizations account for 8 percent of TANF funds spent by state governments on contracts with nongovernmental entities nationally. In several states, for-profit organizations hold large percentages of the contracted funds identified by our national survey. As shown in table 1, at least half of the contracted funds in 8 states are with for-profit organizations. Moreover, in 11 states, more than 15 percent of all TANF-contracted funds identified by our survey went to faith-based organizations. The proportion of TANF funds expended for contracted services with nongovernmental entities varies considerably by state. Nationally, at least 13 percent of TANF funds expended for services other than cash assistance have been contracted out. As shown in table 1, the proportion of funds contracted out in 10 states in 2001 exceeded 20 percent of their fiscal year 2000 TANF fund expenditures (excluding the portion of expenditures for cash assistance). Idaho, Mississippi, New Jersey, Wisconsin, and the District of Columbia expended more than 40 percent of their TANF funds on contracted services. On the other hand, Iowa, Kansas, North Carolina, New Mexico, and Oregon spent the smallest proportion (2 percent or less of their TANF funds) on contracts with nongovernmental entities. Several large for-profit and nonprofit organizations have large TANF contracts in multiple states.
Our national survey of TANF contracting asked state and local respondents to identify the contractors with the three largest dollar contracts in their jurisdictions. Four for-profit organizations—Curtis & Associates, Inc.; Maximus; America Works; and Affiliated Computer Services, Inc.—have contracts with the highest dollar values in two or more states. Among this group, Curtis & Associates, Inc., had the TANF contracts with the highest dollar value relative to other contractors in the locations where it held contracts. Among nonprofit contractors, Goodwill Industries, YWCA, Catholic Charities, Lutheran Social Services, Salvation Army, Urban League, United Way, Catholic Community Services, American Red Cross, and Boys & Girls Clubs all have TANF-funded contracts in two or more states. Among this group, Goodwill Industries had the TANF contracts with the highest dollar value relative to other contractors in the locations where it held contracts. States and localities contract with nongovernmental entities to provide services to facilitate employment, administer program functions, and strengthen families. Overall, the types of services that states and localities contract to nonprofit organizations rarely differ from those they contract to for-profit organizations. Government entities contract out most often for services to facilitate employment. As shown in figure 2, over 40 percent of state respondents reported that half or more of their TANF-funded contracts call for the provision of education and training activities, job placement services, and support services that address barriers to work and help clients retain employment. These support services include substance abuse treatment, assistance with transportation, and other services that facilitate job entry and retention. Child care services are less commonly contracted. While the responses we obtained from local respondents about the types of services contracted out may not be representative of local TANF contracting, they revealed an overall pattern similar to the responses by state respondents presented in figure 2. In some cases, states and localities have contracted with nongovernmental entities to perform program administrative functions that were required to be performed by government workers in the past, such as determining eligibility. The determination of eligibility for TANF-funded services provided to low-income families who are ineligible for cash assistance has been contracted out in one or more locations in at least 18 states. For example, one Ohio county, which offers a variety of services with varying eligibility criteria to the working poor, contracts with nongovernmental organizations both to provide and to determine eligibility for the services. In at least 4 states, eligibility determination for cash assistance under TANF has been contracted out in one or more locations, an option authorized under PRWORA. Finally, some states and localities are using TANF funds to contract for services related to the TANF objectives of preventing and reducing the incidence of nonmarital pregnancies and encouraging the formation and maintenance of two-parent families. For example, 20 percent of state respondents reported that half or more of their TANF contracts call for the provision of services pertaining to stabilizing families. We asked state and local governments about the use of four common types of contracts for TANF services: cost-reimbursement, fixed-price, incentive, and cost-reimbursement plus incentive. Under cost-reimbursement contracts, contracting agencies pay contractors for the allowable costs they incur, whereas under fixed-price contracts, contracting agencies pay contractors a pre-established overall contract price. Under incentive contracts, the amount paid to contractors is determined by the extent to which contractors achieve specified program objectives for TANF recipients, such as job placements and the retention of jobs. Cost-reimbursement plus incentive contracts pay contractors for the costs they incur and provide payments above costs for the achievement of specific objectives.
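The financial risk borne by the contracting agency differs across these four payment types, and the difference can be made concrete with a minimal sketch. The sketch below is purely illustrative; the function names, dollar figures, and payment rates are hypothetical and are not drawn from any contract in our review.

```python
# Illustrative sketch of the four common TANF contract payment types.
# All dollar figures and rates are hypothetical.

def cost_reimbursement(allowable_costs):
    """Agency pays whatever allowable costs the contractor incurs,
    so the agency bears the risk of cost overruns."""
    return allowable_costs

def fixed_price(agreed_price):
    """Agency pays the pre-established overall price regardless of
    actual costs; the contractor bears the overrun risk."""
    return agreed_price

def incentive(job_placements, payment_per_placement):
    """Payment depends entirely on achieving specified program
    objectives, such as job placements."""
    return job_placements * payment_per_placement

def cost_reimbursement_plus_incentive(allowable_costs, job_placements,
                                      bonus_per_placement):
    """Allowable costs are reimbursed, with additional payments above
    costs for achieving specified objectives."""
    return allowable_costs + job_placements * bonus_per_placement

# A contractor budgeted at $1.0 million incurs $1.2 million in
# allowable costs and places 400 TANF recipients in jobs.
print(cost_reimbursement(1_200_000))    # 1200000: agency absorbs the overrun
print(fixed_price(1_000_000))           # 1000000: contractor absorbs it
print(incentive(400, 2_500))            # 1000000: payment tracks outcomes
print(cost_reimbursement_plus_incentive(1_200_000, 400, 500))  # 1400000
```

As the hypothetical figures show, the agency’s exposure under a cost-reimbursement contract grows with the contractor’s costs, while fixed-price and incentive structures cap or condition what the agency pays.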
As figure 3 shows, almost 60 percent of state respondents said that half or more of their TANF contracts are cost-reimbursement. Far fewer respondents reported that half or more of their TANF contracts were incentive or cost-reimbursement plus incentive. While the responses we obtained from local respondents may not be representative of local TANF contracting, they revealed a pattern similar to the responses by state respondents. Our survey disclosed that many state and local governments have chosen to use a contract type—cost-reimbursement—under which the government assumes a relatively high level of financial risk. Contracting agencies assume greater financial risk when they are required to pay contractors for allowable costs under cost-reimbursement contracts than when overall contract payments are limited to a pre-established price. HHS relies primarily on state single audit reports to oversee state and local procurement of TANF services and monitoring of TANF contractors. State single audit reports identified TANF procurement or subrecipient monitoring problems for about one-third of the states for the period 1999 to 2000, and subrecipient monitoring problems were identified more frequently. However, HHS officials told us that they do not know the overall extent to which state single audits have identified problems with the monitoring of nongovernmental TANF contractors or the nature of these problems because they do not analyze the reports in such a comprehensive manner. Our review of state single audit reports for 1999 and 2000 found internal control weaknesses for over a quarter of states nationwide that potentially affected the states’ ability to effectively oversee TANF contractors. HHS relies primarily on state single audits to oversee TANF contracting by states and localities. The Single Audit Act of 1984 (P.L. 98-502), as amended, requires federal agencies to use single audit reports in their oversight of state-managed programs supported by federal funds. The objectives of the act, among others, are to (1) promote sound financial management, including effective internal controls, with respect to federal funds administered by states and other nonfederal entities; (2) establish uniform requirements for audits of federal awards administered by nonfederal entities; and (3) ensure that federal agencies, to the maximum extent practicable, rely on and use single audit reports. In addition, the act requires federal agencies to monitor the use of federal funds by nonfederal entities and provide technical assistance to help them implement required single audit provisions. The results of single audits provide a tool for federal agencies to monitor whether nonfederal entities are complying with federal program requirements.
To help meet the act’s objectives, Office of Management and Budget (OMB) Circular A-133 requires federal agencies to evaluate single audit findings and proposed corrective actions, instruct states and other nonfederal entities on any additional actions needed to correct reported problems, and follow up with these entities to ensure that they take appropriate and timely corrective action. States, in turn, are responsible for working with local governments to address deficiencies identified in single audits of local governments. Single audits assess whether audited entities have complied with requirements in up to 14 managerial or financial areas, including allowable activities, allowable costs, cash management, eligibility, and reporting. Procurement and subrecipient monitoring are the 2 of these 14 compliance areas most relevant to TANF contracting. Audits of procurement requirements assess the implementation of required procedures, including whether government contracting agencies awarded TANF contracts in a full and open manner. Audits of subrecipient monitoring requirements examine whether an entity has adequately monitored the entities to whom it has distributed TANF funds. Subrecipients of TANF funds from states can include both local governments and nongovernmental entities with whom the state has contracted. Subrecipients of TANF funds from localities can include nongovernmental TANF contractors. State single audit reports identified TANF subrecipient monitoring or procurement problems for one-third of the states. Single audits identified subrecipient monitoring deficiencies for 9 states in 1999 and 12 states in 2000. Of the 15 states that had subrecipient monitoring deficiencies in either 1999 or 2000, 6 states were cited for deficiencies in both years. State single audits identified procurement problems less frequently: for 3 states in 1999 and 4 states in 2000. The extent to which state single audits have identified problems with subrecipient monitoring involving TANF funds is generally equal to or greater than that for several other social service programs in which contracting occurs with nongovernmental organizations. As shown in table 2, the number of state single audits that identified deficiencies in subrecipient monitoring for the 1999 to 2000 time period is similar for TANF, child care, and the Social Services Block Grant. Fewer state audits identified such problems for child support enforcement, Medicaid, and Food Stamps. With regard to procurement, the frequency of deficiencies identified in state audits for TANF was lower than that for Medicaid but about the same as for several other programs. HHS officials told us that state single audits during this time period had identified TANF subrecipient monitoring problems in only two states—Florida and Louisiana—that involved unallowable or questionable costs and that also pertained to the oversight of nongovernmental TANF contractors. However, HHS officials also said that they do not know the overall extent to which state single audits have identified problems with the monitoring of nongovernmental TANF contractors or the nature of these problems because they do not analyze the reports in such a comprehensive manner. Our analysis of the state single audit reports that cited TANF subrecipient monitoring problems in 1999 or 2000 indicates that the reports for 14 of the 15 states identified internal control weaknesses that potentially affected the states’ ability to adequately oversee nongovernmental TANF contractors.
Thus, internal control weaknesses pertaining to contractor oversight have been reported for more than a quarter of all states nationwide. (See app. III for a summary of the problems reported in each of the state single audits.) The reported deficiencies in states’ monitoring of subrecipients cover a wide range of problems, including inadequate reviews of the single audits of subrecipients, failure to inform subrecipients of the sources of federal funds they received, and inadequate fiscal and program monitoring of local workforce boards. The audit reports for some states, such as Alaska, Kentucky (2000 report), and Louisiana (1999 and 2000 reports), specified that the monitoring deficiencies involved or included subrecipients that were nongovernmental entities. For example, the 2000 single audit for Louisiana reported that for 7 consecutive years the state did not have an adequate monitoring system to ensure that subrecipients and social service contractors were properly audited, which indicates that misspent federal funds or poor contractor performance may not be detected and corrected. The audit reports for other states, including Arizona, Michigan, Minnesota, and Mississippi, do not specify whether the subrecipients that were inadequately monitored were governmental or nongovernmental entities. However, the reported internal control weaknesses potentially impaired the ability of these states to properly oversee either their own TANF contractors or the monitoring of TANF contractors that have contracts with local governments. For example, the 2000 single audit report for Minnesota found that the state agency did not have policies and procedures in place to monitor the activities of TANF subrecipients. The 2000 audit report for Mississippi found that the state did not review single audits of some subrecipients in a timely manner and did not perform timely follow-up in some cases when subrecipients did not submit their single audits on time. Even if the subrecipients referred to in both of these audit reports were solely local governmental entities, the deficiencies cited potentially limited the states’ abilities to identify and follow up in a timely manner on any problems with local monitoring of TANF contractors. HHS follows up on a state-by-state basis on the TANF-related problems cited in state single audits and focuses primarily on the problems that involve monetary findings. However, HHS does not use these reports in a systematic manner to develop a national overview of the extent and nature of problems with states’ oversight of TANF contractors. HHS officials said that HHS regional offices review state single audits and perform follow-up actions in cases where deficiencies were identified. These actions include sending a letter to the state acknowledging the reported problems and any plans the state may have submitted to correct the identified deficiencies. HHS officials told us that their reviews of single audit reports focus on TANF audit findings that cited unallowable or questionable costs and that HHS tracks such findings in its audit resolution database. The officials explained that their focus on monetary findings stems from the need to recover any unallowable costs from states and from HHS’s oversight responsibility under PRWORA to determine whether to impose penalties on states for violating statutory TANF requirements.
If the deficiency identified by a single audit involves monetary findings, HHS takes action to recover the costs within the same year, according to HHS officials. HHS officials told us that if the identified deficiency does not involve monetary findings but pertains to a programmatic issue such as subrecipient monitoring, HHS generally relies on the state to correct the reported problem and would initiate corrective action if the same problem were cited in the state’s single audit the following year. However, HHS does not use state single audit reports in a systematic manner to oversee TANF contracting, such as by analyzing patterns in the subrecipient monitoring deficiencies cited by these reports. HHS auditors and program officials also told us that inconsistent auditing of nongovernmental entities and inconsistent state monitoring of these entities affect HHS’s use of single audits as a management tool. For example, HHS officials said that the same nongovernmental entity might be treated as a subrecipient by one state and as a vendor by another state, which could limit HHS’s ability to determine whether the entity has consistently complied with all applicable federal and state requirements. HHS officials told us that they plan to work with OMB to explore the reasons for the inconsistencies and, where appropriate, to identify ways to better assure compliance with audit requirements applicable to nongovernmental entities. State and local governments rely on third-party mechanisms, including bid protests, judicial processes, and external audits, to help ensure compliance with procurement requirements. Procurement problems that resulted in the modification of contract award decisions surfaced in 2 of the 10 TANF procurements we reviewed. These problems affected 5 of the 58 TANF contracts awarded in the 10 procurements. Procurement issues were raised in 2 other procurements but did not result in the modification of contract award decisions. State and local governments have primary responsibility for overseeing procurement procedures, and they use several approaches to identify problems with procurement processes. In some cases, contracting agencies rely on aggrieved third parties to identify procurement problems through bid protests or lawsuits. In other cases, organizations outside the procurement process may review bid solicitation and contract award procedures. A bid protest occurs when an aggrieved party—a bidder who did not win a contract award—protests the decision of the local or state agency to award another bidder a contract. The process usually has several tiers, starting with a secondary review by the agency that denied the contract award. If the protest cannot be resolved internally, it can be brought to a higher level of authority. Contract agency officials said that bidders frequently protest contract award decisions. However, state and local officials also said that many bid protests are based more on bidder disgruntlement with award decisions than on corroborated instances of noncompliance with procurement processes. Nevertheless, these protests do occasionally result in a resolution that favors the protester. We reviewed 10 separate procurements—specific instances in which government agencies had solicited bids and awarded one or more TANF contracts—in the local sites that we visited. Procurement problems identified in San Diego and Los Angeles resulted in contract award decisions being modified.
In San Diego, the county employees union filed a lawsuit against the county, maintaining that the county had failed to conduct a required cost analysis to determine whether contracting out services would be more or less efficient than providing them with county employees. The union won the case, and the county was required to perform a cost analysis and, upon determining that contracted services would be more cost-efficient than publicly provided services, resolicit bids from potential contractors. In Los Angeles County’s procurement of TANF services, one bidder filed a bid protest, claiming that the contracting agency had failed to properly evaluate its bid. As the final contract award authority, the County Board of Supervisors ordered the Director of Public Social Services to negotiate separate contracts for TANF services with the original awardee and the protesting bidder. While procurement issues were raised in the District of Columbia and New York City, their resolution did not result in contract award decisions being modified. In the District of Columbia, the city Corporation Counsel raised concerns regarding the lack of price competition and the lack of an evaluation factor for price. For example, the District’s contracting agency set fixed prices it would pay for TANF services and did not select contractors based on the prices they offered. District officials said that they set fixed prices so that contractors would not submit proposals that would unrealistically underbid other contractors. In addition, the agency did not include price as a factor in its evaluation of proposals. As a result of these and other factors, the Corporation Counsel concluded that the District’s procurement of TANF services was defective and legally insufficient. However, the city, operating under the authority of the mayor’s office to make final contract award decisions, approved the contract awards and subsequently implemented regulations changing the way price is used in making contract award decisions. In New York City, the TANF contracting process was alleged to have violated certain requirements, but these charges were not confirmed upon subsequent legal review and a resulting appellate court decision. The New York City Comptroller reported that the contracting agency had not disclosed the weights assigned to the evaluation criteria for assessing bids, had not provided contract information to all bidders, and had not assessed each bid equitably. With regard to the assessment of bids, the comptroller maintained that the city’s Human Resources Administration (HRA) had deemed as unqualified some proposals that clearly ranked among the most technically qualified and had recommended contract awards for other proposals that were much less qualified. The comptroller also maintained that HRA had preliminary contact with one of the potential contractors, reporting that HRA had held discussions and shared financial and other information with the contractor before other organizations had been made aware of the same information. The comptroller concluded that these actions constituted violations of city procurement policies. Using its authority to make final contract award decisions, the mayor’s office subsequently overruled the comptroller’s objections and authorized the contracting agency to award contracts to the organizations it had selected. A later appellate court decision found in favor of the mayor’s office.
State and local governments use a variety of approaches, such as on-site reviews and independent audits, to help ensure that TANF-funded contractors expend federal funds properly and comply with TANF program requirements. Four of the six states that we visited identified deficiencies in their oversight of TANF contractors. Various factors have contributed to these deficiencies, such as the need in some states to create and support local workforce boards that contract for TANF services and oversee contractors. With regard to contractor performance, several contractors at two local sites were found to have had certain disallowed costs and were required to pay back the amounts of these costs. Moreover, in six of the eight locations that established performance levels for TANF contractors, most contractors, including both nonprofit and for-profit contractors, did not meet one or more of their performance levels. Some of the state and local oversight approaches we found originate from organizations external to the contracting agencies; these include independent audits and program evaluations. State and local government auditors, comptrollers, treasurers, or contracted certified public accounting firms audit contractors. Independent auditors conduct financial and programmatic audits of compliance with contract specifications. Similarly, evaluators from outside the contracting agency generally evaluate various aspects of program implementation, including financial, programmatic, and operational performance by contractors and other entities responsible for achieving program goals. State and local government auditors in several states have identified shortcomings in how contracting agencies oversee TANF contractors. As shown in table 3, auditors reported oversight deficiencies in four of the six states that we visited—Florida, New York, Texas, and Wisconsin. Audit reports cited uneven oversight coverage of TANF contractors over time or across local contracting agencies. We did not identify any audit reports that assessed the oversight of TANF contractors in California or the District of Columbia. Evolving TANF program structures, resource constraints, and data quality issues contributed to the deficiencies in contractor oversight. In Florida and Texas, for example, new TANF program structures entailed establishing local workforce boards throughout the state as the principal entities for TANF contracting and the subsequent oversight of TANF contractors. In both states, local workforce boards varied significantly in their capability to oversee TANF contractors and ensure compliance with contract requirements. According to New York State program officials, contracting agencies in the state continue to experience shortfalls in the staff resources necessary to provide sufficient oversight of contractor performance. In addition, Wisconsin’s Legislative Audit Bureau reported in 2001 that the Private Industry Council had not provided the requisite oversight of five TANF-funded contractors in Milwaukee County. State and local officials in other states also frequently told us that data quality issues complicated efforts to monitor contractors effectively. For example, officials told us that case file information on job placements or job retention frequently differed from data in automated systems maintained by state or local contracting agencies. In New York City, such discrepancies required the Human Resources Administration to conduct time-consuming reviews and reconciliations of the data.
Such inaccuracies forced delays in New York City’s payments to contractors, estimated by city officials to total several million dollars. States and localities have taken actions in response to some of the reported contract oversight deficiencies. For example, State of Florida officials worked with local workforce boards to integrate the operations of welfare and employment offices to improve oversight of service providers, including nongovernmental contractors. In Texas, the Texas Workforce Commission issued new oversight policies and provided technical assistance and guidance to help local workforce boards oversee the performance of TANF contractors. For example, the commission’s prior monitoring had identified inappropriate cost allocations across programs and other financial management problems by local boards. The commission subsequently issued guidance on how boards and their contractors can meet cost allocation requirements. Commission officials told us that they use a team approach to monitor workforce boards and provide technical assistance. Auditors disallowed significant costs claimed by TANF contractors at two of the locations that we visited: Milwaukee County, Wisconsin, and Miami-Dade County, Florida. In the first location, Wisconsin’s State Legislative Audit Bureau reported that one for-profit contractor had disallowable and questionable costs totaling $415,247 (of which 33 percent were disallowable) and one nonprofit contractor had disallowable and questionable costs totaling $367,401 (of which 83 percent were disallowable). State auditors reported that a large proportion of the disallowable costs resulted from the contractors claiming reimbursement from Wisconsin for expenses incurred while attempting to obtain TANF contracts in other states. Auditors said that generally accepted contract restrictions prohibit contract funds obtained in one state from being used to pursue new contracts in other states. State auditors also said they examined whether there had been any preconceived intent underlying these prohibited contract practices, which could have led to charges of fraud; however, the auditors could not demonstrate preconceived intent, and no fraud allegations resulted. The for-profit contractor also had costs disallowed for expenditures that supported TANF-funded activities involving a popular entertainer who had formerly received welfare benefits. The contractor believed the activity would provide an innovative, motivational opportunity for TANF recipients. While the contractor claimed that Wisconsin officials had not provided sufficient guidance about allowable activities, state officials subsequently found the costs associated with the entertainment activities to be unallowable. Costs incurred by the for-profit contractor that state auditors cited as questionable included charges for a range of promotional advertising activities, restaurant and food purchases for which there was no documented business purpose, and flowers for which documentation was inadequate to justify a business purpose. Costs incurred by the nonprofit contractor that were cited as questionable included funds spent on advertising, restaurant meals and other food purchases that were not a program need, and local hotel charges for which there was inadequate documentation. At the time of our review, the contractors had repaid all unallowable and questionable costs.
In 2001, Wisconsin enacted a state law requiring TANF contracts beginning on January 1, 2002, and ending on December 31, 2003, to contain a provision stating that contractors that submit unallowable expenses must pay the state a sanction equal to 50 percent of the total amount of the unallowable expenses. Auditors also disallowed some program costs claimed by several contractors under contract with the Miami-Dade Workforce Development Board in Florida. The auditors found instances in which several contractors had billed the contracting agency for duplicate costs. On the basis of these findings, the auditors recommended that the contractors repay the board about $33,000 for the costs that exceeded their valid claims. At the time of our review, arrangements had been made for the contractors to repay the disallowed costs to the contracting agency. Many TANF contractors at the sites that we reviewed are not meeting their established performance levels for work participation, job placement, or job retention rates. Contracting agencies in eight of the nine localities we reviewed (all except the District of Columbia) have established expected levels of performance for their TANF contractors, and these performance levels vary by locality. At two of the eight sites—Milwaukee and Palm Beach—all contractors met all specified performance levels. However, at each of the six other sites, most contractors did not meet one or more of their performance levels, indicating that state and local governments did not achieve all anticipated performance levels by contracting for TANF services. Figures 4, 5, and 6 indicate the overall extent to which contractors met performance levels with respect to measures for work participation, job placement, and job retention rates in each location that had established these performance levels. In contrast, at the two local sites that established performance measures for the percentage of job placements that pay wages of at least a specified level (Milwaukee and Palm Beach) or offer health benefits (Milwaukee), all contractors met these measures. Payments to contractors at the eight localities that established performance levels are based either entirely or in part on whether contractors meet their specified performance levels. The measures most often used in the locations we visited mirror PRWORA’s emphasis on helping TANF recipients obtain employment. The most common performance measures are work participation, job placement, and job retention rates. Work participation rates stipulate that contractors engage a specified percentage of TANF recipients in work-related activities such as job search or community work experience. Job placement rates specify that contractors place a specified percentage of recipients in jobs, and job retention rates specify that contractors ensure that recipients retain employment (but not necessarily the same job) for a specified period, typically ranging from 30 to 180 days. In addition, some localities have established performance levels that require contractors to place TANF recipients in certain types of jobs, such as jobs that pay wages of at least a specified level or offer health benefits. The localities varied in the types of measures and levels of performance they established. For example, the specified levels for job placements ranged from 22 percent of program participants in Palm Beach to 50 percent in Austin and Houston. Performance levels established for job retention also varied by jurisdiction.
For example, the specified performance levels for contractors in Milwaukee County are that 75 percent of TANF recipients who entered employment retain employment for 30 days and that 50 percent retain employment for 180 days. In comparison, contractors in San Diego County face a 90-percent level for 30-day employment retention and a 60-percent level for 180-day retention. Appendix IV provides additional details on the performance levels established by each locality. In most cases, nonprofit and for-profit contractors performed similarly. Across the locations we reviewed, there are eight instances in which a local site had data on the comparable performance of nonprofit and for-profit contractors. In five of these eight instances, the percentages of nonprofit and for-profit contractors that met the measures were similar. In each of the remaining three instances, for-profit contractors performed substantially better overall. In two locations we reviewed—Los Angeles County and San Diego County—county governments also provided TANF services. Overall, the relative performance of county-provided services and contracted services was mixed. For example, in San Diego County, the county performed better than one for-profit contractor and worse than another for-profit contractor in meeting performance levels for certain job retention rates. In Los Angeles County, one of two for-profit contractors performed better than the county in placing TANF recipients in jobs, while one of the two county providers achieved higher placement rates than the other for-profit contractor. At the remaining site, the District of Columbia, contracting officials were unable to provide information on how well TANF contractors met expected levels of performance. While the District has not established contractually specified performance levels for TANF contractors, these contractors do have performance-based contracts. For example, contractors receive a specified payment for each TANF recipient who becomes enrolled in work-related activities, is placed in a job, or retains employment for a certain period of time. However, District officials were unable to provide us with an assessment of TANF contractors’ performance in serving TANF recipients. The contracting out of TANF-funded services is an important area for several reasons. First, the magnitude of TANF contracting is substantial, involving at least $1.5 billion in federal and state funds in 2001, which represents at a minimum 13 percent of the total amount states expended for TANF programs (excluding expenditures for cash assistance). In 2001, about a quarter of the states contracted out 20 percent or more of the amounts they had expended for TANF programs in fiscal year 2000, ranging up to 74 percent. Second, PRWORA expanded the scope of services that could be contracted out to nongovernmental entities, such as determining eligibility for TANF. Third, some states are using new entities—local workforce boards—that procure TANF services and are responsible for overseeing TANF contractors. Problems with the performance of TANF contractors have been identified in some cases, but there is no clear pattern of a greater incidence of these problems among nonprofit versus for-profit contractors. At two of the nine localities we reviewed, auditors had disallowed certain costs claimed by several contractors, and arrangements had been made for the contractors to repay the unallowable costs.
We found more widespread instances of contractors at the local sites not meeting their contractually established performance levels in areas such as work participation, job placement, and job retention rates. Contracting agencies at the local sites had established financial incentives for contractors by basing payments to contractors in whole or in part on their performance in such areas. While meeting the service needs of TANF recipients can present many challenges for contractors, doing so has become even more important now that these recipients face time limits on the receipt of TANF. Effective oversight is critical to help ensure contractor accountability for the use of public funds, and our review identified problems in some cases with state and local oversight of TANF contractors. At the national level, our review of state single audit reports found internal control weaknesses for over a quarter of the states that potentially affected the states’ ability to effectively monitor TANF contractors. The extent to which state single audits have identified problems with subrecipient monitoring involving TANF funds is generally equal to or greater than that for several other social service programs in which contracting occurs with nongovernmental organizations. Moreover, in four of the six states we visited, independent audits have identified deficiencies in state or local oversight of TANF contractors. However, HHS officials told us that they do not know the extent and nature of the problems pertaining to the oversight of TANF contractors that state single audit reports have cited because HHS does not analyze these reports in such a comprehensive manner. This is due, in part, to HHS’s focus on those problems identified by single audit reports that involve unallowable or questionable costs. While such problems certainly warrant high priority, the result is that there is not adequate assurance that identified deficiencies pertaining to the monitoring of TANF contractors are being corrected in a strategic manner. Greater use of single audits as a program management tool by HHS would provide more assurance that TANF contractors are being held accountable for the use of public funds. For example, HHS could use state audit reports more systematically by obtaining additional information about the extent to which nongovernmental TANF contractors are involved in the subrecipient monitoring deficiencies cited in these reports, identifying the most commonly reported types of deficiencies, and tracking how often the same deficiencies recur for individual states. To facilitate improved oversight of TANF contractors by all levels of government, we recommend that the Secretary of HHS direct the Assistant Secretary for Children and Families to use state single audit reports in a more systematic manner to identify the extent and nature of problems related to state oversight of nongovernmental TANF contractors and to determine what additional actions may be appropriate to help prevent and correct such problems. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. Once we receive comments from HHS, the comments and our response will be incorporated in the final report. Please contact me at (202) 512-7215 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix V.
To identify the extent and nature of Temporary Assistance for Needy Families (TANF) contracting, we conducted a national survey of all 50 states, the District of Columbia, and the 10 counties with the largest federal TANF-funding allocations in each of the 13 states that administer their TANF programs locally. Contracting for TANF-funded services occurs at different levels of government—state, local, or both—and data on TANF-funded contracts are maintained at various levels of government. We developed three survey instruments to accommodate these differences. The first survey instrument, which requested state data only, was sent to the 13 states that contract at both levels of government or locally only but maintain data separately. For these 13 states, a second survey instrument, which requested data on contracts entered into at the local level, was sent to the 10 counties that receive the largest TANF allocations in each of these states to determine how much contracting takes place in their larger counties. The third survey instrument, which requested data on state-level and local-level contracts, was sent to the remaining 37 states and the District of Columbia (see app. II for this survey instrument). All three survey instruments were pretested with appropriate respondents in six states. In addition to obtaining data through our national survey, we also obtained data from HHS on federal TANF and state maintenance-of-effort funds for fiscal year 2000. We did not independently verify these data. The response rate for the survey instrument sent to the counties in the 13 states was 78 percent. The response rate for the remaining survey instruments sent to state governments was 100 percent. Since our survey did not cover all counties in the 13 states that contract for TANF services locally, the total number of TANF-funded contracts and their dollar value may be understated. In addition, eight states that maintain data on local-level contracting did not provide us with these data. We subsequently contacted survey respondents who had indicated that the determination of eligibility had been contracted out to confirm that this was for the TANF program and to determine whether contractors determined eligibility for cash assistance or other TANF-funded services. To obtain information on approaches used by the federal government to oversee TANF contracting, we met with officials in HHS’s Administration for Children and Families in Washington, D.C., and conducted telephone interviews with staff in HHS regional offices in Atlanta, Chicago, Dallas, New York, Philadelphia, and San Francisco. We also interviewed the director of HHS’s National External Audit Review Center to learn how the agency uses single audit reports to oversee procurement processes and contractor monitoring. In addition, we analyzed the single audit database and reviewed state single audit reports. To obtain information on approaches used by state and local governments to ensure compliance with bid solicitation and contract award requirements and to oversee contractor performance, we conducted site visits to California, the District of Columbia, Florida, New York, Texas, and Wisconsin. We met with state TANF officials in these states.
In addition, we met with procurement officers, contract managers, auditors, and private contractors in the following nine locations: Austin and Houston, Texas; the District of Columbia; Los Angeles County and San Diego County, California; Miami-Dade and Palm Beach, Florida; Milwaukee, Wisconsin; and New York City, New York. We elected to visit these localities because they all serve a large portion of the TANF population and have at least one large contractor providing TANF-funded services. To obtain additional perspectives on TANF contracting, we interviewed representatives from government associations (American Public Human Services Association, Council of State Governments, National Conference of State Legislatures, and the National Association of Counties) and unions (American Federation of State, County, and Municipal Employees at the national office and in Milwaukee County, Wisconsin). We also reviewed various audit reports for the state governments, local governments, and nonprofit contractors that we interviewed in the nine locations to determine whether auditors found instances of noncompliance with bid solicitation and contract award requirements or contract monitoring. In addition, we selected 7 TANF-funded contracts with nonprofit organizations and 10 TANF-funded contracts with for-profit organizations to obtain information on their contract structure, services provided, and other relevant information.

Appendix III: Problems Cited with TANF Subrecipient Monitoring by State Single Audits, 1999 and 2000

2000: The state lacked procedures to ensure that subrecipient nonprofit organizations used TANF funds only for allowable purposes as required by TANF regulations. The state failed to inform nonprofit subrecipients of the source and amount of TANF funds they received. As a result, the state cannot provide assurance that nonprofit organizations are complying with federal requirements, including TANF requirements for allowable activities, allowable costs, and suspension and debarment of contractors.

In some cases, the state did not provide subrecipients with information about the sources of federal funds they received. The lack of proper notification to subrecipients of federal award information increases the risk of the improper use and administration of federal funds.

In some cases, the state did not notify subrecipients that the funding they received originated from TANF. The lack of proper notification to subrecipients of federal award information increases the risk of the improper use and administration of federal funds, including limited assurance that proper audits are conducted of those funds.

The single audit report references a state inspector general report that identified inadequate state oversight of local workforce coalitions that administer TANF funds and inadequate procurement and cash management practices by the local coalitions.

The state has not ensured that significant deficiencies related to electronic benefit transfer cards are corrected on a timely basis. The state did not issue monitoring reports to counties within a consistent timeframe.

The 1999 finding on not notifying subrecipients of the federal funding sources from which they received funds was subsequently reported in 2000, including the associated risks reported in the prior year.

The state did not provide information to some subrecipients on the sources of federal funds it distributed to them.
The state did not provide this information because it initially considered the service providers to be vendors rather than subrecipients, and as such, the state did not believe it was necessary to notify the service providers of the federal award information. Failure to inform subrecipients of the federal award information could result in subrecipients improperly reporting expenditures of federal awards, expending federal funds for unallowable purposes, or not receiving a single audit in accordance with federal requirements.

The state did not ensure that all nongovernmental contractors submitted their required audit reports or requested an extension. As a result, the state cannot be assured that subrecipients expended federal awards for their intended purpose and complied with federal requirements.

1999: As a result, the state cannot be assured that subrecipients spent grant monies for their intended purpose and complied with federal requirements.

The state continues to lack an adequate monitoring system to ensure that federal subrecipients and social services contractors are audited in accordance with federal, state, and department regulations.

For the seventh consecutive year, the state does not have an adequate monitoring system to ensure that federal subrecipients and social services contractors are audited in accordance with federal, state, and department regulations. In addition, the audit identified $267,749 in questionable costs for TANF. For 35 percent of the contracts audited, the contract did not include required federal award information and information on applicable compliance requirements.

The state cannot determine if all required audit reports are received and lacks review procedures to ensure that the information entered into the audit tracking system is accurate and complete. State policy and procedures relating to audit follow-up for subrecipient audits need to be revised to include current official policies.

The state is not able to ensure the completeness or accuracy of its system for tracking the total amount of funds provided to subrecipients. The state's internal control mechanisms did not provide for the proper identification, monitoring, and reporting of payments to all subrecipients.

The state's contract management database excludes several entities that received payments of federal funds. As a result, the state could not be assured that all entities receiving funds were identified as subrecipients, when appropriate, and monitored. In addition, self-certification of entities as subrecipients or vendors increases the risk that the state is not properly identifying and monitoring subrecipients.

While OMB Circular A-133 requires states to monitor subrecipients to ensure compliance with laws, regulations, and provisions of contracts, the state agency did not have policies and procedures in place to monitor the activities of subrecipients.

The state did not verify the amount of federal financial assistance expended by subrecipients, which should be done to determine which subrecipients require an audit. The state had not implemented an effective procedure for documenting the fiscal year-end for each new subrecipient. Two of 15 subrecipients tested did not submit their 1998 audit reports in a timely manner, and the state did not perform follow-up procedures in a timely manner. For 5 of 15 subrecipients tested, the state's review of the audit reports was performed 6 months or more after the state received the reports.
Without adequate control over the submission of audit reports and prompt follow-up of audit findings, noncompliance with federal regulations by subrecipients could occur and not be detected.

1999: Local offices of the state agency reported that they could not locate over 6 percent of the case files requested for detailed review. Without case files, adequate documentation is not available to verify the eligibility of clients and the appropriateness of benefits paid.

The state did not properly monitor the federal funds expended by the Essex County Welfare Board for the Public Assistance Program. While an independent auditor issued a single audit report for Essex County, the audit excluded the Public Assistance Program because of the lack of internal controls related to some components of the program. Payments to public assistance recipients are made through an electronic benefit transfer (EBT) system administered by a contractor, but EBT account activity has not been reconciled to the state's automated system for the public assistance program.

Eleven of the 58 local districts did not submit their single audit reports within the required 13-month period. The state did not maintain sufficient documentation to adequately monitor advance payments to, and expenditures of, contractors providing child care services. The state's procedures for reviewing subrecipient audit reports were inadequate. Errors and omissions in reports on subrecipient expenditures went undetected. The state did not conduct expenditure reviews to ensure that amounts disclosed in subrecipient audit reports agreed with expenditure records maintained by the state.

As reported in the prior audit, the state did not perform sufficient monitoring procedures to provide reasonable assurance that subrecipients administered federal awards in compliance with federal requirements. The reported problem remains unresolved, as the state did not provide reasonable assurance that services and assistance were provided to eligible families. Eleven of the 58 local districts did not submit their single audit reports within the required 13-month period. The state does not perform an adequate desk review of local districts' single audit reports to ensure that submitted reports were performed in accordance with federal requirements.

The state did not always perform or document a review of the counties' eligibility determination process to provide reasonable assurance that services and assistance were provided to eligible families. The state did not always monitor to ensure that sanctions were imposed on TANF recipients who did not cooperate with the child support enforcement office. The state did not perform monitoring procedures to provide reasonable assurance that the counties used Social Services Block Grant funds for only eligible individuals and allowable service activities.

1999: The state's fiscal and program monitoring of local workforce boards does not provide reasonable assurance that TANF funds are being spent appropriately. Current fiscal monitoring procedures are inconsistent and lack program-specific attributes. For example, state fiscal monitors generally do not compare a local workforce board's funding allocation for specific programs to its subcontractor's budget to ensure that the board is passing on the funds as required. Federal and state compliance is not ensured by the limited scope of reviews. The state conducted limited program monitoring of only 4 of 18 boards that had TANF contracts in place.

No problems were cited.
While the 2000 state single audit did not report monitoring problems, another state audit issued in March 2001 reported that local workforce boards still needed to make significant improvements in their contract monitoring. The audit reported that improvements are needed to ensure proper accounting for program funds, management of contracts with service providers, and achievement of data integrity.

The following individuals made important contributions to this report: Barbara Alsip, Elizabeth Caplick, Mary Ellen Chervenic, Joel Grossman, Adam M. Roye, Susan Pachikara, Daniel Schwimer, and Suzanne Sterling.

Related GAO Products

Welfare Reform: More Coordinated Federal Effort Could Help States and Localities Move TANF Recipients With Impairments Toward Employment. GAO-02-37. Washington, D.C.: 2001.
Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: 2001.
Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO/HEHS-01-368. Washington, D.C.: 2001.
Welfare Reform: Progress in Meeting Work-Focused TANF Goals. GAO-01-522T. Washington, D.C.: 2001.
Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: 2000.
Social Service Privatization: Ethics and Accountability Challenges in State Contracting. GAO/HEHS-99-41. Washington, D.C.: 1999.
Social Service Privatization: Expansion Poses Challenges in Ensuring Accountability for Program Results. GAO/HEHS-98-6. Washington, D.C.: 1997.
Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97-138. Washington, D.C.: 1997.
Privatization: Lessons Learned by State and Local Governments. GAO/GGD-97-48. Washington, D.C.: 1997.
Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices. GAO/HEHS-97-4. Washington, D.C.: 1996.
The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 changed the nation's cash assistance program for needy families with children by replacing the Aid to Families with Dependent Children program with the Temporary Assistance for Needy Families (TANF) block grant. As specified in PRWORA, TANF's goals include ending the dependence of needy families on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of non-marital pregnancies; and encouraging two-parent families. Contracting with nongovernmental entities to provide TANF-funded services occurs in most states and exceeded $1.5 billion in federal and state funds in 2001. A GAO survey indicated that the most commonly contracted services included education and training, job placement, and support services to promote job entry or retention. The Department of Health and Human Services (HHS) relies primarily on state single audit reports to oversee TANF contracting by states and localities. HHS officials told GAO that their regional offices follow up on the TANF deficiencies identified and that HHS focuses on reported deficiencies that involve unallowable or questionable costs. However, HHS officials said that they do not know the extent and nature of problems pertaining to the oversight of nongovernmental TANF contractors that have been cited in state single audits. State and local governments rely on third-party mechanisms, including bid protests, judicial processes, and external audits, to help ensure compliance with bid solicitation and contract award procedures. They use various approaches to oversee TANF contractors, but problems persist in contract oversight and contractor performance.
Organ transplants are becoming increasingly common. The 28,352 organ transplants performed in the United States in 2007 represent an increase of about 40 percent since 1997. (See fig. 1.) Kidney transplants are the most common procedure, accounting for almost 60 percent of transplants. Most transplanted organs come from deceased donors, but a significant portion (22 percent in 2007) come from living donors who may donate, for example, a kidney or a segment of liver or lung. As of January 2008, 254 U.S. hospitals had a transplant center; collectively, these centers operated 844 individual transplant programs. (See table 1.) Nearly all states had at least one transplant center, but some types of transplant programs, such as lung or intestine transplant programs, were located in a limited number of states. The organ transplantation process involves the following steps.

Step 1: The process begins when a patient's physician determines that an organ transplant may be necessary and refers the patient to a transplant program for evaluation.

Step 2: If the transplant program determines that the patient is a candidate for transplantation, the individual is registered on the national organ transplant waiting list maintained by the Organ Procurement and Transplantation Network (OPTN).

Step 3: When an organ becomes available, the local organ procurement organization enters information about the donor organ into a national computer system operated by the OPTN. The computer system generates a ranked list of potential recipients based on how closely their medical characteristics—such as blood type, organ size, and genetic makeup—match the donor's, as well as on the urgency of their medical conditions, their time spent on the waiting list, and their proximity to the donor.

Step 4: Transplant programs whose patients appear on the list are contacted. The decision whether to accept an organ rests with the patient's transplant team. Because the length of time organs can viably be kept outside the body is limited, the transplant team has 1 hour to make its decision. If the organ is not accepted, it is offered to the center with the next patient on the list until the organ is placed.

Step 5: Once the organ is accepted for a potential recipient, a surgical team comes to the donor hospital to recover the organ. The recovered organ is transported from the donor to the recipient hospital for transplantation into the patient.

The OPTN was created pursuant to the National Organ Transplant Act, which called for HHS to provide by contract for the establishment and operation of the OPTN to manage the nation's organ allocation system. Prior to that time, national policies regarding transplantation did not exist and organ allocation was carried out on an ad hoc basis. The OPTN's functions include maintaining a list of patients waiting for transplants, operating a system for matching donated organs with individuals on the list, establishing medical criteria for allocating organs, collecting and analyzing data on organs donated and transplanted, and conducting work to increase the supply of donated organs. The OPTN's members include all transplant centers and organ procurement organizations in the country; tissue-typing laboratories; professional scientific and medical organizations; and other organizations and individuals interested in organ donation or transplantation, such as organ donors, recipients, and their families.
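To make the matching step (Step 3) concrete, the sketch below ranks hypothetical candidates for a donor organ. It is a deliberately simplified illustration rather than the OPTN's actual match algorithm, which applies organ-specific policies and many more factors; the field names, compatibility table, and score weights here are assumptions made for the example.

    from dataclasses import dataclass

    # Donor blood types mapped to compatible recipient types (standard ABO
    # compatibility; the real match also considers tissue typing, organ
    # size, and organ-specific allocation rules).
    BLOOD_COMPATIBILITY = {
        "O": {"O", "A", "B", "AB"},
        "A": {"A", "AB"},
        "B": {"B", "AB"},
        "AB": {"AB"},
    }

    @dataclass
    class Candidate:
        patient_id: str
        blood_type: str
        urgency: int          # hypothetical 1-5 scale; 5 = most urgent
        days_waiting: int
        miles_from_donor: float

    def match_score(c: Candidate) -> float:
        # Hypothetical weighting: urgency dominates, waiting time breaks
        # ties, and distance is penalized because organ viability outside
        # the body is time-limited.
        return c.urgency * 1000 + c.days_waiting - 0.5 * c.miles_from_donor

    def rank_candidates(donor_blood_type, waiting_list):
        # Filter to compatible candidates, then sort best-first.
        compatible = [c for c in waiting_list
                      if c.blood_type in BLOOD_COMPATIBILITY[donor_blood_type]]
        return sorted(compatible, key=match_score, reverse=True)

    ranked = rank_candidates("O", [
        Candidate("P1", "A", urgency=3, days_waiting=400, miles_from_donor=120),
        Candidate("P2", "AB", urgency=5, days_waiting=90, miles_from_donor=300),
    ])
    # P2 ranks first: its higher urgency outweighs P1's longer wait.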
The OPTN's Membership and Professional Standards Committee (MPSC) is responsible for overseeing the compliance of OPTN members with applicable federal regulations and OPTN policies. The OPTN collects most of the funding to cover its operating costs (estimated to be about $25 million in 2006) from candidate registration fees paid by OPTN members; funding from the Health Resources and Services Administration (HRSA) for the OPTN is capped at $2 million a year. From early in its history, the OPTN has been responsible for operating an equitable nationwide system of organ allocation. The OPTN develops detailed policies that govern the distribution of organs and other issues related to transplantation, such as the specific credentials required of transplant surgeons and physicians. HHS clarified the OPTN's oversight responsibilities in regulations implemented in 2000. The regulations require the OPTN to design plans and procedures for conducting ongoing and periodic reviews of all member transplant centers for compliance with the regulations and OPTN policies. The regulations also require the OPTN to advise the Secretary of Health and Human Services when the results of its reviews indicate noncompliance with the regulations or OPTN policies or otherwise indicate a risk to patient health or public safety. While the OPTN is required to monitor transplant programs' compliance with its policies, OPTN policies are considered voluntary or advisory. To promote transplant programs' voluntary compliance with OPTN policies, the OPTN employs a confidential review process in which OPTN members evaluate the medical care provided by colleagues to determine compliance with OPTN policies and regulations. The OPTN emphasizes that its confidential review process focuses on corrective action rather than punishment and is aimed at continuous quality and performance improvement. On its own, the OPTN can impose certain sanctions against noncompliant transplant programs, such as issuing a letter of warning or placing a program on probation. The OPTN can also request that the Secretary of Health and Human Services impose stronger enforcement actions, including terminating a program's ability to receive organs or reimbursement under Medicare. The OPTN contract with HRSA includes several requirements related to the oversight of transplant programs. For example, the contract requires the OPTN to conduct on-site reviews of heart, liver, and lung transplant programs at least once every 3 years and to perform ongoing analyses of organ allocations. The OPTN is also required to submit monthly reports to HRSA describing transplant program-specific instances of noncompliance with OPTN policies and the status of corrective action plans. To ensure that the OPTN is fulfilling its responsibilities to monitor transplant programs' compliance, HRSA officials participate as ex officio nonvoting members on the OPTN's Board of Directors and committees, including the MPSC. According to HRSA officials, the agency's presence on OPTN committees helps ensure that the committees' recommendations are consistent with federal laws and regulations. In addition, HRSA officials said that they and OPTN officials communicate regularly about all aspects of the OPTN's performance, including monitoring transplant program compliance. The Centers for Medicare & Medicaid Services (CMS) is responsible for overseeing organ transplant programs that receive Medicare reimbursement for transplant services.
At the time the high-profile cases came to light, CMS had different criteria and procedures for overseeing extra-renal and renal transplant programs participating in Medicare. Extra-renal transplant programs participated in Medicare by meeting the criteria set forth in various national coverage determinations (NCDs) published beginning in 1987. The NCDs provide that transplants of extra-renal organs for Medicare beneficiaries will be considered reasonable and necessary and therefore reimbursable under Medicare if they are performed in a facility that CMS approves as meeting specified criteria. For example, heart, liver, and lung transplant programs were required to have written patient selection criteria, perform a minimum number of transplants each year, and meet minimum patient survival rates. The NCDs for these programs did not include criteria for reevaluating the ongoing performance of Medicare-approved programs. Renal transplant programs participated in Medicare by meeting regulatory standards for facilities furnishing end-stage renal disease (ESRD) services. ESRD facilities include those providing dialysis services and renal transplant services. CMS monitored renal transplant programs' compliance with Medicare requirements by contracting with state survey agencies—generally state departments of health—to conduct routine on-site inspections known as surveys. If a survey found a facility out of compliance and a major deficiency went uncorrected, the facility was subject to termination from the Medicare program. In 2005, recognizing that existing requirements for extra-renal and renal transplant programs needed updating and that the NCDs did not include criteria for reassessing the performance of extra-renal transplant programs, CMS promulgated proposed regulations to establish a single set of Medicare requirements for both renal and extra-renal transplant programs. CMS's and, to a lesser extent, the OPTN's oversight of transplant programs was not comprehensive at the time high-profile problems came to light in 2005 and 2006. CMS did not actively monitor extra-renal transplant programs' compliance with criteria for Medicare approval. CMS monitored renal transplant programs through contracts with state agencies, but the surveys reviewed compliance with requirements that had not been substantially updated in decades and were limited in scope; also, not all programs were actively monitored. At the same time, the OPTN actively monitored transplant programs and took action to resolve identified problems, but its oversight activities fell short in some respects—the OPTN's monitoring did not include methods capable of promptly detecting problems at transplant programs that prolonged the time that patients waited for transplants, and the OPTN did not always meet its goals for conducting on-site reviews. CMS's oversight varied between extra-renal and renal transplant programs and was not comprehensive even for renal transplant programs, which received more oversight. At the time high-profile problems came to light in 2005 and 2006, CMS was not actively monitoring the ongoing compliance of Medicare-approved extra-renal transplant programs with the criteria specified in the NCDs, which included performing a minimum number of transplants per year and achieving a minimum patient survival rate.
Instead, CMS's procedure was to conduct only an initial review of an extra-renal program to determine if it met the criteria in the NCDs at the time the program applied for Medicare approval. Once an extra-renal transplant program received Medicare approval, CMS generally did not assess the program's continued compliance with NCD criteria. Although the NCDs for heart and liver transplant programs called for programs to submit an application for Medicare reapproval every 3 years, the NCDs did not specify, and CMS did not otherwise establish, a process for doing so, and programs continued to retain Medicare approval without reapplying. To oversee extra-renal transplant programs' ongoing compliance with criteria for Medicare approval, CMS relied on programs to self-report significant changes and on complaints from Medicare beneficiaries and others that would alert CMS to a potential problem. CMS officials or state surveyors conducted complaint investigations after receiving complaints against transplant programs or otherwise becoming aware of potential problems, such as through media reports. CMS officials in three of the five regions we contacted reported that they or state surveyors had investigated nine complaints against extra-renal transplant programs during the period of 2000 through 2006. For example, one of the three high-profile cases initially came to light after CMS received a patient complaint about a liver transplant program. CMS officials investigated the complaint and discovered that this transplant program had not had a full-time surgeon on staff in over a year. After completing the complaint investigation, CMS withdrew Medicare approval from the transplant program, which closed shortly thereafter. CMS's oversight of renal transplant programs was more active than its oversight of extra-renal transplant programs, although it also had limitations. CMS contracted with state agencies to periodically perform on-site surveys of renal transplant programs for compliance with Medicare requirements and had a process in place to resolve problems identified during these surveys. When state surveyors identified compliance problems during their reviews of renal transplant programs, CMS generally acted to resolve these problems by requiring programs to submit corrective action plans for coming back into compliance with requirements. According to CMS data, major problems were generally corrected within 90 days, and only one of the five CMS regional offices we contacted reported that CMS had withdrawn Medicare approval from a renal transplant program in its region since 2000 for failure to comply with Medicare requirements. This instance was the high-profile case involving an HMO that was unable to properly handle a large influx of patients to its program. Although CMS had a process in place to periodically review renal transplant programs through state agency surveys of ESRD facilities, the surveys reviewed compliance with requirements that had not been substantially updated in decades and were limited in scope. Medicare regulations for ESRD facilities, including renal transplant programs, were initially published in 1976 and, according to CMS officials, had not been substantially updated since then. For example, the regulations did not include a requirement that renal transplant programs achieve a minimum patient survival rate.
Experts in the transplantation field have since recognized the importance of patient-centered, outcome-oriented performance measures, such as survival rates, and recommended their use. In addition, while the Medicare requirements specified that renal transplant programs should perform a minimum number of transplants per year, CMS instructions to state surveyors did not call for them to verify that these numbers were achieved. Furthermore, CMS's process to review renal transplant programs did not ensure that all of these programs were actively monitored in practice. Our analysis of CMS data as of May 2007 showed that 31, or about 1 in 8, active, Medicare-approved renal transplant programs had been mistakenly classified as no longer participating in Medicare in CMS's survey database or had been mistakenly excluded from the database. According to CMS officials, these programs would not have been surveyed again after these mistakes occurred. Our analysis showed that as of May 2007 the length of time since the 31 programs had last been surveyed ranged from about 4 to over 20 years; over three-quarters of these programs had not been surveyed in the previous 10 years. By comparison, most correctly classified programs had been surveyed in the previous 4 years, although 34 programs had not, and 9 of those programs had not been surveyed in the previous 10 years. CMS did not have survey frequency goals specific to renal transplant programs. However, CMS has acknowledged that not all state agencies achieved CMS goals for conducting surveys of all ESRD facilities (of which renal transplant programs are a subset). CMS officials emphasized that the CMS survey and certification budget had not been fully funded during fiscal years 2005 through 2007. The OPTN's oversight, while more active and extensive than CMS's oversight, also had limitations; most notably, its monitoring methods were insufficient to promptly detect problems affecting patients waiting for transplants. The OPTN monitored transplant programs on an ongoing basis for numerous types of potential problems. The OPTN's oversight was conducted by both OPTN staff and its MPSC, which includes OPTN members who are medical professionals from the field of transplantation. The OPTN's numerous activities to monitor compliance with OPTN policies included reviewing information on patients placed on transplant waiting lists, allocations of organs from deceased donors, physician credentials, and timely submission of required data. These reviews were generally scheduled to occur weekly or quarterly. The OPTN also monitored on a quarterly basis two key indicators of potential performance problems at transplant programs—lower-than-expected patient and organ survival rates and failure to perform any transplants during a specified period of time. In addition, the OPTN conducted periodic routine on-site reviews of heart and liver transplant programs to review patient medical records; the OPTN's goal was to conduct these on-site reviews once every 3 years. The OPTN's monitoring activities identified many problems, ranging from minor anomalies with organ allocations to more significant problems, including one of the three high-profile cases. The case came to light after a routine OPTN on-site review led staff at the transplant program to self-report that a recipient of a liver transplant had inappropriately received the transplant ahead of others on the waiting list and that the program had falsified patient medical records in order to conceal its actions.
Our review of a sample of compliance cases showed that the OPTN most often identified members' noncompliance with OPTN policies through its routine on-site reviews (15 of 43 cases). Our review of OPTN compliance cases and performance data and discussions with OPTN officials indicated that the OPTN took steps to resolve compliance and performance problems it identified during its monitoring activities. As explained below, the OPTN's process for resolving members' noncompliance with OPTN policies differs from its process for resolving members' performance problems, such as lower-than-expected survival rates.

Noncompliance with OPTN policies. OPTN officials emphasize that the OPTN works to resolve most cases of noncompliance without resorting to strong enforcement actions. Our review of a sample of compliance cases showed that the length of time for cases to be fully resolved varied and depended on the nature of the case. For example, a relatively minor case involving an organ allocation discrepancy was resolved within 4 months, while a case involving problems with medical record documentation took about 3 years to resolve. The three high-profile cases are examples of cases in which the OPTN took strong enforcement actions. After the individual transplant programs involved in these cases had announced that they would voluntarily close, the OPTN continued to review the cases and eventually declared two of the transplant centers that operated these transplant programs "Members Not in Good Standing" and imposed a lesser sanction of probation on the third transplant center.

Performance problems. The OPTN flags for the MPSC's review transplant programs that are not achieving OPTN performance standards for survival rates or transplant activity, but these programs are not considered to be out of compliance with OPTN policies. Instead, the OPTN works with these programs until they meet the standards, sometimes sending peer review teams on-site to consult with the programs, or until problems are otherwise resolved (for example, if a program closes voluntarily). OPTN officials said that programs with low survival rates typically need to show improvement in outcomes before being released from review by the MPSC and emphasized that this can take some time. Of 72 cases the OPTN flagged for low survival rates in 2005, about 40 percent remained under review by the MPSC as of August 2007.

Although the OPTN conducted numerous types of monitoring activities, these activities did not incorporate methods capable of promptly detecting problems at transplant programs that prolonged the time that patients waited for transplants. For example, the OPTN regularly flagged programs for review that did not perform any transplants in a specified period of time. While helpful in detecting completely inactive programs, this particular method did not identify more subtle problems, such as a transplant program that was understaffed and was turning down organs offered for patients at markedly high rates. At the two transplant programs with high-profile problems affecting patient access to transplants, enough transplants were conducted that the programs were not flagged as inactive programs. In addition, the transplants that did occur were successful enough that the programs were not flagged as experiencing performance problems at the time the problems came to light.
However, far fewer transplants were conducted than would be expected given the numbers of patients on the waiting list, reflecting problems with understaffing that ultimately affected patients' access to transplants at these programs. Even though a targeted method for detecting these problems was not in place, separate pieces of information, if linked together, could have alerted the OPTN to at least one of the high-profile incidents. The OPTN, however, missed opportunities to link these separate sources of information. For example, OPTN staff who handle patient transfers were aware that an HMO was attempting to transfer hundreds of patients to its new transplant program at an unprecedented rate and was experiencing problems with the transfers. However, they did not alert other appropriate OPTN staff to the possible need for a compliance review or to look into the situation by, for example, reviewing available data that indicated far lower-than-expected numbers of transplants at the new program. The problem eventually came to light after a whistleblower alerted the news media. In addition to having insufficient methods to detect some of the high-profile cases, the OPTN was not always timely in conducting those monitoring activities that it performs on-site, namely routine on-site reviews and peer review site visits. Although the OPTN's goal was to conduct routine on-site reviews of heart and liver programs once every 3 years, it had fallen behind this schedule. In December 2006, 50 percent of continuously active heart and liver transplant programs had not been reviewed on-site in the previous 3 years and 38 percent had not been reviewed on-site in the previous 4 years. OPTN and HRSA officials attribute the delay in routine on-site reviews to HRSA's directive to the OPTN to study a new lung allocation policy. Additionally, in our review of performance data we observed that in some cases peer review site visits were not conducted on a timely basis. For about three-fourths of transplant programs (17 of 22) for which the MPSC recommended a peer review site visit from July 2005 through July 2006, the site visit had not yet occurred a year after being recommended. According to OPTN officials, a contributing factor in the delay was that the number of programs recommended to receive a peer review site visit significantly increased during 2005, resulting in a backlog. Since the high-profile cases came to light, CMS, HRSA, and the OPTN have made some changes and planned others to improve federal oversight of organ transplant programs; however, the full effect of these changes remains to be seen. CMS has begun monitoring extra-renal transplant programs and has finalized regulations that expand and unify Medicare requirements for all types of transplant programs and establish procedures for periodic review of transplant programs. The OPTN and HRSA are working to develop and implement a set of indicators to help the OPTN better identify problems that prolong the time patients wait for transplants. Implementation of CMS's new requirements is in its early stages, however, and CMS has not resolved the extent to which on-site surveys will be performed as part of its periodic reviews of programs for Medicare reapproval. Also, the OPTN's and HRSA's set of indicators has not yet been implemented. Further, while CMS, HRSA, and the OPTN have begun sharing basic transplant program data, how they will share additional information resulting from their oversight activities has not been resolved.
CMS has made substantial changes to its oversight: the agency began monitoring extra-renal transplant programs and, most significantly, finalized new regulations that apply to all types of transplant programs and that require on-site surveys of all transplant programs applying for Medicare approval. After high-profile problems came to light, CMS began monitoring extra-renal transplant programs' compliance with existing Medicare NCD criteria in 2006. According to CMS officials, the agency's initial monitoring effort revealed that nearly all 242 Medicare-approved programs were complying with NCD criteria for meeting minimum survival rates. A number of programs, however, were not in compliance with the NCD annual transplant volume criteria, which specify that programs must conduct a minimum number of transplants each year. CMS continued to monitor extra-renal transplant programs and ultimately found that a total of 49 extra-renal transplant programs did not meet the NCD transplant volume criteria in 2005, 2006, or both. As a result, CMS notified 11 programs that agency officials viewed as the most problematic that they could lose Medicare approval for failure to comply with NCD criteria. Ultimately, CMS withdrew Medicare approval from 1 of the 11 programs; of the remaining 10 programs, 2 withdrew voluntarily and 8 were implementing corrective action plans as of December 2007. (See table 2.) In March 2007, CMS made a more fundamental change to its oversight by publishing final regulations establishing a new set of Medicare requirements specifically for organ transplant programs. The regulations include 13 core requirements known as Medicare conditions of participation (CoPs). Whereas renal and extra-renal transplant programs were previously subject to different requirements and regulatory procedures, the new regulations provide a single set of CoPs and review procedures for all types of transplant programs. In addition, the new regulations both update and expand upon previous requirements. For example, the new CoPs incorporate an updated method for calculating survival rates that reflects current best practices. The CoPs also include entirely new requirements, such as those related to the protection of living donors. (See app. I for more information about the 13 CoPs.) The new regulations also bring CMS requirements into substantial alignment with OPTN policies. Specifically, 10 of the 13 CoPs pertain to areas addressed in OPTN policies. In some instances, CMS incorporated OPTN policies into its requirements such that these policies are now enforceable under federal regulation for Medicare-approved transplant programs. In some areas, the new regulations impose additional requirements not covered by the OPTN. For example, while OPTN policies require transplant programs to provide social support services, CMS's regulations further require that social support services be furnished by a qualified social worker, as defined by CMS. In other areas, the new CMS requirements cover matters not addressed in existing OPTN policies. For example, one CoP requires programs to implement formal quality assessment and performance improvement programs—a requirement not paralleled in OPTN policies. There are also areas of OPTN policies, largely pertaining to organ allocation, that the CMS regulations do not address.
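Both the NCD criteria described above and the clinical experience and outcomes requirements discussed next reduce to simple threshold screens over program-level data. The sketch below illustrates the idea; the threshold values and record fields are hypothetical assumptions for the example, since the actual criteria vary by organ type and are more detailed.

    MIN_ANNUAL_TRANSPLANTS = 12   # hypothetical volume criterion
    MIN_SURVIVAL_RATE = 0.73      # hypothetical one-year patient survival floor

    def screen_program(program):
        """Return a list of criteria the program fails, empty if none."""
        findings = []
        for year, volume in sorted(program["transplants_by_year"].items()):
            if volume < MIN_ANNUAL_TRANSPLANTS:
                findings.append(f"volume below minimum in {year} ({volume})")
        rate = program["one_year_survival_rate"]
        if rate < MIN_SURVIVAL_RATE:
            findings.append(f"survival rate {rate:.0%} below floor")
        return findings

    # A program meeting the survival floor but missing the volume criterion
    # in both years, like the 49 programs noted above:
    example = {
        "transplants_by_year": {2005: 9, 2006: 11},
        "one_year_survival_rate": 0.81,
    }
    print(screen_program(example))
    # ['volume below minimum in 2005 (9)', 'volume below minimum in 2006 (11)']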
In addition to updating and expanding requirements and more closely aligning them with OPTN policies, CMS's new regulations also subject transplant programs to initial and periodic reviews for compliance with the Medicare CoPs. Under the new regulations, all transplant programs seeking Medicare approval are required to apply for initial approval; programs that were previously Medicare approved must reapply. As part of determining compliance with the CoPs, each transplant program that applies for Medicare approval will undergo an on-site survey. Transplant programs that are in compliance with all CoPs will be approved for participation in Medicare for 3 years. Prior to the end of the initial 3-year approval period, CMS plans to reexamine data on three key requirements, which together compose one of the CoPs:

Data submission: Transplant programs must submit OPTN-required data to the OPTN within specified time frames.

Clinical experience: Transplant programs must generally perform at least 10 transplants over a 12-month period.

Outcomes: Transplant programs must achieve expected survival rates.

If a program is found to be in compliance with the three requirements of this CoP, under the new regulations CMS may choose whether to conduct an on-site reapproval survey of the program's compliance with additional CoPs. CMS plans to complete on-site surveys for transplant programs seeking initial Medicare approval over the course of 3 years. CMS officials reported that on-site surveys of transplant programs had begun as of August 2007. The agency is prioritizing the order in which these surveys will be conducted, so that programs that do not currently meet the clinical experience and outcomes requirements will receive the highest priority for surveys. According to CMS officials, the agency plans to complete these high-priority surveys by the end of fiscal year 2008; all initial surveys are planned to be completed by the end of fiscal year 2010. Until a new Medicare approval decision is made under the new regulations, currently approved extra-renal and renal transplant programs will remain approved under the NCDs and requirements for ESRD facilities, respectively. To address shortcomings in the OPTN's ability to detect problems affecting patients waiting for transplants, such as understaffing, the OPTN and HRSA, along with another HRSA contractor, are working to develop and implement a set of activity-level indicators. The set of indicators would be used to monitor programs for problems, such as understaffing, indicated by lower-than-expected activity levels in a manner similar to how the OPTN currently monitors programs for performance problems indicated by lower-than-expected survival rates. The set of indicators includes two existing indicators already developed by the OPTN, one of which, although available, was not previously reviewed by the MPSC, and a new organ acceptance rate indicator. The new indicator, which is intended to identify programs exhibiting lower-than-expected rates of organ acceptance, is a key component of the set of activity-level indicators and has been under development since January 2006. According to the OPTN, the organ acceptance rate indicator had been developed but not yet implemented for kidney and liver transplant programs as of February 2008.
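In concept, an organ acceptance rate indicator of this kind might compare each program's observed rate of accepting organ offers with an expected rate and flag programs that fall well below it, as in the sketch below. The report does not describe the indicator's actual methodology, so the expected rate, minimum offer count, and flagging threshold here are purely illustrative assumptions.

    def flag_low_acceptance(offers_received, offers_accepted,
                            expected_rate, min_offers=25,
                            ratio_threshold=0.5):
        """Flag a program whose acceptance rate is markedly below expectation."""
        if offers_received < min_offers:
            return False  # too few offers for a stable rate estimate
        observed_rate = offers_accepted / offers_received
        return (observed_rate / expected_rate) < ratio_threshold

    # A program that accepted 6 of 120 offers where roughly 25 percent
    # acceptance would be expected is flagged for review:
    print(flag_low_acceptance(120, 6, expected_rate=0.25))   # True

A screen like this would complement the inactivity and survival-rate indicators described earlier, since it can surface a program that performs enough transplants to avoid being flagged as inactive while still turning down offers at markedly high rates.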
With HRSA's encouragement, the OPTN has also taken steps to increase its capacity to conduct on-site monitoring activities and to improve internal communication. The OPTN substantially increased its staff in 2007 in order to get back on schedule in conducting on-site reviews once every 3 years. According to OPTN officials, the increase in staff will also help the OPTN address its backlog of peer review site visits and achieve its goal of conducting all peer review site visits within 3 months of the visit being recommended by the MPSC. To improve internal communication, the OPTN reported that since 2006, its leadership has emphasized the importance of shared communication, particularly across departments. As a result, according to the OPTN, staff responsible for managing the waiting list, including handling patient transfers, now meet frequently with staff responsible for monitoring policy compliance to share information about potential policy violations. Although CMS, HRSA, and the OPTN have taken steps to improve oversight of transplant programs since the high-profile cases came to light, work in three important areas remains unfinished. One key unresolved question is the extent to which CMS will conduct on-site reapproval surveys of transplant programs (as part of its new review procedures) after transplant programs gain initial Medicare approval under the new regulations. According to CMS's new regulations, CMS may choose not to conduct on-site reapproval surveys for transplant programs meeting data submission, clinical experience, and outcomes requirements. This means that CMS could potentially choose not to conduct any reapproval surveys for programs meeting these requirements. While CMS officials said that they see value in conducting reapproval surveys, just how CMS will apply its discretion remains unclear. As of January 2008, CMS officials said that the agency had not decided how many reapproval surveys it would conduct or how it would choose which programs to survey among those that meet the aforementioned requirements. They emphasized the agency's need to carefully consider resource constraints in making these decisions. A decision by CMS not to conduct an on-site reapproval survey at a transplant program means that compliance with some CoPs would not be reviewed unless there was a complaint investigation. As a result, problems at transplant programs unrelated to the data submission, clinical experience, and outcomes requirements—for example, a transplant program failing to provide required protections for living donors or to sufficiently staff its program—could go undetected. In two of the high-profile cases, staffing problems that ultimately affected patients' access to transplants would not have been detected by the outcomes indicator that CMS has now adopted, and the numbers of transplants performed per year at these programs exceeded or were close to CMS's clinical experience requirement. Additional questions remain regarding the extent to which CMS will accurately track on-site surveys to avoid the misclassification errors we identified in our review and complete the surveys on a timely basis. As a result of the new transplant regulations, renal transplant programs will no longer share Medicare identification numbers with dialysis facilities, and previously misclassified renal transplant programs will at some point receive a new, accurate classification in CMS's survey database once they are approved. However, the potential for transplant programs to be mistakenly classified may remain because transplant programs within the same hospital will share one transplant center Medicare identification number, according to CMS officials.
CMS officials said that they were highly aware of the need for their systems to accurately track the status of each transplant program separately. They said that they plan to test for this capability in their new tracking system for transplant programs, which remains under development. What also remains to be seen is the extent to which surveys will occur on a timely basis. Prior to the new regulations, state agencies did not always meet CMS goals for surveying ESRD facilities. Now, under the new regulations, the responsibilities of the state agencies conducting on-site surveys of transplant programs will increase, since they will be required to survey both renal and extra-renal transplant programs. With respect to initial approval surveys, CMS's stated plan is that high-priority surveys of transplant programs will be completed by the end of fiscal year 2008, but as of January 2008, CMS officials expressed some uncertainty about meeting this goal. Initial surveys of transplant programs have been given a relatively high priority in the state agency workload, but it is not certain that this high priority level will continue because CMS has revised state agency workload priorities in the past. Further, the priority level for reapproval surveys is not yet known; a lower priority could affect how frequently surveys occur. The last unresolved question concerns the OPTN's and HRSA's planned organ acceptance rate indicator, which, as part of a set of activity-level indicators, could potentially improve the OPTN's ability to detect transplant programs experiencing problems that prolong the time patients wait for transplants. According to the OPTN, the organ acceptance rate indicator for kidney and liver transplant programs has been developed but, as of February 2008, has not yet been implemented; HRSA officials expect the indicator to be in place within 1 year. HRSA and OPTN officials reported that they are considering developing organ acceptance rate indicators for transplant programs for other organ types. Before extending the indicator to other types of programs, however, the OPTN will first assess the effectiveness of the indicator at detecting potential problems at kidney and liver transplant programs, which perform larger volumes of transplants, and determine the feasibility of developing an indicator for programs with lower transplant volumes, such as heart and lung transplant programs. CMS, HRSA, and the OPTN have recognized the importance of sharing data on transplant programs with one another and have taken initial steps to share basic data. To help CMS assess programs' compliance with its new Medicare requirements, the OPTN (through HRSA) is now sending CMS certain basic transplant program data on a quarterly basis. For example, the new Medicare regulations require transplant centers to be OPTN members, so the OPTN is providing data on the status of each transplant center's membership in the OPTN. (See table 3.) While this basic data sharing represents progress, CMS, HRSA, and the OPTN have additional information resulting from their oversight activities that could be shared. The exchange of this information is important because CMS and the OPTN conduct different monitoring activities and, as a result, may have different information about transplant programs that could be relevant to each other.
For example, while both CMS and the OPTN conduct on-site reviews of transplant programs, the OPTN's on-site reviews focus largely on medical records review, while CMS's on-site surveys are more broadly scoped. If the OPTN determined during an on-site review that the medical urgency assigned to patients by a transplant program was not supported by its medical records, this information could be of interest to CMS if this practice inappropriately reduced the chances that others on the waiting list would receive a transplant. As another example, the OPTN and HRSA are working to put into place their organ acceptance rate indicator, which CMS officials said they would be interested in using. Information from CMS's and the OPTN's investigations could also be potentially important to share. For example, if CMS investigated a complaint from a patient about the length of time he or she had been waiting for a transplant and determined that the delay was caused by the program failing to update the patient's health status, a violation of OPTN policy, the OPTN might want to flag the program for closer monitoring. CMS and HRSA have recognized the importance of sharing information from their oversight activities, but the agencies have not yet reached agreement on how they would do so. CMS submitted a draft proposal to HRSA in April 2007 describing how CMS and HRSA could potentially share information about organ transplant programs. CMS and HRSA officials have since discussed the initial proposal, including possible revisions, but their progress has been slow. As of February 2008, CMS and HRSA had yet to reach agreement or establish a time frame for doing so. According to HRSA officials, it had taken the agencies several months to better understand each other's oversight processes, and both agencies needed to further explore their information needs. CMS officials also indicated that further issues would need to be resolved before an agreement could be reached. As part of any agreement to share information from their oversight activities, CMS and HRSA will need to determine precisely what information they will share and at what point in their oversight processes they will share it. CMS and HRSA have discussed but not resolved these issues:

Nature of information to be shared. It will be important for CMS and HRSA to determine specifically what information they will share from their oversight activities. For example, while CMS's initial proposal addressed how CMS and HRSA could share information from CMS's and the OPTN's investigations of serious complaints, such as those involving threats to patient health and safety, CMS and HRSA officials have since discussed whether to share information from all complaints. In addition, CMS and HRSA have not determined to what extent information from routine inspections, such as the OPTN's on-site reviews and CMS's on-site surveys, will be shared and at what level of detail. For example, CMS's initial proposal called for CMS to notify the OPTN about its completed on-site surveys and to indicate whether the transplant program surveyed had a plan of correction, but it did not call for CMS to provide information on the deficiencies CMS found. HRSA officials have since expressed their interest in having this more detailed information.

Timing of information sharing. A more difficult challenge that CMS and HRSA face is agreeing when to share information about potential problems at transplant programs.
Officials from both CMS and HRSA consider the severity of the identified problem(s) with a program to be a key factor in determining the appropriate time for information sharing. In this regard, officials from both agencies stated a willingness to promptly share information on potentially serious problems. Agreeing on just when to exchange information on less serious problems has been more difficult for the agencies, in part because of differences in their approaches to oversight. On the one hand, CMS officials emphasize their agency's obligation to investigate any indications of noncompliance with Medicare requirements and prefer to be notified as soon as possible if the OPTN discovers a potential problem indicating noncompliance with Medicare CoPs. On the other hand, HRSA officials have emphasized that the viability and success of the OPTN's performance improvement process depends upon transplant programs speaking openly about their practices or past events. HRSA officials contend that the possibility of such information being shared with CMS, a regulatory agency, could cause transplant programs to be less candid about discussing real or potential problems, making it more difficult for the OPTN to help them return to compliance. CMS, HRSA, and the OPTN recognize the gaps in oversight that existed when serious problems were exposed at transplant centers and have taken significant steps to strengthen federal oversight. The actions they have taken will help improve standards for transplant programs and should improve detection of potential problems. These actions include CMS's issuance of new regulations that expand and update requirements for transplant programs. In addition, CMS plans to conduct on-site surveys of all transplant programs seeking initial Medicare approval under the new regulations and to regularly review certain transplant program data, which should reduce the chances of problems going undetected by the agency. Similarly, if the OPTN's and HRSA's efforts to develop and implement a set of activity-level indicators to detect problems that prolong the time patients wait for transplants are successful, the indicators will likely result in earlier detection of these more subtle problems. The full effect of these planned improvements, however, is unknown at this time, and much has yet to be accomplished. While surveyors have begun conducting on-site surveys for initial Medicare approval, CMS expects these surveys may take 3 years to complete. CMS is still in the process of designing its tracking system for transplant programs, and it is important that the system include mechanisms to check that transplant programs remain accurately classified in the CMS survey database over time. The OPTN and HRSA are working on implementing their set of activity-level indicators for kidney and liver transplant programs. It will be important for the OPTN and HRSA to implement the activity-level indicators to the extent feasible to provide improved monitoring tools to detect problems affecting patient access to transplants like those involved in the high-profile cases in 2005 and 2006. Attending to these areas is critical for effective oversight, and we encourage CMS, HRSA, and the OPTN to continue their efforts to implement these initiatives. Of more concern are two other issues. The first is how CMS will ultimately conduct on-site surveys for transplant programs seeking reapproval under the new Medicare regulations.
Under the regulations, CMS may choose not to conduct such surveys for transplant programs meeting data submission, clinical experience, and outcomes requirements. Not conducting on-site reapproval surveys may limit CMS's ability to monitor these transplant programs' compliance with other Medicare CoPs, for example, whether transplant programs are providing required protections for living donors, and to detect problems like those involved in some of the high-profile cases. CMS has not yet developed a process to determine the scope of the transplant programs (number or type) to be included in reapproval surveys or the criteria for determining which, if any, transplant programs that meet the three requirements will receive such surveys. Given the potential importance of these reapproval surveys, we believe that having a methodology that ensures that CMS conducts surveys of at least some transplant programs meeting the three requirements is critical to maximize CMS's opportunities to identify potential problems in a timely manner.

We also have a concern about the pace of progress being made to share information about the oversight activities of CMS, HRSA, and the OPTN. Agency officials believe, as we do, that their ability to identify potential problems could be enhanced by sharing information resulting from their oversight activities. While CMS's draft proposal for sharing such information is an important first step in reaching an agreement on this issue, CMS and HRSA have yet to finalize an agreement or establish a time frame for doing so. Without a definitive time frame for reaching agreement, there is increased risk that the negotiation process among these agencies could languish, and they could miss opportunities to detect and remedy problems with transplant programs. Furthermore, in developing an agreement, CMS and HRSA will need to fully articulate what types of information will be shared from their oversight activities and when they will share it. While we agree that there are challenges associated with reaching agreement on this issue, we also believe it is important to settle these issues and finalize a clear written agreement that maximizes information sharing as appropriate and better ensures that all parties are aware of critical information in time to take appropriate action. Once CMS and HRSA reach and implement an agreement, they may wish to periodically assess how effectively it is working for each of them to improve their oversight.

To improve federal oversight of organ transplant programs, we recommend that the Secretary of Health and Human Services:

(1) Direct the Administrator of CMS to develop a methodology for conducting on-site surveys of transplant programs seeking Medicare reapproval that ensures that at least some transplant programs meeting data submission, clinical experience, and outcomes requirements receive on-site surveys.

(2) Direct the Administrators of CMS and HRSA to establish a time frame for finalizing an agreement for the agencies to share information resulting from CMS's and the OPTN's oversight activities. The agreement should, at a minimum, specify the types of information CMS, HRSA, and the OPTN will share and specify at what point in CMS's and the OPTN's oversight processes this information will be exchanged.

We received comments on a draft of this report from HHS. (See app. II.) The department concurred with both of our recommendations and commented that CMS recognizes the need to increase its oversight of organ transplant programs.
HHS agreed with our recommendation that CMS develop a methodology for conducting on-site surveys for Medicare reapproval to ensure that at least some programs meeting certain Medicare criteria are surveyed, noting that CMS has developed an initial framework for doing so but that its implementation will depend on the resources available for survey and certification activities. CMS highlighted several factors the agency may use as part of a methodology to determine survey frequencies for individual transplant programs, including prior survey results, program changes, program indicators, and the time interval since the last survey. HHS also agreed with our recommendation to establish a time frame for finalizing the agreement between HRSA and CMS to share information from their oversight activities, noting that HRSA and CMS have been working to develop and finalize such an agreement. Specifically, the department commented that CMS has conveyed a proposal to HRSA regarding the criteria and process that CMS would use in sharing information, and that CMS would like the agreement with HRSA to be finalized by June 30, 2008. HHS also noted that even though a formal agreement is not yet in place, CMS and HRSA have on several occasions already shared oversight information about particular programs. The department also provided technical comments, which we incorporated as appropriate.

As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrators of CMS and HRSA, and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. GAO staff who made major contributions to this report are listed in appendix III.

On March 30, 2007, the Centers for Medicare & Medicaid Services (CMS) published a final rule promulgating requirements for Medicare approval and reapproval of transplant centers to perform organ transplants. The regulations, which became effective June 28, 2007, delineate Medicare conditions of participation for heart, heart-lung, intestine, kidney, liver, lung, and pancreas transplant centers. Table 4 presents a summary of the key requirements for each condition of participation.

In addition to the contact named above, Kim Yamane, Assistant Director; Emily Beller; Susannah Bloch; George Bogart; Manuel Buentello; Linda McIver; Colin Smith; Stanley Stenersen; and Suzanne Worth made key contributions to this report.
Media reports in 2005 and 2006 highlighted serious problems at organ transplant programs, calling attention to possible deficits in federal oversight. Two agencies in the Department of Health and Human Services (HHS) oversee organ transplant programs: the Centers for Medicare & Medicaid Services (CMS) oversees transplant programs that receive Medicare reimbursement, and the Health Resources and Services Administration (HRSA) oversees the Organ Procurement and Transplantation Network (OPTN), which manages the nation's organ allocation system. GAO was asked to examine (1) federal oversight of transplant programs at the time the high-profile cases came to light in 2005 and 2006 and (2) changes that federal agencies have made or planned since then to strengthen oversight. GAO interviewed CMS, HRSA, and OPTN officials and reviewed agency documents and data, as well as a CMS draft proposal for sharing information with HRSA.

Limitations in federal oversight of organ transplant programs existed when high-profile problems came to light in 2005 and 2006. These high-profile cases included, for example, a transplant program that lacked a full-time surgeon for over a year and had been turning down organs offered for patients at markedly high rates. At that time, CMS did not actively monitor heart, liver, lung, and intestine transplant programs, relying instead primarily on complaints to detect problems. CMS periodically monitored kidney transplant programs through on-site inspections, known as surveys, but the surveys reviewed compliance with requirements that had not been substantially updated in decades and were limited in scope. In addition, some programs were not actively monitored. At the same time, the OPTN actively monitored transplant programs for many types of potential problems and worked with the programs to resolve identified problems. The OPTN's monitoring activities, however, were not sufficient to promptly detect certain problems that prolonged the time that patients waited for transplants, such as inadequate staffing at transplant programs.

CMS, HRSA, and the OPTN have made or plan to make changes to strengthen their oversight of organ transplant programs, but the effectiveness of these changes will depend, in part, on implementation and information sharing by CMS, HRSA, and the OPTN. In 2006, after high-profile problems came to light, CMS began actively monitoring heart, liver, lung, and intestine transplant programs. In a more fundamental change, CMS published new regulations in 2007 that establish a single set of updated requirements for all Medicare-approved transplant programs and provide for periodic reviews of programs. The OPTN has been working with HRSA to develop and implement a set of indicators to better detect problems that prolong the time patients wait for transplants. However, neither CMS nor the OPTN has fully implemented these changes, and their full effect remains to be seen. In particular, CMS has not determined the extent to which it will conduct on-site surveys in its periodic reviews of programs for Medicare reapproval. Under the new regulations, CMS may choose not to conduct on-site reapproval surveys of programs meeting certain Medicare requirements. Not conducting these surveys may limit CMS's ability to monitor for compliance with other Medicare requirements and to detect problems like some of those involved in the high-profile cases.
As of January 2008, CMS had not determined how it would choose which transplant programs, if any, to survey among those for which it has discretion. Further, while CMS, HRSA, and the OPTN recognize the value of sharing information about potential problems at transplant programs, how they will share additional information from their oversight activities has not been resolved. A definitive agreement between CMS and HRSA on this issue will better ensure that problems at transplant programs are detected and corrected in a timely manner.
Fuel for commercial nuclear power reactors is typically made from low enriched uranium fashioned into thumbnail-size ceramic pellets of uranium dioxide. These pellets are fitted into 12- to 15-foot hollow rods, referred to as cladding, made of a zirconium alloy. The rods are then bound together into a larger assembly. A typical reactor holds about 100 metric tons of fuel when operating—generally from 200 to 800 fuel assemblies. The uranium in the assemblies undergoes fission—a process of splitting atoms into fragments and neutrons that then bombard other atoms—resulting in additional fission reactions and a sustainable chain reaction that creates an enormous amount of heat and radioactivity in the form of radioisotopes, or radioactive substances. The heat is used to generate steam to turn a turbine, which generates electricity. The radioisotopes produced in a reactor can remain hazardous from a few days to many thousands of years; these radioisotopes remain in the fuel assemblies and as components of the resulting spent nuclear fuel. Figure 1 shows what a fuel pellet for a commercial nuclear reactor and a fuel rod in an assembly look like.

Each fuel assembly is typically used in the reactor from 4 to 6 years, after which most of the uranium dioxide is no longer cost-efficient at producing energy. Reactor operators typically discharge about one-third of the fuel assemblies from a reactor every 18 months to 2 years and place the spent nuclear fuel in a pool to cool. Most commercial spent nuclear fuel is stored immersed in pools of water designed to cool and isolate it from the environment. Water circulates in the pool to remove the heat generated from the radioactive decay of some of the radioisotopes. In recent years, reactor operators have used fuel that is burned longer in the reactor under the same operating conditions, resulting in higher "burn-up" and higher decay heat in comparison to the lower burn-up fuel that had been in use. The pools of water for cooling spent nuclear fuel are typically about 40 feet deep, with at least 20 feet of water covering the spent fuel. Figure 2 shows a typical spent nuclear fuel pool.

Industry practice has been to store the spent nuclear fuel in the pools for at least 5 years, with an industry expectation that, at some point, DOE would begin to accept it. Spent nuclear fuel typically must remain in a pool for at least 5 years to decay enough to remain within the heat limits currently allowed for dry cask storage systems. Spent nuclear fuel can be sufficiently cool to load into dry storage systems earlier than 5 years, but doing so is generally not practical. Some dry storage systems, depending on the rated heat load, may not accommodate a full load of spent nuclear fuel. High burn-up fuel may have to remain in a pool longer than low burn-up fuel to cool sufficiently. The pools at commercial nuclear power reactors, however, have largely reached their maximum capacities. When spent nuclear fuel is discharged from a reactor at a plant whose pool is at maximum capacity, an equal amount of spent nuclear fuel must be transferred from the pool to dry storage to make room. The dry storage systems typically consist of either a thick-walled, bolted steel vertical cask, or a welded steel canister inside a vertical or horizontal steel-reinforced concrete cask.
Dry storage systems are designed with thick steel and concrete walls to provide radiation shielding and passive pathways, such as air vents in the casks, for removal of spent nuclear fuel decay heat. In one typical process of transferring spent fuel to dry storage, reactor operators place a steel canister inside a larger steel transfer cask and lower both into a pool. Spent nuclear fuel is loaded into the canister, a lid is placed on the canister, and then both the canister and transfer cask are removed from the pool. The transfer cask shields nearby workers from the radiation produced by the spent nuclear fuel in the canister. The water is drained and a lid is welded onto the canister. Then the canister and transfer cask are aligned with a storage cask and the canister is maneuvered into the storage cask. The transfer cask can be re-used. The storage casks, in either vertical or horizontal designs (see fig. 3), are usually situated on a large concrete pad with safety and security measures, such as radiation detection devices and intrusion detection systems.

NRC requires that spent fuel in dry storage be stored in approved systems that offer protection from significant amounts of radiation. NRC requires storage systems to demonstrate compliance with its regulations, including through physical tests of the systems, scaled physical tests, and computer modeling. Once a dry storage system is approved, NRC issues a certificate of compliance for a cask design or a specific license. Figure 4 shows spent nuclear fuel on a concrete pad in dry storage.

Most U.S. reactors were built during the 1960s and 1970s and, after a 40-year licensing period, have received a 20-year license extension, and some may apply for subsequent extensions. Nevertheless, these reactors may begin permanently shutting down in large numbers by about 2030 and emptying their pools by about 2040 absent additional license renewals. In the absence of a repository, the reactors' accumulated spent nuclear fuel will be "stranded" in a variety of different dry storage systems, with no easy way of repackaging the spent fuel should repackaging be required to meet future storage or disposal requirements. NRC regulations require radioactive contamination to be reduced at a reactor to a level that allows NRC to terminate the reactor's license and release the property for other use after a reactor shuts down permanently. This cleanup process is known as decommissioning.

Spent nuclear fuel is expected to accumulate at an average rate of about 2,200 metric tons per year in the United States, mostly in wet storage, but this rate and the amount in wet storage are expected to decrease as more reactors, as projected, begin to shut down in the 2030s. More specifically, according to data provided by the Nuclear Energy Institute, the rate of accumulation will be about 2,100 metric tons in 2031, decreasing to about 1,200 metric tons in 2040, about 200 metric tons in 2050, and less than 100 metric tons per year from 2051 through 2055, when the last of the currently operating nuclear power reactors is expected to shut down. These rates assume that, except for the few reactors that have announced early permanent shutdown dates, the nation's current reactors continue to operate through a 20-year extended license period without any further license extensions and continue to produce spent nuclear fuel at the same rate, and that no new reactors are brought online.
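The per-reactor discharge figures described earlier are broadly consistent with this national accumulation rate. As a rough, illustrative cross-check (our arithmetic, not a calculation from the report; the fleet size of about 100 operating reactors is an assumption), a minimal Python sketch:

    # Back-of-the-envelope cross-check; all inputs are the report's
    # approximate figures except the reactor count, which is assumed.
    core_load_tons = 100          # typical fuel load of an operating reactor
    fraction_discharged = 1 / 3   # share of assemblies discharged per refueling
    cycle_years = 1.5             # refueling every 18 months to 2 years (low end)
    operating_reactors = 100      # assumed U.S. fleet size

    per_reactor = core_load_tons * fraction_discharged / cycle_years
    national_rate = per_reactor * operating_reactors
    print(f"~{per_reactor:.0f} metric tons per reactor per year")
    print(f"~{national_rate:,.0f} metric tons per year nationally")
    # Output: ~22 metric tons per reactor per year and ~2,222 metric tons
    # per year nationally, in line with the ~2,200 metric tons cited above.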
Shutdown of Nuclear Reactors in the United States

Of the 118 nuclear reactors in the United States, 18 reactors have permanently shut down. Three of these reactors are located on a site that also has an operating reactor. The remaining 15 reactors are located on 12 sites where the reactors have been or will be dismantled and decontaminated. After this decommissioning, the site can be used for other purposes, and the only structures remaining at the site will be the spent nuclear fuel storage facility and the associated safety and security infrastructure.

By the end of fiscal year 2013, DOE reported that it had signed contracts with owners and generators of spent nuclear fuel involving 118 reactors. The spent nuclear fuel from these 118 reactors is stored at 75 sites in both wet and dry storage systems. Several reactors share the same site. For example, the Palo Verde site in Arizona has 3 operating reactors. In 2013, about 70 percent of accumulated spent nuclear fuel—about 50,000 metric tons—was stored in pools, with the remaining 30 percent—about 22,000 metric tons—in dry storage. Figure 5 shows the 75 spent nuclear fuel wet and dry storage sites, including shut-down sites, in the United States.

In the future, more spent nuclear fuel is expected to be put into dry storage for two reasons. First, since most spent nuclear fuel pools have reached their maximum capacities, reactor operators must transfer fuel from the pools to dry storage to make room for newer spent nuclear fuel, a time-consuming and costly process. Second, the amount of spent nuclear fuel transferred to dry storage is expected to increase as reactors shut down and their pools are closed. According to data from the Nuclear Energy Institute, by 2024, the proportion of spent fuel in wet storage and dry storage from currently existing reactors should be roughly equal—about 48,000 metric tons each. By 2040, about 70 percent of the spent fuel is expected to be in dry storage, or about 89,000 metric tons, compared to about 39,000 metric tons in wet storage. By 2067—after the last of the currently operating reactors have shut down—nearly all of the 139,000 metric tons of spent fuel expected to be generated by currently operating reactors is expected to be in dry storage. Figure 6 shows the estimated amounts of spent nuclear fuel accumulation and transfer from wet storage to dry storage through 2067.
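As an illustrative aside (our computation, using the rounded Nuclear Energy Institute figures cited above), the dry-storage share implied by these projections can be tallied in a few lines of Python:

    # Dry-storage share implied by the projections above (metric tons).
    projections = {
        2013: {"wet": 50_000, "dry": 22_000},
        2024: {"wet": 48_000, "dry": 48_000},
        2040: {"wet": 39_000, "dry": 89_000},
    }
    for year, inventory in projections.items():
        total = inventory["wet"] + inventory["dry"]
        print(f"{year}: total {total:,} t, dry share {inventory['dry'] / total:.0%}")
    # 2013: total 72,000 t, dry share 31%
    # 2024: total 96,000 t, dry share 50%
    # 2040: total 128,000 t, dry share 70%

These shares match the roughly 30, 50, and 70 percent figures in the text; by 2067, essentially all of the projected 139,000 metric tons would be in dry storage.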
Federal liability for managing spent nuclear fuel has been based on costs that owners and generators of this fuel have paid because DOE has not met its contractual obligations to begin disposing of this fuel, and DOE's estimate of future liability—$21.4 billion through 2071—is based on how long the federal government is expected to pay these costs. This estimate is based on DOE's strategy to begin accepting spent nuclear fuel in 2021 and assumes no delays in its schedule. Generally, the U.S. Court of Federal Claims has held that the federal government's liability covers the cost of managing the spent nuclear fuel that DOE was obligated to begin disposing of in 1998. The Department of Justice has also agreed to pay such costs as a result of settlement agreements. (See app. IV for additional information on the settlement agreements.) Under the standard contract, DOE was obligated to take title to and dispose of a certain quantity of spent nuclear fuel beginning in 1998. The order in which spent nuclear fuel was to be picked up is based on the order in which it was removed from reactors—or the oldest fuel first. The owner or generator was expected to pay continued storage costs for the spent nuclear fuel that DOE was not obligated to pick up.

Some of the costs for which the federal government has been liable have related to expanding wet storage of spent nuclear fuel. For example, the federal government has compensated owners and generators for replacing lower-density storage racks with higher-density storage racks in pools to increase the capacity of the pools. The capacity of nearly all the spent nuclear fuel pools in the country has been expanded to the extent practical, and according to NRC, most pools have been filled to their maximum prudent capacities.

Most of the costs for which the federal government has been liable, however, have pertained to dry storage of spent nuclear fuel. Some have been one-time costs, such as the cost of constructing a concrete pad for storing spent nuclear fuel once it has been transferred from the pools to dry canisters or casks. Constructing a concrete storage pad typically costs from about $5.5 million to $6.5 million, but can range higher if additional equipment or special design requirements are needed. Other dry storage costs are recurring; for example, the cost of the canisters themselves, which depends on the size of the canister and the type of spent nuclear fuel stored in it. Table 1 shows the differences in sizes of canisters, which typically cost from $700,000 to $1.5 million, and how the numbers of canisters for which the federal government may have to pay can vary from site to site. Another recurring cost is the cost of transferring spent nuclear fuel in the canisters from the pools to dry storage. Table 2 reflects typical costs associated with the transfer of spent nuclear fuel from wet to dry storage that may contribute to federal liabilities.
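These cost figures can be combined into a rough per-site range. The sketch below is illustrative only: the canister count is an assumption (actual counts vary widely by site, as table 1 shows), and the recurring transfer costs in table 2 are excluded:

    # Rough dry-storage cost range for a hypothetical site, using the
    # report's figures; the canister count is assumed for illustration.
    pad_cost = (5_500_000, 6_500_000)     # one-time concrete pad
    canister_cost = (700_000, 1_500_000)  # per canister, by size and fuel type
    num_canisters = 30                    # assumed; varies by site

    low = pad_cost[0] + num_canisters * canister_cost[0]
    high = pad_cost[1] + num_canisters * canister_cost[1]
    print(f"Pad plus {num_canisters} canisters: ${low / 1e6:.1f}M to ${high / 1e6:.1f}M")
    # Pad plus 30 canisters: $26.5M to $51.5M (excluding loading and other costs)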
DOE's estimate of future liability is based on how long DOE expects the federal government to continue to pay for managing spent nuclear fuel that DOE was obligated to have begun disposing of if it had begun picking up the fuel in 1998. DOE's most recent estimate of this liability is $21.4 billion through 2071. This estimate assumes that DOE will begin accepting spent nuclear fuel in 2021 and complete the process in 2071, ending the federal government's liability. However, DOE has previously extended the dates in its liability estimates several times. For example, in the fiscal year 2006 liability estimate, DOE estimated (1) that the federal liability was $6.9 billion, (2) that DOE would begin accepting spent nuclear fuel in 2017, and (3) that it would complete the process by 2055. Each such extension adds to the federal government's liability.

Experts and stakeholders in the area of spent nuclear fuel management, including DOE and other government officials, identified four major types of challenges to the federal government's ability to meet DOE's time frames for managing spent nuclear fuel at consolidated interim storage facilities. First, DOE does not have legislative authority to fully implement its strategy, although DOE officials said they are conducting some planning activities that are allowed by the NWPA. Second, the licensing process could take more time than DOE has allowed. Third, there are several technical challenges to transporting some spent nuclear fuel. Fourth, achieving sustainable public acceptance of transporting and storing spent nuclear fuel is a societal challenge that will need to be addressed irrespective of any policy that is implemented and that experts said DOE could mitigate with a coordinated outreach strategy. Figure 8 summarizes the four major types of challenges identified by experts and stakeholders with whom we spoke.

In general, experts identified the legislative challenges as critical to implementing DOE's strategy within the time frames proposed. In particular, the experts pointed out that new legislative authority is needed for developing interim storage that is not tied to Yucca Mountain, creating a new waste management organization, and providing predictable funding for carrying out spent nuclear fuel management. However, because Congress has not agreed on a new path forward for managing spent nuclear fuel since funding was suspended in 2010, and DOE officials have not proposed legislation requesting new authority, experts and stakeholders generally noted that obtaining specific legislative authority in time to meet DOE's proposed time frames might be challenging. As we reported in November 2009 and August 2012, the provisions of the NWPA that authorize DOE to arrange for consolidated interim storage have either expired or are unusable because they are tied to milestones in the development of a repository at Yucca Mountain that have not been met. DOE officials and experts from industry agreed with this assessment, and they noted that the federal government's ability to site, license, construct, and operate a consolidated interim storage facility is dependent upon new legislative authority. Some industry representatives we spoke with said that such authority would be needed by the end of 2014 to meet DOE's 2021 goal to begin operations at a pilot interim storage facility. DOE officials noted that their strategy is available to Congress and is intended to initiate discussions on developing a future path forward or future policy to be implemented. However, experts and stakeholders generally noted that there is no agreement between the House and Senate on a path forward, and therefore obtaining such authority in 2014 is unlikely. Pending such agreement, DOE has been planning what it can do and has undertaken some activities allowed under current authority to inform the development of an interim storage facility. For example, the agency is reviewing reports submitted by contractors in 2013 on design concept alternatives for a consolidated interim storage facility. DOE officials said, however, that the agency's strategy could not be fully implemented until Congress provides direction on a path forward.

DOE's strategy calls for a new waste management and disposal organization for managing spent nuclear fuel; however, Congress has not authorized such an organization. According to DOE, a new organization separate from DOE is needed to "provide stability, focus, and credibility to build public trust and confidence." Industry representatives and presenters at a Bipartisan Policy Center conference in 2014 agreed that DOE is not the right organization to implement its strategy because the public lacks confidence in the agency and its ability to move forward in managing spent nuclear fuel.
In addition, an expert from industry said that a new organization designed to implement a consent-based process would be better suited to site a consolidated storage facility—a view that is generally consistent with one of our past matters for congressional consideration noting that an independent organization, outside DOE, could be more effective in siting and developing a permanent repository for the nation's nuclear waste. Similarly, the Blue Ribbon Commission on America's Nuclear Future recommended establishing a new federal corporation or similar independent organization to implement, among other things, a consent-based siting process. Such an organization would be more effective because it would be less vulnerable to political interference, according to the commission. However, experts from industry told us that even with congressional authorization, such an organization would take time to create, in part, according to an industry expert, because a new organization would need to acquire personnel, implement a quality assurance program, and develop an implementation plan. These steps could take from 2 to 5 years, according to the experts.

In 2011, we reported that funding for DOE's work related to Yucca Mountain was unpredictable, noting that DOE's annual appropriations related to spent nuclear fuel management had varied by as much as 20 percent from year to year; further, DOE's average annual appropriations fell about $90 million short of the amount DOE requested each year. We reported that this unpredictability made long-term planning difficult, and we suggested that Congress may wish to consider whether a more predictable funding mechanism would enhance future spent nuclear fuel management efforts. According to experts from industry and community action groups we interviewed, having sufficient funds consistently available is essential for any spent nuclear fuel management effort. However, as noted earlier, there is no agreement on a path forward—including developing a more predictable funding mechanism. DOE reported in 2001 that the budgetary requirements enacted by Congress subsequent to the creation of the Nuclear Waste Fund reduced financial flexibility. These funding challenges were also echoed by the Blue Ribbon Commission, which recommended that a new, more predictable funding mechanism be developed.

Because the NRC licensing process is time-consuming, it may be difficult for DOE or an alternative waste management and disposal organization to begin operations at a pilot interim storage facility in 2021 and a consolidated interim storage facility in 2025, as called for in DOE's strategy. The NRC licensing process cannot begin until an interim storage site has been selected, which, as stated previously in this report, cannot happen under existing legislative authority. Some industry experts, as well as a scientist from a national laboratory, told us licensing an interim facility could be achieved within 5 to 7 years of site selection. Other industry experts and a community action group told us licensing will more likely be an 8- to 10-year process because of the potential for legal concerns raised by the public. Such concerns may originate with an environmental impact statement—such a statement is required before a license can be granted, and it allows for public input that may require adjudication. The NRC hearing process could also lengthen the licensing process.
For example, in 2006, after a 9-year licensing process, a consortium of electric power companies called Private Fuel Storage obtained an NRC license for a private consolidated storage facility on an Indian reservation in Utah. As we previously reported, the delay in licensing Private Fuel Storage was due to state opposition and to challenges raised during the license review process.

Experts described technical challenges that could be resolved with sufficient time. In particular, there are uncertainties regarding transportation of spent nuclear fuel, including the uncertainties related to the safety of high burn-up fuel during transportation, readiness of spent nuclear fuel to be transported under current guidelines, and sufficiency of the infrastructure to support transportation. In addition, there are uncertainties related to repackaging spent nuclear fuel for transportation. According to DOE officials, the agency is taking steps to begin addressing technical challenges, but not all of the challenges can be addressed until uncertainties regarding the path forward are resolved.

According to some industry experts, more information is needed about how to safely store and transport high burn-up fuel. Before 2000, most fuel discharged from U.S. nuclear power reactors was considered low burn-up fuel, and consequently, the industry has had decades of experience in storing and transporting it. In addition, the first dry storage canisters were loaded in 1986. One of these, containing low burn-up fuel, was opened 15 years later and the spent nuclear fuel inspected. The spent nuclear fuel was found to be in good condition, giving NRC additional confidence in the safe storage and transport of spent nuclear fuel. According to NRC, there has been considerable analysis performed on high burn-up fuel, but because it has only been used for about the past 10 years, there has been little testing performed on it. According to various reports from DOE, NRC, the Electric Power Research Institute, and the Nuclear Waste Technical Review Board, as well as experts we spoke with, uncertainties exist on how long high burn-up fuel can be stored and then still be safely transported. Once sealed in a canister, the spent fuel cannot easily be inspected for degradation.

Description of Concerns Related to High Burn-Up Fuel

According to the Nuclear Regulatory Commission (NRC), burn-up is considered as part of NRC's reviews of spent nuclear fuel cask designs because each dry storage system has limits on temperature and radiation, both of which are higher for high burn-up fuel. While in the reactor, hydrogen gas is generated and is absorbed by the cladding. Then, during the transfer from wet to dry storage, the spent nuclear fuel is loaded into a storage cask, which is removed from the storage pool and drained of water to dry the fuel. During the drying process, the fuel heats up, dissolving the hydrogen in the cladding structure. While the spent nuclear fuel is still hot and the cladding is still supple, there is little uncertainty in storing or transporting it, as long as temperature and radiation limits are met. However, as the spent nuclear fuel cools over extended periods in dry storage, the dissolved hydrogen can change the characteristics of the cladding and, if certain conditions exist, can cause the cladding to become brittle over time.
The extent of the changes in cladding depends on the burn-up of the fuel, the type of cladding, and the temperatures reached during the drying process, and needs to be accounted for in the storage and transportation of high burn-up spent nuclear fuel. There is not the same level of concern about changes in the cladding causing brittleness with low burn-up fuel because not as much hydrogen becomes dissolved in the cladding. In addition, more data are available on the performance of low burn-up fuel during storage, since low burn-up fuel has been stored for long periods.

As of August 2014, NRC officials told us that they had analyzed laboratory tests and models developed to predict the changes that occur during dry storage and that the results indicate that high burn-up fuel will maintain its integrity over very long periods of storage and can eventually be safely transported. However, NRC officials said they continue to seek additional evidence to confirm their position that long-term storage and transportation of high burn-up spent nuclear fuel is safe. In an effort to obtain evidence confirming the test and modeling results, DOE and the Electric Power Research Institute have planned a joint development project to load a special dry cask storage system with high burn-up fuel in mid-2017 and, using instrumentation built into the cask, monitor the spent nuclear fuel over a period of about 10 years. At that point, DOE and the Electric Power Research Institute expect to transport the canister to an appropriate facility and open it to inspect the high burn-up fuel and its cladding, as well as the cask, for any indication of damage or degradation. According to DOE and the Electric Power Research Institute, the project's execution has been planned through 2018, and many future elements of the project still need to be developed.

Experts and stakeholders expressed various views regarding the level of uncertainty of the safe transportation of high burn-up fuel after long-term storage. But one theme conveyed during our discussions with them was that high burn-up fuel would continue to be placed in dry storage canisters without knowing how long the spent nuclear fuel would be stored or how safe future transportation would be. DOE officials stated that their strategy would not involve transportation of large amounts of high burn-up fuel until at least 2025 and that even then, there is likely going to be enough low burn-up fuel to ship for the first several years, giving more time for the development project to yield results.

Because the guidelines governing dry storage of spent nuclear fuel allow higher temperatures and external radiation levels than guidelines for transporting the fuel, some of the spent nuclear fuel in dry storage may not be ready to be moved to an interim storage facility in time to meet DOE's time frames. For example, according to the Nuclear Energy Institute, as of 2012, only about 30 percent of spent nuclear fuel currently in dry storage is cool enough to be directly transportable. For safety reasons, transportation guidelines do not allow the surface of the transportation cask to exceed 185 degrees Fahrenheit (85 degrees Celsius) because the spent nuclear fuel travels through public areas using the nation's public transportation infrastructure. NRC's guidelines on spent nuclear fuel dry storage, by contrast, limit spent nuclear fuel temperature to 752 degrees Fahrenheit (400 degrees Celsius). Reactor sites, where spent nuclear fuel dry storage systems are typically found, are usually in secluded areas with a buffer zone between the spent nuclear fuel and the public. Dry storage sites are also usually located so that workers and the public have minimal exposure to radiation from the spent nuclear fuel.
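As a quick illustrative cross-check (ours, not the report's), the Fahrenheit and Celsius values cited for these two limits are consistent, as a minimal Python sketch shows:

    # Verify the two temperature limits cited above; purely illustrative.
    def f_to_c(deg_f: float) -> float:
        """Convert degrees Fahrenheit to degrees Celsius."""
        return (deg_f - 32) * 5 / 9

    limits_f = {"dry storage (fuel)": 752, "transport (cask surface)": 185}
    for label, limit in limits_f.items():
        print(f"{label}: {limit} F = {f_to_c(limit):.0f} C")
    # dry storage (fuel): 752 F = 400 C
    # transport (cask surface): 185 F = 85 C

The wide gap between the two limits underlies the cooling-time and repackaging issues discussed below.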
According to experts from industry, spent nuclear fuel has typically been stored in large dry storage canisters to maximize storage capacity and minimize costs. Spent nuclear fuel stored in the large canisters, however, may have temperatures that exceed the limits of transportation guidelines. Scientists from the national laboratories and experts from industry suggested three options for dealing with the stored spent nuclear fuel so it can be transported safely: (1) the spent nuclear fuel may be left to cool and decay at reactor sites, (2) the spent nuclear fuel may be repackaged into smaller canisters that reduce the heat and radiation, or (3) a special transportation "overpack" may be developed to safely transport the spent nuclear fuel in the current large canisters. Because Congress has not agreed on a new path forward since funding was suspended in 2010, it may be difficult to assess the options.

In a 2013 report, DOE states that the preferred mode for transporting spent nuclear fuel to a consolidated interim storage facility would be rail. However, several experts from industry pointed out that not all of the spent nuclear fuel currently in dry storage is situated near rail lines; also, one of these experts said that procuring qualified rail cars capable of transporting spent nuclear fuel will be a lengthy process. Storage sites without access to a rail line may require upgrades to the transportation infrastructure or alternative modes of transportation to the nearest rail line. Constructing new rail lines or extending existing rail lines could be a time-consuming and costly endeavor. In addition, an industry official noted that if spent nuclear fuel were trucked to the nearest rail line, the federal government would have to develop a safe method of transferring the spent nuclear fuel from heavy-haul trucks onto rail cars. In September 2013, DOE completed a preliminary technical evaluation of options available and needed infrastructure for DOE or a new waste management and disposal organization to transport spent nuclear fuel from shut-down sites to a consolidated interim storage facility. According to DOE officials, there is currently no need to make a decision regarding how best to move forward with the study results because there is, at this time, no site and no authorization to site, license, construct, and operate a consolidated interim storage facility.

DOE's Preliminary Evaluation of Removing Spent Nuclear Fuel from Shutdown Sites

DOE's "Preliminary Evaluation of Removing Used Nuclear Fuel from Shutdown Sites" looked at 12 permanently shut-down sites: Maine Yankee, Yankee Rowe, Connecticut Yankee, Humboldt Bay, Big Rock Point, Rancho Seco, Trojan, La Crosse, Zion, Crystal River, Kewaunee, and San Onofre. The evaluation found that some shut-down sites, because the facilities have been decommissioned and lack onsite infrastructure, required multiple transportation modes, such as heavy-haul truck to rail, barge to rail, and, in the cases of Kewaunee and Humboldt Bay, potentially heavy-haul truck to barge to rail.
For example, Trojan's rail spur was removed during the decommissioning process, and Big Rock Point will probably need to truck its spent nuclear fuel 52 miles to Gaylord, Michigan. Some of the sites evaluated have no rail spur, and heavy-haul trucking will be required for anywhere from 7.5 miles to potentially 260 miles, depending on the site.

A DOE official and experts from industry also told us that DOE needs to begin procuring qualified rail cars capable of transporting spent nuclear fuel. DOE officials stated that the agency is beginning to request information from the rail car industry on rail car design, testing, and approval for the transportation of commercial spent nuclear fuel and anticipates getting responses from interested parties by July 2014. According to an industry representative, the Association of American Railroads established the S-2043 standard, which sets higher standards for transportation of spent nuclear fuel than for normal rail operations. S-2043 requires, for example, on-board safety protection technology unique to spent nuclear fuel shipments and high-performance structural upgrades to accommodate the extra weight of spent nuclear fuel as well as the transportation cask, which, according to the industry representative, can weigh up to 500,000 pounds. Industry experts said it may be challenging to design and fabricate rail cars that meet the S-2043 standard for transporting heavy, fully loaded spent nuclear fuel canisters within DOE's time frames. According to industry experts, it will take, at a minimum, 2 years to design and fabricate a new rail car, in addition to an extensive quality control process to ensure the car meets the S-2043 standard. For example, an industry representative estimated that the entire rail car procurement process may take up to 9 years, and another agreed that it may be feasible to begin to move the spent fuel by 2025 if DOE initiates the necessary planning soon. Another option would be for DOE to use what the U.S. Navy has learned from its rail car designs for its nuclear navy program, which also generates spent nuclear fuel. DOE officials told us they would consider this suggestion as they proceed.

According to experts, some spent nuclear fuel in dry storage may need to be repackaged before it can be transported, and the repackaging is likely to be costly and difficult to accomplish. More specifically, as noted earlier, some dry storage canisters may be too hot or radioactive to meet transportation regulations. If the decision were made to transport the spent nuclear fuel before it had a chance to cool sufficiently, the canisters may have to be repackaged into smaller canisters that meet the transportation regulations. In addition, an industry expert and an expert from a community action group told us repackaging may also be required if current storage canisters, or the spent nuclear fuel in them, have degraded. For example, we previously reported that canisters are likely to last about 100 years, after which the spent nuclear fuel may have to be repackaged because of canister degradation. By the time such repackaging might be needed, reactor operators may no longer have pools or the necessary infrastructure to undertake the repackaging. According to DOE, under provisions of the standard contract, the agency does not consider spent nuclear fuel in canisters to be an acceptable form for waste it will receive. This may require utilities to remove the spent nuclear fuel already packaged in dry storage canisters.
Nuclear power reactors that have closed their spent nuclear fuel pools would likely have to transport their spent nuclear fuel to operating reactors with pools, build a new pool, or repackage the fuel in a yet-to-be-developed dry transfer facility. Cutting open these canisters is likely to be an expensive process for utilities that would require using specialized equipment to open welded lids, placing the assemblies back into a wet pool, and then packaging the assemblies into a new cask that DOE would provide under the terms of the standard contract. Such a process would require personnel and equipment, increase worker radiation exposure, increase the potential for fuel damage, produce additional low-level waste, and may, according to an industry expert, hinder electricity-generating activities. In addition, another industry expert expressed concerns that re-wetting and re-drying during the repackaging process may lead to cladding degradation issues. However, NRC officials stated that prior to NRC's granting a storage license, each system is analyzed by the applicant and reviewed by the NRC to ensure it can be re-flooded safely with no damage to the spent fuel. Until decisions have been made regarding a new path forward—storage and disposal—repackaging requirements will remain uncertain. However, according to scientists from national laboratories, given a path forward, technical challenges such as repackaging can be overcome with time and sufficient funding.

Experts and stakeholders with expertise in spent nuclear fuel management identified achieving sustainable public acceptance as a challenge that needs to be overcome to implement a spent nuclear fuel management program, and they stated that public acceptance cannot be achieved without a coordinated outreach strategy. Reports spanning several decades cite societal and political opposition as key obstacles to siting and building a permanent repository for disposal of spent nuclear fuel. For example, in 1982, the congressional Office of Technology Assessment reported that public and political opposition were key factors in siting and building a repository. The National Research Council of the National Academies reiterated this conclusion in a 2001 report, stating that the most significant challenge to siting and commencing operations at a repository is societal. Our analysis of stakeholder and expert comments indicates that the societal and political factors opposing a repository are the same for a consolidated interim storage facility. This lesson has been borne out in efforts to site, license, build, and commence operations at a consolidated interim storage facility. As we reported in 2011 and as experts and stakeholders reiterated, it may be possible to find a willing community to host a consolidated interim storage facility, but obtaining and sustaining state support may be more difficult because of broader state constituencies and state-federal relations. For example, the Office of the Nuclear Waste Negotiator, established by the NWPA amendments in 1987, tried to broker an agreement for a community to host a repository or interim storage facility. Two negotiators worked with local communities and Native American tribes for several years, but neither was able to conclude a proposed agreement with a willing community by January 1995, when the office's authority expired. In one siting effort, in 1992, a county in Wyoming sought to host a consolidated interim storage facility, but the Wyoming governor stopped that effort.
The governor expressed concerns that despite the assurances of federal officials, even those with "personal integrity and sincerity," he could not be sure that the federal government's attitudes or policies would remain the same over the next 50 years or that the state would have any future say in the program. The experience of Private Fuel Storage is another example, in which a consortium of owners and generators of spent nuclear fuel found a willing community to host a consolidated interim storage site on the Goshute Indian reservation in Utah. The state of Utah opposed the effort, and although the Private Fuel Storage site received a license in 2006, operations never began there because of ongoing legal battles and land use issues. A spokesperson for the State of Utah stated that if the owners and generators renewed their efforts to begin operations at Private Fuel Storage, Utah would continue to fight the effort. Furthermore, in 2014, the Western Governors Association—an association representing 19 western states—passed a resolution stating that the governor of a state must agree in writing if an interim storage site for spent nuclear fuel is to be considered in the state. Experts and stakeholders we spoke with reiterated this position, stating that states, and many local communities, are concerned that a consolidated interim storage site could become a de facto permanent storage site.

In 2011, we reported that no nation had ever succeeded in building a permanent repository for spent nuclear fuel, in part due to societal concerns, and that there was no model or set of lessons that would guarantee success in such a complex, decades-long endeavor. Based on our discussions with experts and stakeholders and a review of relevant documents on spent nuclear fuel management, those same societal concerns apply to building a consolidated interim storage facility. However, as we reported in 2011, efforts to site and commence operations at the Waste Isolation Pilot Plant in New Mexico succeeded largely because a contractor addressed public opposition to the facility. Specifically, the contractor involved local communities situated along the transportation routes throughout the state, providing education and training programs and equipment related to the safe transportation of radioactive waste. The project might have ended because of state opposition if DOE had not conceded some oversight authority to the state.

In our discussions with experts and stakeholders, one common theme was apparent: public acceptance cannot be achieved without a coordinated outreach strategy, which would include components such as transparent transportation planning and a defined consent-based process. According to these experts and stakeholders, a coordinated outreach strategy could include, among other things, sharing information with specific stakeholders and the general public about DOE's ongoing activities related to managing spent nuclear fuel. The experts and stakeholders said that DOE has no coordinated outreach strategy, which DOE officials confirmed. A coordinated outreach strategy would be an important aspect of informing the public about spent nuclear fuel management plans and DOE's current management efforts, irrespective of which path Congress agrees upon.

DOE officials confirmed that they do not have a coordinated outreach strategy for communicating with specific stakeholders and the general public about their activities related to spent nuclear fuel management.
Instead, DOE officials said they communicate regularly with certain stakeholders. For example, DOE officials told us that they actively obtain the input of state and tribal representatives on spent nuclear fuel transportation-planning issues. In addition, DOE officials told us they plan to continue to participate in meetings with technical experts and industry stakeholders and to post information online. For example, DOE officials made several presentations on issues related to storage, transportation, and disposal at the Nuclear Waste Technical Review Board's meeting in November 2013. Some experts we spoke with were critical of DOE's efforts to involve stakeholders. For example, experts who represent multi-state, regional organizations active in spent nuclear fuel transportation planning said DOE has not been transparent or effective in its communication with stakeholders. Furthermore, although DOE had issued fact sheets on spent nuclear fuel management related to its work on Yucca Mountain, DOE has not recently developed spent nuclear fuel information for the general public, such as a fact sheet explaining issues related to transporting spent nuclear fuel, nor does DOE have plans to do so in the upcoming fiscal year. DOE officials said that they have recently stepped up their outreach efforts with stakeholders. For example, in April 2014, DOE completed a Draft National Transportation Plan for moving spent nuclear fuel.

DOE does post information online and has two websites devoted to spent nuclear fuel issues. However, we found that the information on those sites either does not describe the agency's ongoing activities or is not easily accessible. DOE also has an online library where it has posted selected technical studies; however, the agency has done little to explain how it plans to use these studies. DOE officials acknowledged that they could better explain the studies posted. They indicated that it is premature to conduct more extensive public outreach until the timing and logistics of transporting spent nuclear fuel have been determined. Experts from multi-state organizations disagree, stating that if DOE does not engage the public soon, then DOE could appear to be unilaterally making decisions without considering public input.

According to experts from industry and an entity representing state regulatory agencies, DOE officials have not defined a consent-based process or engaged interested communities in discussions on hosting a consolidated interim storage facility. An expert from an entity representing state regulatory agencies said that in order to consent, potential host communities need to know what consent entails, both in the short and long term, including the perceived risks and benefits and the planned duration and proposed capacity of the facility, before any decisions can be made. Furthermore, a few experts told us that they had heard of some community representatives who approached DOE to express their interest in hosting a storage facility, but DOE did not engage in substantive discussions with them. DOE has requested funding in its fiscal year 2014 and 2015 budget requests to plan for consent-based siting, but DOE officials said they had not developed a formal siting process for an interim storage facility.
A 2013 report by the University of Oklahoma, in collaboration with Sandia National Laboratories, documented the importance of social media, such as Twitter postings and Google searches, as public attention to both nuclear energy and nuclear waste management spiked immediately after the earthquake and tsunami struck the Fukushima Daiichi nuclear power plant complex in Japan on March 11, 2011, causing widespread concern about a potential release of radiation. Furthermore, according to experts and stakeholders, social media have been used effectively to provide information to the public through coordinated outreach efforts by organizations with an interest in spent nuclear fuel policy. Some of these organizations oppose DOE's strategy, and the information they distribute reflects their position. Without a coordinated outreach strategy of its own, including social media, DOE does not effectively provide a forum to share information and offer greater transparency. DOE promotes the use of social media platforms to engage in open discussions about energy issues, but has posted only a few reports and descriptions of spent nuclear fuel management on its web page. DOE has made efforts to share information about its ongoing spent nuclear fuel management activities by presenting at technical meetings and posting information on its website. Sharing information is made more difficult for DOE by uncertainties about the future path of spent nuclear fuel management.

DOE intends for its strategy to provide an initial basis for discussions on a sustainable path forward. Until the federal government proceeds with a new policy to meet its contractual obligations, the federal government's litigation-related liability for spent nuclear fuel management will continue to grow. Our analysis of expert and stakeholder comments indicates four types of challenges to implementing DOE's January 2013 strategy to manage spent nuclear fuel within the time frames projected. DOE has begun to address aspects of these challenges, but the challenges cannot be fully addressed until uncertainties regarding a path forward are resolved. The societal challenge of building and sustaining public acceptance of the federal government's spent nuclear fuel management activities, however, will need to be addressed irrespective of the path the federal government agrees upon. Unless and until there is a broad understanding of the issues associated with transporting spent nuclear fuel and managing it at consolidated facilities, specific stakeholders and the general public may be unlikely to support any spent nuclear fuel management program that is decided on in the future. According to experts and stakeholders, organizations that oppose DOE's strategy have reached the public by effectively using social media to promote their positions. In contrast, DOE currently has no coordinated outreach strategy. Without one, specific stakeholders and the general public may not have complete and accurate information about the agency's activities, making it more difficult for the federal government to move forward with any policy to manage spent nuclear fuel and address federal liability.

To help achieve and sustain public acceptance for future spent nuclear fuel management efforts, the Secretary of Energy should develop and implement a coordinated outreach strategy for providing information to specific stakeholders and the general public on federal activities related to managing spent nuclear fuel.
We provided DOE with a draft of this report for review and comment. In written comments, which are reproduced in appendix VI, DOE generally agreed with our findings and the recommendation in our report. DOE said that it plans to continue to engage states, tribes, and other stakeholders regarding planning for future transportation of spent nuclear fuel and that it would improve its outreach to the general public. DOE also said it would make an effort to provide the public with more complete information about its ongoing activities and the issues associated with spent fuel management. DOE also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Administrator of the Environmental Protection Agency, the Attorney General, the Chairman of the Nuclear Regulatory Commission, the Secretary of Transportation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To describe the expected rate of spent nuclear fuel accumulation in wet and dry storage, we obtained data from the Nuclear Energy Institute, an industry policy umbrella organization. These data included information on the amounts of spent nuclear fuel currently in wet and dry storage, as well as the projected amounts that will be added to that inventory as fuel is removed from each reactor until the reactor is permanently shut down. We updated the data as necessary to account for, among other things, reactors shutting down early. In describing the rate at which spent nuclear fuel accumulates, we assumed that, except for the few reactors that have announced early permanent shutdown dates, the nation’s current reactors continue to operate through a 20-year extended license period and continue to produce spent nuclear fuel at the same rate; that no new reactors are brought online; and that the generation of spent nuclear fuel declines as reactors shut down. To ensure the accuracy of our estimates, we provided selected sections of the draft report to representatives from the Nuclear Energy Institute for their review and comment. We incorporated their comments, as appropriate, in the final report. To identify the basis of federal liability for spent nuclear fuel management to date and of DOE’s estimate of future liabilities, we reviewed documents from the Departments of Energy (DOE) and Justice. These documents included a generic Department of Justice settlement agreement, DOE’s annual memorandums that described the liability estimate, and DOE’s annual financial reports, which include figures on the cumulative amounts paid to settle with utilities for DOE’s inability to accept spent nuclear fuel for disposal and on the estimated remaining liability. In addition, we interviewed officials from DOE and the Department of Justice.
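As a rough illustration of the accumulation assumptions described above—each reactor generating spent nuclear fuel at a constant rate until its (possibly extended) license expires, announced early shutdowns honored, and no new reactors brought online—a minimal sketch follows. It is not GAO’s or the Nuclear Energy Institute’s actual model, and the per-reactor rates and shutdown years shown are hypothetical placeholders.

```python
# Minimal sketch of the projection assumptions described above (illustrative
# only; not GAO's or the Nuclear Energy Institute's model). Each reactor adds
# spent nuclear fuel at a constant annual rate until it permanently shuts
# down, and no new reactors come online. All reactor data are hypothetical.

reactors = [
    # (annual spent fuel generated in metric tons, final year of operation)
    (20, 2032),
    (22, 2045),
    (18, 2014),  # a reactor with an announced early shutdown date
]

def tons_added(start_year: int, end_year: int) -> int:
    """Metric tons added to wet and dry storage between two years, inclusive."""
    total = 0
    for annual_rate, shutdown_year in reactors:
        operating_years = max(0, min(end_year, shutdown_year) - start_year + 1)
        total += annual_rate * operating_years
    return total

# Annual additions decline as reactors shut down; the cumulative total grows.
print(tons_added(2014, 2014))  # additions in a single year: 60
print(tons_added(2014, 2067))  # cumulative additions over the horizon: 1102
```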
We also relied on our prior work (GAO-10-48) to provide information on the typical costs that may contribute to the federal government’s future liabilities. To ensure that we had complete and accurate information on these costs, we provided selected sections of the draft report to representatives from the Nuclear Energy Institute for their review and comment. We incorporated their comments, as appropriate, in the final report. To assess the challenges, if any, that experts and stakeholders have identified to the federal government’s ability to meet DOE’s time frames for managing spent nuclear fuel at consolidated interim storage facilities and potential ways for DOE to mitigate the challenges, we identified individuals with spent nuclear fuel management experience and expertise. In our prior work (GAO-10-48), we had already identified experts in spent nuclear fuel management. Starting with this group and using a snowballing technique, we asked experts about their own expertise and also asked each expert or stakeholder to recommend other individuals whom we might consider including in our discussions. We determined that we had a sufficient sample of relevant experts and stakeholders when the names of experts and stakeholders recommended to us became repetitive and when we determined that we had a balanced set of viewpoints represented. In total, we interviewed over 90 individuals, including federal officials, who represented a wide range of viewpoints and expertise. However, our selection of experts is nongeneralizable, in that opinions cannot be generalized to other experts or tallied, either within or across types of expertise. Once we identified these individuals, we contacted them and confirmed their familiarity with the issues. Before we began each interview, we asked the individual to provide information on his or her background, including education, employment history, and experiences related to spent nuclear fuel management, to assess his or her level of expertise. In addition, we asked all interview participants to self-assess their expertise in various aspects of spent nuclear fuel management, such as the political, technological, and regulatory issues related to spent nuclear fuel management. Using our professional judgment, we assessed the level of expertise of each individual in the different issues we considered in our analysis. Opinions of experts on a topic outside their own area of expertise are sometimes presented as the opinions of “stakeholders.” In some cases, the same individual might be considered an expert on one specific issue but a stakeholder on another issue. Generally, the experts and stakeholders represented their organizations’ views.
The experts and stakeholders we consulted included: officials from DOE headquarters and field offices, and scientists from several national laboratories, including Argonne National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories; officials from other federal agencies and organizations involved with spent nuclear fuel management activities, including the Department of the Navy, the Nuclear Regulatory Commission, and the Nuclear Waste Technical Review Board; state, local, and regional governments or organizations, including the states of Nevada, Texas, and Utah; the State of Minnesota Public Utilities Commission; Eddy County, New Mexico; and Nye County, Nevada; Council of State Governments, Eastern Regional Conference; National Association of Regulatory Utility Commissioners; the National Conference of State Legislatures; Southern States Energy Board; The Council of State Governments, Midwestern Office; Western Governors’ Association; Western Interstate Energy Board; industry, including AHL Consulting; AREVA; Association of American Railroads; Chicago Bridge & Iron; Dairyland Power Cooperative; Dominion; Duke Energy; Energy Resources International, Inc.; Energy Solutions, Inc.; Exelon Corporation; Kouts Consulting; L. Barrett Consulting; Nuclear Energy Institute; Nuclear Waste Strategy Coalition; Pillsbury Winthrop Shaw Pittman LLP; PSEG Nuclear, LLC; Tennessee Valley Authority; The Brattle Group; Governmental Strategies, Inc.; The Yankee Nuclear Power Companies: Yankee Atomic, Connecticut Yankee, and Maine Yankee; Van Ness Feldman LLP; and Xcel Energy; representatives from a range of interest groups, including Beyond Nuclear, Institute for Energy and Environmental Research, Natural Resources Defense Council, Nuclear Information and Resource Service, Southwest Research and Information Center, Union of Concerned Scientists, and U.S. Chamber of Commerce – Institute for 21st Century Energy; and independent entities, including Black Mountain Research; Carnegie Institution for Science; Kadak Associates; Leroy Law Office; National Research Council, National Academy of Sciences; TA Frazier LLC; and University of Oklahoma. To ensure that we asked consistent questions of all the identified experts and stakeholders, we developed a data collection instrument that included broad questions related to the challenges, if any, to the federal government’s ability to meet DOE’s time frames for accepting spent nuclear fuel at consolidated interim storage facilities. We pretested the instrument with a few individual experts and stakeholders to ensure that our questions were clear and would provide us with the information that we needed. After each pretest, we refined the instrument accordingly. We analyzed the interviews to identify consistent themes and issues that emerged. See appendix II for a list of the experts and stakeholders whom we interviewed and their affiliations. In addition to the interviews, we reviewed relevant documents, such as DOE’s January 2013 strategy for the management and disposal of used nuclear fuel and high-level waste and the January 2012 report to the Secretary of Energy by the Blue Ribbon Commission on America’s Nuclear Future. We also reviewed documents prepared by the organizations that we interviewed and attended conferences sponsored by relevant organizations, including the U.S.
Nuclear Waste Technical Review Board’s Technical Workshop on the Impacts of Dry Storage Canister Designs on the Future Handling, Storage, Transportation, and Geologic Disposal of Spent Nuclear Fuel in the United States; the Nuclear Energy Institute’s Used Fuel Management Conference; and the Bipartisan Policy Center’s regional meeting on Identifying a Path Forward on America’s Nuclear Waste. We also interviewed officials from the Department of Transportation and the Environmental Protection Agency. According to officials from these agencies, each agency has a specified role with respect to regulating the transportation and interim storage of spent nuclear fuel. According to Department of Transportation officials, the agency coordinates and shares responsibility with the NRC on issues related to transporting spent nuclear fuel, as stipulated in a memorandum of understanding between the two agencies. According to Environmental Protection Agency officials, the agency has a regulatory framework in place for storage of spent nuclear fuel that will allow NRC to license interim storage facilities. We conducted this performance audit from November 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Spent Nuclear Fuel Experts and Stakeholders We Interviewed (a table listing each expert and stakeholder we interviewed by name and affiliation; affiliations include the Department of Energy, the Nuclear Regulatory Commission, the Nuclear Waste Technical Review Board, Oak Ridge National Laboratory, and the other organizations identified above) Appendix III: Commercial Spent Nuclear Fuel Stored in Wet, Dry, and Shutdown Storage Sites (Text for Interactive Figure 5) 1. Beaver Valley 2. Callaway 3. Clinton 4. Fermi 5. Shearon Harris 6. Pilgrim 7. South Texas Project 8. Virgil C. Summer 9. Three Mile Island 10. Watts Bar 11. Wolf Creek 12. GE Morris 1. Arkansas Nuclear One 2. Braidwood 3. Browns Ferry 4. Brunswick 5. Byron 6. Calvert Cliffs 7. Catawba 8. Columbia Generating Station 9. Comanche Peak 10. Cooper 11. Davis-Besse 12. Diablo Canyon 13. Donald C. Cook 14. Dresden 15. Duane Arnold 16. Joseph M. Farley 17. James A. FitzPatrick 18. Fort Calhoun 19. R.E. Ginna 20. Grand Gulf 21. Edwin I. Hatch 22. Hope Creek 23. Indian Point 24. La Salle 25. Limerick 26. McGuire 27. Millstone 28. Monticello 29. Nine Mile Point 30. North Anna 31. Oconee 32. Oyster Creek 33. Palisades 34. Palo Verde 35. Peach Bottom 36. Perry 37. Point Beach 38. Prairie Island 39. Quad Cities 40. River Bend 41. H. B. Robinson 42. St. Lucie 43. Salem 44. Seabrook 45. Sequoyah 46. Surry 47. Susquehanna 48. Turkey Point 49. Vermont Yankee 50. Vogtle 51.
Waterford 1. Big Rock Point 2. Maine Yankee 3. Yankee Rowe 4. Haddam Neck 5. Crystal River 6. Kewaunee 7. La Crosse 8. Zion 9. Trojan 10. Humboldt Bay 11. Rancho Seco 12. San Onofre 1. Alabama 2. Arizona 3. Arkansas 4. California 5. Connecticut 6. Florida 7. Georgia 8. Illinois 9. Iowa 10. Kansas 11. Louisiana 12. Maine 13. Maryland 14. Massachusetts 15. Michigan 16. Minnesota 17. Mississippi 18. Missouri 19. Nebraska 20. New Hampshire 21. New Jersey 22. New York 23. North Carolina 24. Ohio 25. Oregon 26. Pennsylvania 27. South Carolina 28. Tennessee 29. Texas 30. Vermont 31. Virginia 32. Washington 33. Wisconsin In addition, there are three permanently shutdown reactors at sites that continue to have operating reactors. The sites that have both shutdown and operating reactors include the Dresden, Indian Point, and Millstone sites. Of the 12 shutdown reactor sites, the Zion site has two permanently shutdown reactors and the San Onofre site has three permanently shutdown reactors. Also, the operator at the Vermont Yankee site has announced that it plans to shut down the reactor at the end of 2014. The settlement agreements between the Department of Justice and the owners or generators of spent nuclear fuel have not been identical and have changed over time. For example, from 2004 through 2009, the Department of Justice settled with six owners and generators representing 40 of the 118 reactors covered under the standard contract. Under the settlement agreements—known as the Exelon Settlement agreements—it was assumed that the Department of Energy (DOE) would have accepted spent nuclear fuel for disposal at a rate of 900 metric tons per year from 1998 through 2014, and at a rate of 2,100 metric tons per year thereafter. Under the Exelon Settlement agreements, DOE is liable for spent nuclear fuel storage costs that owners and generators would not have incurred if DOE had accepted and disposed of the fuel at this rate. Beginning in 2011, the Department of Justice began using a new settlement agreement—called the New Framework Settlement agreement. As of September 8, 2014, the Department of Justice reported that it has executed New Framework Settlement Agreements with 20 litigants representing 45 reactors covered under the standard contract. These settlement agreements do not supersede the Exelon agreements, which remain effective for the parties that settled through 2009. The New Framework Settlement agreements assumed a higher rate of acceptance of spent nuclear fuel, based on an appellate decision. Specifically, the new settlement agreements assumed that DOE would have accepted spent nuclear fuel for disposal at a rate of 1,200 metric tons per year from 1998 through 2002; 2,000 metric tons in 2003; 2,650 metric tons per year from 2004 through 2007; and 3,000 metric tons per year beginning in 2008. The result is that some types of spent nuclear fuel storage costs for which owners and generators were deemed responsible under the Exelon Settlement agreement are now a DOE liability under the new settlement agreements. According to a Department of Justice document used to discuss settlements for the owners and generators covered by the New Framework Settlement agreements, there are five categories of reimbursable costs: 1. 
Additional Pool Storage: Costs to purchase, license, and install new, additional, or replacement storage racks or to make available additional storage spaces to the extent, and only to the extent, necessary to provide additional capacity in the spent nuclear fuel pool at the site. 2. Dry Storage Costs: Costs to purchase canisters and casks, including canisters that may be licensed for transport and casks for transferring spent nuclear fuel to the dry storage facility; costs to load spent nuclear fuel into and to transport canisters and casks to the dry storage facility; costs of ancillary equipment for casks and cask loading, such as crawler-type transporters, dollies, and vacuum-drying equipment; costs to conduct initial loading demonstrations required by the Nuclear Regulatory Commission (NRC); costs for training and development of procedures; costs for cask-loading campaign mobilization and demobilization; costs to study and to evaluate spent nuclear fuel storage options; costs for quality assurance inspections of cask vendors; costs for security improvements required by NRC for the dry storage facility; costs of maintaining and operating the dry storage facility; costs for security improvements or upgrades required to comply with utility’s security plan approved by the NRC; and costs to design, license and build the dry storage facility, including costs of building the portion of the facility that will be required for the dry storage of the utility’s spent nuclear fuel in addition to utility’s allocations, provided that the utility can demonstrate that it was more cost effective to incur the costs to design, license and build the dry storage facility during the claim period rather than after termination of the agreement. If the utility previously constructed a dry storage facility for reasons other than to store the utility’s allocations or needs to place, or places, items other than canisters or casks containing the utility’s allocations in dry storage, only the costs attributable to the portion of the dry storage facility needed to store the utility’s allocations will be allowable. 3. Modifications of the Existing Plant: Costs paid to modify cranes to the extent, and only to the extent, necessary to increase the rated lifting capacity of the crane(s) used in the loading of spent nuclear fuel from the fuel storage pool, provided that the utility can establish that these modifications would not have been necessary to meet the requirements of NUREG-0612 or load spent nuclear fuel in casks or canisters provided by DOE had DOE begun performance in 1998; building modifications that the utility can establish would not have been necessary to load spent nuclear fuel into casks or canisters provided by DOE (e.g., seismic restraints for fuel pool or upgrades to floor of cask-loading area); and costs to improve the haul path from the fuel building to the dry storage facility, to the extent that the haul path is different from the path that the utility would have used to deliver fuel to DOE. If the utility incurs costs for site modifications or equipment purchases to store the utility’s allocations that otherwise benefit the operation of the plant, including crane modifications for purposes other than loading storage canisters or casks, the cost reimbursed will be proportional to the benefit to the operation of the plant. 4. 
Property Taxes: Costs paid as a result of any increase in assessed property tax resulting from and traceable to projects, as identified in the preceding three paragraphs, that were undertaken to provide additional storage for the utility’s allocations. 5. Labor and Overhead: The cost of labor charged directly by the utility’s employees to any project that is otherwise allowable shall be considered allowable, provided that the hours expended on such project are charged in accordance with the utility’s standard time recordation system and are identified at the individual employee level. In addition, the following types of overhead charges will be deemed allowable provided that the charges are calculated in accordance with the utility’s established accounting practice and policy: (a) payroll overheads or “burdens” associated with labor hours charged to allowable projects and (b) non-payroll overheads allocated to allowable projects claimed up to a maximum of 5 percent of the portion of the utility’s claim which is otherwise allowable and to which such non-payroll overheads are allocated. Appendix V: Process and Costs of Transferring Spent Nuclear Fuel from Wet to Dry Storage (Text for Interactive Figure 7) Nuclear power reactor and spent nuclear fuel pools—Spent nuclear fuel typically cools for at least 5 years in a pool before a canister ($700,000 to $1.5 million) is placed in the pool, filled with spent nuclear fuel, removed from the pool, and dried. A reusable steel transfer cask ($1.5 million to $3 million) provides shielding for nearby workers as the spent nuclear fuel is transferred from the pool. The process of transferring spent nuclear fuel, excluding the costs of the canister, transfer cask, and storage system, costs $150,000 to $550,000. The canister is then placed into either a vertical or horizontal dry storage system. Transporter—For vertical storage, a crawler-type transporter ($1 million to $1.5 million) carries the entire canister and storage cask in a vertical orientation to a storage pad. For horizontal storage, a tractor with a transfer trailer ($1.5 million to $3 million) carries the canister in a reusable transfer cask in a horizontal orientation to the horizontal module. Vertical storage cask/horizontal storage module—Utilities typically choose either a vertical storage system ($250,000 to $350,000 per cask) or a horizontal storage system ($500,000 to $600,000 per module) for a particular site. Safety and security systems and annual operations—Design, licensing, and construction of the dry storage facility and its safety and security systems cost $5.5 million to $42 million. Annual operations, which include the costs of security, operations, and maintenance, run $100,000 to $300,000 at an operating reactor site and $2.5 million to $6.5 million at a shutdown reactor site. In addition to the individual named above, Karla Springer (Assistant Director), Arkelga Braxton, Kevin Bray, Ross Gauthier, Diana C. Goody, Armetha Liles, Wendell Matt, Mehrzad Nadji, Cynthia Norris, Katrina Pekar-Carpenter, Timothy Persons (Chief Scientist), Anne Rhodes-Kline, and Robert Sánchez made key contributions.
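To make the appendix V cost ranges easier to compare, the following minimal sketch totals them for a single vertical-storage loading. The dollar ranges are taken directly from the appendix; the split between per-canister items and shared, reusable equipment and facility costs is our illustrative assumption, since transfer casks, transporters, and the facility itself serve many canisters.

```python
# Rough roll-up of the appendix V cost ranges for moving one canister of
# spent nuclear fuel into vertical dry storage. Dollar ranges come from the
# report; the per-canister versus shared grouping is an assumption.

PER_CANISTER = {
    "canister": (700_000, 1_500_000),
    "transfer/loading process": (150_000, 550_000),
    "vertical storage cask": (250_000, 350_000),
}

SHARED = {
    "reusable transfer cask": (1_500_000, 3_000_000),
    "crawler-type transporter": (1_000_000, 1_500_000),
    "facility, safety, and security": (5_500_000, 42_000_000),
}

low = sum(lo for lo, hi in PER_CANISTER.values())
high = sum(hi for lo, hi in PER_CANISTER.values())
print(f"Per-canister cost: ${low:,} to ${high:,}")
# Per-canister cost: $1,100,000 to $2,400,000

shared_low = sum(lo for lo, hi in SHARED.values())
shared_high = sum(hi for lo, hi in SHARED.values())
print(f"Shared equipment and facility: ${shared_low:,} to ${shared_high:,}")
# Shared equipment and facility: $8,000,000 to $46,500,000
```

At these ranges, each loaded canister costs on the order of $1 million to $2 million on top of the shared equipment and facility investment, which helps explain why the report describes the wet-to-dry transfer as expensive.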
DOE is responsible for disposing of commercial spent nuclear fuel. DOE entered into contracts with owners and generators of spent nuclear fuel to begin disposing of it in 1998, with plans for disposal in a national repository. DOE, however, was unable to meet the 1998 date, and, as a result of lawsuits, the federal government has paid out about $3.7 billion for storage costs. DOE proposed a new strategy in January 2013 to build consolidated interim storage facilities, with operations starting in 2021 and 2025. GAO was asked to review issues related to DOE's strategy for managing spent nuclear fuel. This report (1) describes the expected rate of spent nuclear fuel accumulation in wet and dry storage, (2) identifies the basis of federal liability for spent nuclear fuel management to date and of DOE's estimate of future liabilities, and (3) assesses challenges, if any, that experts and stakeholders have identified to the federal government's ability to meet DOE's time frames for managing spent nuclear fuel at consolidated interim storage facilities and potential ways for DOE to mitigate the challenges. GAO reviewed documents from DOE and other agencies and interviewed experts and stakeholders from industry, federal and state governments, interest groups, and independent entities. Spent nuclear fuel—the used fuel removed from nuclear power reactors—is expected to accumulate at an average rate of about 2,200 metric tons per year in the United States. This spent nuclear fuel is mostly stored wet, submerged in pools of water. However, since pools have been reaching their capacities, owners and generators of spent nuclear fuel (typically utilities and reactor operators) have been transferring it to canisters that are placed in casks on concrete pads for dry storage—an expensive and time-consuming process. When operating reactors' licenses begin to expire in the 2030s, the rate of spent nuclear fuel accumulation is expected to decrease, but the amount in dry storage will increase as the pools are closed and all spent nuclear fuel is transferred to dry storage. By 2067, the currently operating reactors are expected to have generated about 139,000 metric tons of spent nuclear fuel, nearly all of which is expected to be transferred to dry storage. Federal liability for managing spent nuclear fuel has been based on costs that owners and generators of this fuel have paid because the Department of Energy (DOE) has not met its contractual obligation to dispose of spent nuclear fuel. DOE's estimate of future federal liability is based on how long DOE expects the federal government to continue to pay plant owners and generators the costs of managing spent nuclear fuel. Generally, the damages paid—mostly for the costs of transferring spent nuclear fuel from wet to dry storage—have been for costs that owners and generators would not have incurred if DOE had begun disposing of the spent nuclear fuel. DOE's most recent estimate of future liability—$21.4 billion through 2071—assumes that DOE will begin taking title to and possession of spent nuclear fuel in 2021 and complete the process in 2071, thereby ending the federal liability. DOE has several times extended the expected date by which the last of the spent nuclear fuel will be picked up, and each extension has added to the future federal liability.
Spent nuclear fuel management experts and stakeholders GAO spoke with identified several legislative, regulatory, technical, and societal challenges to meeting DOE's time frames for managing spent nuclear fuel at interim storage facilities. Although DOE has begun to take actions to address some of these challenges, officials noted that the department's strategy cannot be fully implemented until Congress provides direction on a new path forward. However, experts and stakeholders believe that one key challenge—building and sustaining public acceptance of how to manage spent nuclear fuel—will need to be addressed irrespective of which path Congress agrees to take. In this context, they suggested the need for a coordinated public outreach strategy regarding spent nuclear fuel management issues, including perceived risks and benefits, which would be consistent with the Administration's directive to be more transparent and collaborative. DOE officials stated they currently do not have such a strategy. Without a better understanding of spent nuclear fuel management issues, the public may be unlikely to support any policy decisions about managing spent nuclear fuel. DOE should implement a coordinated outreach strategy to better inform the public about federal spent nuclear fuel management issues. DOE generally agreed with the findings and recommendation in the report.
Mr. Chairman and Members of the Committee: I am pleased to be here today to provide an update on the Census Bureau’s dress rehearsal for the 2000 Census and the Bureau’s readiness for carrying out the 2000 Decennial Census. The dress rehearsal, currently under way at three sites—Sacramento, CA; 11 counties in the Columbia, SC, area; and Menominee County in Wisconsin, including the Menominee American Indian Reservation—is designed to demonstrate major operations, procedures, and questions that are planned for the decennial census. At your request, my statement focuses on how key census-taking operations have performed thus far during the dress rehearsal and the implications that may exist for 2000. When we last testified before Congress in March 1998, we noted that, although the Bureau had made progress in addressing some of the problems that occurred during the 1990 Census, key activities faced continuing challenges. The situation today is much the same. On the one hand, certain census activities, such as staffing the dress rehearsal operations, appear to have gone well. On the other hand, measures of other activities, such as the mail response rate, suggest that the Bureau still faces major obstacles to a cost-effective census. Moreover, while the dress rehearsal activities done thus far have demonstrated the Bureau’s general ability to execute the dress rehearsal according to its operational timetable and plan, the important outcome measure—the quality of the data collected—is not yet available. Further, the Bureau’s general ability to conduct the dress rehearsal according to its operational plan, while encouraging, is not necessarily a predictor of success in 2000. Because the dress rehearsal was performed at three sites, the capacity of regional and headquarters offices, as well as a number of essential census-taking operations, could not be fully tested under census-like conditions. The Bureau is using sampling and statistical estimation methods at the Sacramento site, in accordance with its plans for a sampling census. At the South Carolina site, the Bureau’s procedures are to follow up on all nonresponding households, just as it did nationwide in the 1990 Census. At the Menominee dress rehearsal site, the Bureau is to follow up on all nonresponding households, but it is also using sampling and statistical estimation to improve the accuracy of the population count. My comments today are based on our ongoing review of key census-taking operations that could significantly affect the cost and accuracy of the 2000 Census. They include such activities as (1) creating a complete and accurate address list, (2) obtaining a high level of public cooperation through an effective census promotion and outreach effort, (3) staffing census-taking operations with an adequate workforce, (4) processing census data accurately and using technology efficiently and effectively, and (5) carrying out field activities, including both nonresponse follow-up and sampling and statistical estimation procedures. To assess these activities, we (1) made several visits to the dress rehearsal sites and the Bureau’s data capture center in Jeffersonville, IN; (2) observed key census-taking operations; (3) interviewed Bureau headquarters officials, staff from regional and local census offices, and individual enumerators and their supervisors; and (4) reviewed relevant documents and data the Bureau prepared about these operations.
To obtain a local perspective on the dress rehearsal, we conducted in-person and telephone interviews with local officials at the three dress rehearsal sites on their experiences in reviewing address lists, promoting the census, and recruiting and hiring census workers. Because the dress rehearsal is still under way and more comprehensive data on the results of the dress rehearsal are not yet available, our observations today should be considered preliminary, and the Bureau’s data are subject to change pending further refinements and analysis. One of our long-standing concerns has been the Bureau’s ability to build a complete and accurate address list and develop precise maps. Accurate addresses are critical for delivering questionnaires, avoiding unnecessary and expensive follow-up efforts at vacant or nonexistent residences, and establishing a universe of households for sampling and statistical estimation. Precise maps are essential for counting persons at their proper locations—the cornerstone of congressional reapportionment and redistricting. Bureau maps are also used for certain census-taking operations, such as nonresponse follow-up, which entails following up on households that fail to mail back a census questionnaire. To build its address list, which is known as the Master Address File (MAF), the Bureau initially planned, in part, to (1) use addresses provided by the Postal Service, (2) merge these addresses with the address file the Bureau created during the 1990 Census, (3) conduct limited checks of the accuracy of selected addresses, and (4) send the addresses to local governments and Indian tribes for verification as part of a process called Local Update of Census Addresses. However, as we reported in March 1998, the Bureau concluded in September 1997 that its reliance on postal and 1990 Census addresses to construct its 2000 Census address list would not yield a sufficiently complete and accurate list. The Bureau therefore decided that redesigned procedures were needed in order to generate a MAF for the 2000 Census that, as a whole, was 99 percent complete. Under the revised approach, after local address review, the Bureau plans to verify physically the completeness and accuracy of the address file for the 2000 Census by canvassing neighborhoods across the country. The Bureau expects the new approach will cost an additional $108.7 million. Undeliverable questionnaires illustrate our concern with address list accuracy: in the 1995 test census, about 7.7 percent of the census questionnaires were reported to be undeliverable at the Oakland, California, test site and 4.5 percent at the Paterson, New Jersey, test site. In addition, the census maps appeared to be of uneven quality and usefulness at the dress rehearsal locations. For example, local census officials in Sacramento and South Carolina said that the census maps were inaccurate and contained a variety of errors, such as streets that were incorrectly placed and named. In both locations, problems with census maps led some enumerators to use commercially available maps rather than those supplied by the Bureau. In Menominee, because of the rural nature of the site, maps were particularly important. Houses generally lacked numbered street addresses, and, as a result, enumerators had to locate them, in part, by using maps. However, Bureau officials told us that while the quality of the Menominee maps is improving over the course of the dress rehearsal, the maps still have problems that make it difficult for enumerators to locate houses.
As I noted, the Bureau recognized that it needed to revise its approach to building the census address list and to improve the quality of its map products. However, the Bureau’s revised approach to developing its address list is not without risk. Although elements of the revised approach have been used and tested in earlier censuses, the Bureau has not used or tested them together, nor in the sequence as presently designed for the 2000 Census. Furthermore, because the Bureau made the decision to change its address list development procedures in September 1997—after major dress rehearsal address list development efforts were already in place—the revised approach was not used during the dress rehearsal. As a result, it will not be known until the 2000 Census whether the Bureau’s redesigned procedures will allow it to meet its goal of a 99 percent complete address list. The Bureau is scheduled to begin its 2000 Census field canvassing address list efforts in August. We will continue to monitor the Bureau’s efforts to build the census address list. Turning to mail response, the Bureau’s dress rehearsal promotion efforts were intended to raise response above the 55 percent response rate that the Bureau expected it would achieve without these efforts. The Bureau always finds that mail response rates during census tests, including the dress rehearsal, are lower than those obtained during an actual decennial census, when public awareness of the census is generally much greater. Table 1 shows the anticipated dress rehearsal mail response rates for the three sites and the rates the Bureau actually achieved. Despite the fact that the Bureau generally met its response rate goals for the dress rehearsal, significant concerns remain about the degree to which the Bureau will be able to meet its mail response goal for 2000. By way of comparison, the 1988 dress rehearsal for the 1990 Census generated mail response rates that ranged from 49 percent to 56 percent for mailout/mailback operations and 58 percent for update/leave operations. The mail response rate to the 1990 Census was 65 percent—slightly less than the 67 percent response rate that the Bureau hopes for in 2000. More importantly, the Bureau does not currently plan to use in 2000 a key ingredient of the response rate achieved during the dress rehearsal—a second mailing. According to a Bureau official, concerns about public confusion have contributed to the Bureau’s decision not to use a second questionnaire mailing in 2000. The preliminary results of the dress rehearsal suggest that the Bureau may need to reconsider its decision. At both the South Carolina and Sacramento sites, the Bureau obtained approximately a 7-percentage-point “bump” in response rates by sending a second questionnaire to all households located in mailout/mailback areas. According to a senior Bureau official, this 7-percentage-point increase represents real additions to the count and does not include duplicate submissions from households that had already responded. The Bureau traditionally has found that simply raising awareness of the census is insufficient; through its various outreach and promotion programs, the Bureau must also motivate people to return their questionnaires. The difficulty in doing this was demonstrated during the 1990 Census, when the Bureau found that, although about 93 percent of the public was aware of the census, the mail response rate was only 65 percent—10 percentage points lower than the mail response rate to the 1980 Census.
Today, I will highlight two of the more important components of the Bureau’s efforts to build public awareness and cooperation through its outreach and promotion campaign: paid advertising, and partnerships and community outreach. With regard to the Bureau’s paid advertising campaign, in October 1997, the Bureau announced it had hired Young & Rubicam, a private advertising agency, to market the census. The advertising campaign is based on the theme “This is your future—don’t leave it blank” and stresses how responding to the census questionnaire benefits one’s community. This advertising effort was evident during our visits to the dress rehearsal sites, where we often observed billboards bearing Census 2000 advertising messages, such as “How America Knows What America Needs,” “The Future Takes Just a Few Minutes to Complete,” and “Pave a Road With These Tools.” In convenience stores, we observed signs telling passers-by that the census “Gives Life to New Healthcare Centers.” In Sacramento, we observed outdoor advertising in languages appropriate for the neighborhood. The census was also promoted through broadcast and print media, as well as through less traditional methods such as advertisements on shopping bags at a chain of discount stores. Dress rehearsal advertising expenditures included $0.35 million for production and media costs for nontraditional advertising; $0.23 million for Menominee media costs; and $1.12 million for Sacramento media costs. The Bureau’s use of partnership and community outreach activities and, in particular, its use of Complete Count Committees to help promote the census are other key components of the Bureau’s outreach and promotion campaign. According to the Bureau, Complete Count Committees are intended to help the Bureau take the census by, among other activities, planning and implementing a locally based promotion effort to publicize the importance of the 2000 Census. The committees are to consist of local leaders, such as representatives of government, education, media, community, religious, and business organizations. For the dress rehearsal, the Bureau attempted to form committees in Sacramento and Menominee, as well as in the City of Columbia and the 11 surrounding counties participating in the dress rehearsal. The Bureau recommended that the committees could, among other initiatives, form subcommittees to reach specific segments of the population, such as senior citizens; sponsor promotional events; obtain commitments from businesses to promote and support the census; provide the Bureau with testing and training space to assist in the employment of enumerators; and work with local media to cover and publicize census activities. This past spring, the Bureau sent a Complete Count Committee handbook, in which the Bureau described its plan for implementing the Complete Count Committee program for the 2000 Census, to the highest elected officials in about 39,000 local and tribal governments. The handbook suggested a structure for organizing a grassroots outreach campaign and provided an outline and schedule of nearly five dozen activities that governments could undertake not only to promote the census but also to assist the Bureau with its data collection and enumerator recruiting responsibilities. The handbook also explained what the Bureau expects of the committees and, equally important, what the committees can expect from the Bureau. The Bureau expects that the committees will secure their own funding and will rely on the Bureau for only a very limited amount of direct assistance.
For example, at the dress rehearsal site in South Carolina, the Bureau hired two partnership specialists to help mobilize local groups. These specialists had to distribute their time and energy among the City of Columbia and the 11 surrounding counties included in the dress rehearsal—a workload that is consistent with what will be expected in 2000, when the Bureau plans to have 320 partnership specialists in place across the nation. Our work at the dress rehearsal sites suggests that the effectiveness of the partnership effort was undermined by an apparent mismatch between the Bureau’s expectations of the committees and what the committees could realistically accomplish. In both South Carolina and Menominee, a message we consistently heard from local officials associated with the committees was that they lacked the human and financial resources to promote the census, that communication and guidance from the Bureau were insufficient, and that Bureau assistance was limited. As a result, Complete Count Committees in some South Carolina counties were never formed, while others became inactive, and some local officials expressed confusion and frustration over what was expected. Local outreach and promotion appeared to go more smoothly in Sacramento. This was likely due in part to the fact that there was only one Bureau partnership specialist in Sacramento, assisting a single Complete Count Committee. However, as I have noted, for the 2000 Census, workloads for the Bureau partnership specialists closer to those I have described for South Carolina are more likely to be the norm. Overall, therefore, the dress rehearsal experience suggests that the Bureau needs to ensure that it has realistic expectations about the contributions that Complete Count Committees will be able to make in promoting the census, building the response rate, and assisting the Bureau. To staff the 2000 Census, the Bureau will need to attract as many as 2.6 million applicants, because, for a variety of reasons, most applicants never make it through the employment process. Despite the uncertainties surrounding the Bureau’s ability to staff the 2000 Census, staffing the dress rehearsal appears to have gone better than expected thus far. As shown in table 2, one measure of the success of the Bureau’s staffing efforts, applicants’ acceptance of job offers for nonresponse follow-up (where the demand for employees is greatest), far exceeded the Bureau’s expectations. Moreover, managers of the local Census Bureau offices at the dress rehearsal sites we spoke to said that the quality of the newly hired employees’ work was typically good. According to Bureau data, at all three dress rehearsal sites, enumerator productivity came very close to the Bureau’s goal of 1.5 nonresponse follow-up cases completed per hour, and enumerator turnover appears to have been lower than expected. The Bureau attributes its apparently successful dress rehearsal staffing efforts to several factors, including a competitive pay plan and aggressive recruitment. Key features of the Bureau’s pay plan include locality-based wages and bonuses for exceeding production targets. In addition, when the Bureau recognized that it was having difficulty recruiting a large enough pool of qualified applicants to fill its needs for nonresponse follow-up and later census operations in South Carolina, the Bureau raised enumerator pay rates from $9.50 per hour to $10.50 per hour effective April 3, 1998. Enumerator pay was $12.50 per hour in Sacramento and $11.25 per hour in Menominee. The Bureau also recruited aggressively at the dress rehearsal sites, posting job information in locations such as public libraries.
In fact, recruiting literature appeared to be more prevalent than materials that promoted the census itself. Translating data from completed census forms into a usable format represents another challenge for the Bureau. The Bureau plans to have data capture centers process a total of about 1 billion pages of census questionnaires in 99 work days beginning in March 2000. The Bureau plans to take advantage of commercial off-the-shelf hardware and software through its contractor, Lockheed Martin, rather than rely on in-house products. During the dress rehearsal, the Bureau is testing the accuracy of the data input by the new scanning equipment and software designed to perform this operation. Bureau officials reported that this operation met all high-priority processing deadlines, despite experiencing system bugs that will need to be addressed before 2000. The purpose of the dress rehearsal was to test and debug the system in an operational environment in advance of Census 2000. However, additional load testing is still necessary because the system could not be run during the rehearsal at the performance levels that will be needed in 2000. During the dress rehearsal, the scanning equipment used to electronically record responses from census forms experienced system crashes due to flaws in the software. To deal with this problem, the Bureau was forced to cut back the number of scanners in operation at any one time. According to Bureau officials, the software subcontractor is resolving this and other problems through intensive testing and will have a new version of its software available for further testing in late August. According to Bureau officials, another problem related to scanning is the frequency at which the scanners needed to be cleaned of accumulated dust. Initially, the Bureau had planned to clean the machines every 2 hours. However, dust accumulated faster than expected, which necessitated a 5-minute cleaning after each 15 minutes of use. Bureau officials said that poor paper quality appears to be one factor that led to the accumulation of dust. The Bureau and the Government Printing Office are studying the problem. The Bureau has also used simulated census forms—computer-generated images—to test the performance of its scanning equipment. Bureau officials believe that sufficient time remains to complete more testing, incorporate lessons learned from the dress rehearsal, and make technology enhancements before Census 2000. Of the Bureau’s numerous field operations, two of the largest and most logistically challenging under the Bureau’s current design are nonresponse follow-up and a procedure called Integrated Coverage Measurement (ICM), a survey in which residents in a sample of blocks are interviewed. ICM and enumeration data are used in dual system estimation to adjust for coverage errors in the enumeration. As currently planned, the Bureau is to reduce its nonresponse follow-up workload for the 2000 Census by sampling nonresponding households. By using a sample-based nonresponse follow-up, the Bureau would reduce the time necessary to complete this activity. This in turn would expedite the beginning of ICM data collection, improving the Bureau’s ability to meet the target date for delivery of census data at the end of December. In addition, compressing the nonresponse follow-up data collection period could shorten the average time between census day and visits to households, thereby reducing the likelihood of enumeration errors caused by households that move between census day and nonresponse follow-up.
The Bureau plans to conduct a nationwide ICM. However, as noted earlier, for the dress rehearsal, the Bureau only sampled for nonresponse and is conducting the ICM in Sacramento. In South Carolina, the Bureau’s procedures are to follow up on all nonresponding households and do a coverage evaluation operation, just as it did nationally in the 1990 Census. At the Menominee site, the Bureau is to follow up on all nonresponding households and, additionally, is using the ICM. Nonresponse follow-up was completed on time at the Sacramento and Menominee sites, and about a week ahead of schedule in South Carolina. We observed that ICM operations began as scheduled in Sacramento. Major ICM field operations are scheduled to last until late August 1998. The Bureau’s procedures called for it to take additional steps to prevent contamination of the ICM data. According to Bureau officials, these included efforts to separate the management and implementation of the ICM operation from the nonresponse follow-up operation. For example, the ICM operation was administered entirely by the Bureau’s Seattle Regional Office rather than the local census office in Sacramento. Additionally, nonresponse follow-up enumerators were not told which blocks were included in the ICM, and ICM enumerators were told that they could not tell anyone which blocks had been assigned to them. The quality of the dress rehearsal data, as measured by the extent to which they are complete and accurate, is still to be determined. With the ICM still in progress, the full results of the ICM will not be known for several months. Moreover, a key question for which information is not yet available is the degree to which the Bureau had to rely on proxy responses from neighbors, letter carriers, and others to complete its nonresponse workload in a timely manner. As part of our ongoing work, we will review the quality of the data collected during the ICM and nonresponse follow-up operations, the Bureau’s procedures for maintaining the independence of enumeration and ICM data, and, more generally, the extent to which the Bureau was able to implement its field operations as planned. In summary, Mr. Chairman, within the constraints and limitations imposed by the dress rehearsal setting, the Bureau to date has shown a general ability to implement the dress rehearsal at the three sites according to its operational timetable and plan. The Bureau has also shown an ability to adapt to changing requirements, as demonstrated by such actions as redesigning its address list development procedures to produce a more accurate and complete list and by increasing wage rates in South Carolina to improve recruiting. However, the most important outcome measures—the accuracy of the population count and the extent to which proxy data are used—are not yet available. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or Members of the Committee may have.
GAO discussed the Bureau of the Census' dress rehearsal for the 2000 Census and the Bureau's readiness for carrying out the 2000 Decennial Census, focusing on how key census-taking operations have performed thus far during the dress rehearsal and the implications that may exist for 2000. GAO noted that: (1) when it last testified before Congress in March 1998, GAO noted that although the Census Bureau had made progress in addressing some of the problems that occurred during the 1990 Census, key decennial census activities faced continuing challenges; (2) the census dress rehearsal, under way at three sites, is the last remaining field test before the decennial census is administered; (3) within the constraints and limitations imposed by the dress rehearsal setting, the Bureau to date has shown a general ability to implement the dress rehearsal at the three locations according to its operational timetable and plan; (4) certain census activities, such as staffing the dress rehearsal operations and completing field operations on schedule, appear to have gone well; (5) however, the dress rehearsal experiences also have underscored the fact that the Bureau still faces major obstacles to a cost-effective census; (6) for example, mail response rates remain problematic, and local partnerships had limited success; (7) further, the Bureau's general ability to conduct the dress rehearsal according to its operational plan, while encouraging, is not necessarily a predictor of success in 2000; (8) because the dress rehearsal was performed at three sites, the capacity of regional and headquarters offices, as well as a number of essential census-taking operations, could not be fully tested under census-like conditions; and (9) the most important outcome measure--the quality of the census data collected--is not yet available.
The Social Security Administration’s (SSA) Disability Insurance (DI) and Supplemental Security Income (SSI) programs are the two largest federal programs providing cash and medical assistance to people with severe, long-term disabilities, at an annual cost of more than $100 billion. The DI program offers partial income replacement for disabled workers who have earned Social Security benefits. The SSI program provides federal and state cash assistance to people who are elderly, blind, or disabled, regardless of insured status, whose income and resources are below a specified amount. In the legislation authorizing the SSI and DI programs, the Congress articulated its aim to rehabilitate into productive activity as many disability benefit recipients as possible. Consistent with this goal, the Congress has passed several work incentive provisions to reduce the risks of seeking employment for recipients, by safeguarding cash and medical benefits while a recipient tries to work. One such provision authorized the plan for achieving self-support (PASS) program as part of the SSI program in 1972. In explaining this provision, the pertinent House Report stated that the Ways and Means Committee wanted to “provide every encouragement and opportunity” for participants to work. The DI program, authorized in 1956, provides cash and medical benefits to workers under age 65 who become disabled and cannot continue working, as well as to their dependents. The DI program is funded through Federal Insurance Contribution Act taxes paid into a trust fund by employers and workers. In 1994, 3.3 million people with disabilities were enrolled in DI and received, on average, cash benefits of about $660 a month. The SSI program was authorized in 1972 under title XVI of the Social Security Act as a means-tested income assistance program for people who are elderly, blind, or disabled. Unlike DI beneficiaries, SSI recipients do not need to have a work history to qualify for benefits, but need only have low income and limited assets. General government revenues provide the federal funding for the SSI program, while some states supplement federal payments with their own funds. SSI disabled beneficiaries receive an average monthly cash benefit of about $380 (beneficiaries in the 43 states that provide a monthly supplement received, on average, an additional $110 in 1993) and immediate Medicaid eligibility in most states. In 1994, 2.3 million blind and disabled adults under age 65 and 893,000 children were enrolled in SSI. Individuals who are insured under Social Security, but fall below SSI’s income and resource eligibility threshold, can qualify for both DI and SSI benefits. An additional 671,000 people under age 65, called concurrent beneficiaries, were enrolled in both programs in 1994. To be considered disabled under either program, a person must be unable to engage in substantial gainful activity (SGA) because of a medically determinable physical or mental impairment that is expected to last not less than 12 months or result in death. The severity of the impairment must prevent the applicant not only from doing his or her previous work, if any, but also from engaging in any other kind of substantial work in the national economy, considering his or her age, education, and work experience. The process used to determine eligibility for benefits is the same for both programs. 
In establishing the SSI and DI programs, the Congress considered it very important that disabled persons be helped to return to self-supporting employment wherever possible. To this end, over the years the Congress has enacted numerous work incentive provisions in both the SSI and DI programs to encourage more people to work their way off the disability rolls. These include the PASS program and extended Medicare eligibility for working DI beneficiaries. During a beneficiary’s work attempt, these work incentive provisions provide varying degrees of safeguards for cash and medical benefits, as well as program eligibility. However, despite the Congress’ aim to return the maximum number of DI and SSI beneficiaries to work, few beneficiaries have actually done so. Only 1 in every 500 DI beneficiaries is terminated from the rolls because he or she has returned to work. While SSA has no comparable measures for the SSI population, we recently reported that this population’s return-to-work rate is similarly low. The PASS program was established by the Congress as part of the SSI program to help disability benefit recipients begin or return to work. The PASS program is administered by staff in the approximately 1,300 SSA field offices nationwide, based on policy and regulations developed by SSA headquarters staff in the Office of Program Benefits Policy. The Program Operations Manual System (POMS) is the primary policy guidance to staff on the PASS program. Work incentive staff in SSA’s 10 regional offices provide additional guidance and oversight to field office staff. In December 1994, about 10,300 individuals participated in the PASS program and had active PASSes. In commenting on the PASS provision, the pertinent House Report stated that it should be “liberally construed,” and SSA headquarters has chosen to place few constraints on the program in regulations or in the POMS. For example, there is no required application format and no limit on the number of approved plans an individual can have in a lifetime. PASSes are written plans, developed specifically for an individual, that identify a work goal and the items and services needed to achieve that goal. (For a sample plan format, see app. I.) To purchase these items and services, PASS program participants may use any non-SSI income or resources they have—for example, DI benefits or wages from a job. Anyone can write a PASS—the disability benefit recipient, an SSA staff member, a vocational rehabilitation professional, staff from another agency, or a relative or friend; no vocational rehabilitation expertise is required. Normally, any additional income or resources would reduce the amount of the SSI payment, but SSA disregards the income and resources included in a PASS when determining the income available to the SSI recipient. Consequently, excluding income and resources to pursue a work goal under a PASS can result in additional monthly SSI cash payments. For example, an SSI recipient earning $300 a month in a part-time job is normally eligible for about $350 in SSI benefits, if he or she is single and lives alone. Using a PASS, however, this individual could apply these earnings to classes and transportation to school to become an accountant and receive the maximum 1995 federal benefit of $458 each month. As currently implemented, the PASS program can also be used to establish eligibility for SSI by disabled individuals whose incomes or resources would otherwise exceed program eligibility limits.
If a DI beneficiary who receives $620 a month in benefits, for example, can set $300 of this income aside under a PASS to pursue a work goal, he or she becomes eligible for SSI payments because his or her countable income is less than the federal SSI eligibility rate. Eligibility for SSI usually brings eligibility for other means-tested benefits, including Medicaid and food stamps. PASSes are submitted to staff in one of the SSA field offices for review. These staff approve or deny plans on the basis of the work goal’s feasibility and adjust SSI payment levels for approved plans. Work goals must be stated in terms of specific job titles or professions. For example, education can be part of a plan but cannot be a goal in itself. As long as applicants specify a different work goal in each PASS, there is also no limit placed on the number of plans one individual can have, although only one plan can be active at a time and each PASS is limited to a maximum of 48 months. However, interim guidelines issued in January 1995, in response to a mandate in the Social Security Independence and Program Improvements Act of 1994, allow additional 6-month extensions to ongoing plans. In December 1994, PASS participants represented only about three-tenths of 1 percent of the working-age disabled SSI population. The number of PASSes varies by state, with clusters in areas where outreach by professional PASS preparers and service providers has been greatest. As a result, knowledge of and experience with the PASS program varies greatly among field offices and staff. Some offices have no active PASSes, while administering the PASS program constitutes a significant workload in others. Compared with other SSI recipients, PASS program participants are generally younger and more often men. In addition, PASS program participants are more likely to have mental illness as their disabling condition. For a more detailed description of the demographics of program participants, see appendix II. The PASS program is unique among all DI and SSI work incentives because it is available to disabled individuals who are not already working. Specifically, the PASS provision allows participants to exclude unearned, as well as earned, income from consideration in determining the benefit amount. All other work incentive protections, such as extended eligibility for health and cash benefits for working recipients, apply to earned income only. Furthermore, while SSI and DI beneficiaries who work can use the impairment-related work expense provision to deduct from their gross wages the cost of items and services needed to work, only half of these costs are offset by increased SSI benefits. In contrast, PASS expenses are deducted after all other exclusions when determining an individual’s countable income and are therefore fully subsidized by additional SSI cash payments, up to the maximum benefit amount. House Conference Report 103-670, accompanying the Social Security Independence and Program Improvements Act of 1994 (P.L. 103-296), asked that we review SSA’s PASS program. Specifically, we were asked to provide data for the last 5 years, to the extent possible, on (1) the number and characteristics of individuals who have applied for a plan, (2) the number and characteristics of those whose plans have been approved, (3) the kinds of plans that have been approved and their duration, (4) the success of individuals in fulfilling their plans, and (5) the extent to which individuals who have completed a PASS have become economically self-sufficient.
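The arithmetic behind the two examples above follows the standard SSI computation: a $20 general income exclusion, a $65 earned income exclusion, a 50-percent exclusion of remaining earnings, and, last, the PASS set-aside. The sketch below is a simplified illustration of that computation, not SSA’s actual determination logic; the $458 figure is the 1995 federal benefit rate cited above, and the $158 payment in the DI example is our derived illustration rather than a figure reported by SSA.

```python
# Simplified illustration of the SSI payment arithmetic described above.
# Assumes the standard exclusions ($20 general, $65 earned, then half of
# remaining earnings) and deducts the PASS set-aside last, as the report
# notes. Actual SSA determinations involve many additional rules.

FEDERAL_BENEFIT_RATE = 458  # maximum 1995 federal SSI payment, per the report

def ssi_payment(earned=0, unearned=0, pass_set_aside=0):
    """Return the monthly federal SSI payment; 0 means ineligible."""
    countable_unearned = max(0, unearned - 20)   # $20 general exclusion
    unused_general = max(0, 20 - unearned)       # unused portion offsets wages
    countable_earned = max(0, earned - unused_general - 65) / 2
    countable = max(0, countable_unearned + countable_earned - pass_set_aside)
    return max(0, FEDERAL_BENEFIT_RATE - countable)

print(ssi_payment(earned=300))                        # about $350: the part-time worker above
print(ssi_payment(earned=300, pass_set_aside=300))    # $458: same worker, earnings set aside in a PASS
print(ssi_payment(unearned=620))                      # $0: DI benefits alone exceed the limit
print(ssi_payment(unearned=620, pass_set_aside=300))  # $158: $300 of DI benefits set aside under a PASS
```

Under this arithmetic, every dollar of countable income set aside in a PASS is matched by an additional dollar of SSI, up to the maximum benefit, which is why the report describes PASS expenses as fully subsidized.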
We were also asked to study whether improvements can or should be made to the PASS program, including the process used to approve plans. Because SSA’s Office of Program Integrity Review was already tracking PASS program participants’ compliance with their plans and the outcome of PASS program participation, we focused our efforts on PASS program management and internal controls. To analyze PASS program implementation and determine what changes and improvements were needed, we reviewed PASS guidance, legislation, and regulations. We also interviewed Social Security headquarters staff responsible for the program and monitored an SSA work group charged with considering PASS program policy changes. In addition, we met with staff in 19 SSA field offices, located in California, Colorado, Maine, Massachusetts, Michigan, Vermont, and Wisconsin, that had high numbers of active PASSes to discuss their experience with the work incentive. (For a list of locations visited, see app. III.) In these same states, we met with 38 individuals, representing 32 different organizations, who prepare PASSes for disability benefit recipients. In addition, we spoke with the PASS program liaison in each of the 10 SSA regional offices to learn about program trends nationwide. To determine the numbers and characteristics of PASSes and program participants, we analyzed an extract of the Supplemental Security Record (SSR), the main database of SSI participants. We also used data from the SSR, the Master Beneficiary Record (MBR) of DI beneficiaries, and SSA’s Master Earnings File maintained for all workers to assess the current earnings and benefit status of former PASS participants. We were unable to evaluate program impact because most program participation has been recent and SSA has only 4 years of historical data on the PASS program, which include no data on the outcomes of program participation. In addition, individuals self-select into the PASS program and may already be different from other SSI and DI beneficiaries in ways that would affect their future employment and earnings. Finally, we reviewed 380 randomly selected PASS files in 17 field offices to gather additional data about PASSes, including the types of work goals and proposed purchases. We did not, however, verify that specific program participants complied with the goals and activities specified in their PASSes. For more information about our methodology, see appendix IV. We did our work between January and November 1995 in accordance with generally accepted government auditing standards. SSA has not translated the PASS program’s goal of providing opportunities for participants to work into a well-defined program structure with specific objectives. Confusion about program objectives has resulted in different and sometimes conflicting PASS approvals and denials across field offices. In addition, SSA field office staff find it difficult to approve or deny PASSes because SSA headquarters has not developed clear criteria for evaluating them. Most field office staff do not have expertise or training in evaluating work opportunities for people with disabilities, whose needs are extremely diverse. Reflecting this diversity, the PASSes we reviewed encompassed a wide array of work goals, from janitorial work to professional positions, and included expenditures ranging from business cards to new cars. Finally, the PASS program includes primarily DI beneficiaries, many of whom use a PASS to gain eligibility for federal SSI payments.
Because SSA has not developed measures to evaluate how well the PASS program is helping participants become or stay employed, the agency lacks adequate management data on PASS use. For this reason, we could not accurately measure PASS program impact. We did find that, compared with other SSI recipients, many former PASS participants are earning enough income to at least reduce their SSI payments, although many had worked prior to participating in the PASS program. Few, however, have earned enough to end SSI payments. In addition, nearly all concurrent beneficiaries continue to receive DI benefits after their plans end, even if they leave the SSI program, limiting the potential federal savings stemming from the PASS program. The 380 PASSes we reviewed encompassed a wide variety of goals, including increasing hours and responsibilities at a current job, seeking a new job, or pursuing education as a step toward work. (See fig. 2.1.) About one-third of plans had education as a major component, although the level of education desired ranged from attending culinary school to pursuing a Ph.D. in philosophy. Another one-third of PASS program applicants were seeking new jobs, while about 10 percent proposed to maintain or increase their hours or responsibilities at a current job. For those individuals proposing specific occupational goals, the demands of these jobs, as well as their likely income, were highly variable. We found that many PASSes were written to help applicants achieve low-skill, low-wage service jobs, such as janitorial work, product assembly, or employment in fast-food restaurants; however, we also saw plans written to help participants become psychotherapists, computer programmers, engineers, and college professors. Some program participants’ goals were more unusual, involving self-employment in music, arts, and crafts. Self-employment was the goal in 15 percent of the PASSes we reviewed, including small businesses in tailoring, tree stump removal, and window washing. In addition, some PASS program participants were approved to support themselves as professional PASS preparers, writing plans for other disability benefit recipients for a fee. (See app. V for a sample of occupational goals listed on the approved PASSes we reviewed.) The proposed purchases listed on PASSes varied, as did their cost. Automobiles (new and used vehicles as well as insurance, maintenance, and modification costs), tuition, and computers were common items in PASS budgets. More than 80 percent of the plans we reviewed included at least one of these items. Costs for these items ranged from a $41,000 wheelchair-modified van and thousands of dollars in highly specialized computer equipment to a $19 monthly bus pass. SSA has placed no absolute limits on the types or costs of items that PASSes can be used to purchase as long as they are necessary to achieve the work goal and reasonably priced. The wide range of approved purchases we saw included photographic film, cellular telephone service, business advertising, professional attire, job coaching, and school supplies. The total budgets of the PASSes we reviewed also ranged considerably, depending on the amount of income excluded and the plan’s duration. The average monthly exclusion for all PASSes in December 1994 was $400, although some individuals were excluding more than $1,000 a month.
Total costs associated with a single PASS could be significant; for example, one plan had a budget of $67,233 for an individual to return to work as a radiological technologist, which included items such as a standing wheelchair, a modified van and insurance, a track lift system, and a computer. Field staff in at least eight of the offices we visited raised concerns about PASS applications in which proposed purchases were exactly equal to total excludable income. Several PASS preparers confirmed that they develop budgets based on the maximum excludable income, regardless of the occupational goal. We were not able to determine the average duration of PASSes, because extensions and end dates were not always documented in the files we reviewed. In addition, this information is not captured in the Supplemental Security Record (SSR), the database of SSI recipients. We found, however, that half of the current exclusions as of December 1994 had been active 9 or fewer months. In addition, more than 1,700 PASSes begun in 1993 were no longer active in December 1994. No comprehensive data are currently available on the total number of PASS applications, because SSA does not track denied plans. However, staff in almost all the field offices we visited agreed that the majority of PASSes are approved. SSA has neither clearly articulated the objectives of the PASS program nor established criteria for evaluating whether an individual plan will be successful. This lack of clear goals is reflected in inadequate guidance to field office staff and inconsistent and inefficient PASS program administration. Our interviews revealed that the standards SSA field office staff and third-party PASS preparers applied in assessing the feasibility of individual PASSes varied widely, from cessation of all disability benefits to improved quality of life. Current PASS participants constitute less than 1 percent of the working-age SSI population; nonetheless, the number of PASSes has grown by more than 500 percent in the last 5 years. As a result of congressional direction to liberally construe the PASS provision, SSA has permitted individuals to become eligible for SSI by using a PASS. Consequently, the PASS program includes primarily DI beneficiaries, many of whom use a PASS to gain eligibility for SSI payments. In establishing the PASS program, the Congress specified that the work incentive should “provide every opportunity and encouragement to the blind and disabled to return to gainful employment.” SSA headquarters staff, however, have not issued regulations translating this goal into specific outcome measures of PASS program success, choosing instead to leave the interpretation open to the field office staff who administer it. As a result, the SSA field office staff and third-party PASS preparers we interviewed used different, and sometimes conflicting, interpretations of successful outcomes when developing and reviewing individual PASSes. For example, some field office staff will not approve a PASS unless they believe it will result in the applicant leaving the disability rolls, while the preparer may have written it to give the applicant a chance to attend school and try working, but not become self-supporting. SSA field office staff in nearly all the offices we visited said that PASS use should result in a reduction, if not cessation, of SSI benefits.
While 10 of the 38 PASS preparers we interviewed shared this view, others offered broader definitions of success, including working or attending school, improved self-esteem, and fuller participation in society. Some PASS preparers and SSA staff expressed concern that immediate economic self-sufficiency was not feasible for some members of the SSI population and that the appropriate PASS outcome depended on the individual. PASS program guidance from SSA headquarters to the field offices contributes to confusion over the goal of the work incentive. Prior to January 1995, the POMS, SSA’s primary written guidance to field office staff who administer the PASS program, stated that the occupational objectives on PASSes must ultimately produce enough additional earned income to reduce or eliminate SSI payments. The guidance also noted that the PASS program was “not intended to subsidize a continuing level of current work activity.” In contrast, the POMS transmittal concerning PASS issued in January 1995 directed field office staff to evaluate individual PASSes in terms of the applicant’s higher earnings potential upon completion of the PASS, weakening the link between earnings and benefit payments. In addition, this guidance includes increased on-the-job independence and decreased reliance on employment support, irrespective of earnings, as acceptable PASS goals. According to field office staff, the verbal guidance they receive from regional and headquarters staff encourages them to be liberal and err on the side of approving individual PASSes. Agency literature available to SSI recipients primarily defines the PASS program in terms of employment goals, with little or no focus on reduced or eliminated benefits or increased earnings potential. SSA field office staff in half the offices we visited said PASS approvals and denials were inconsistent, and nearly all field office staff expressed frustration with the inadequate guidance they received from SSA headquarters on approving or denying PASSes. Some SSA field and regional offices have developed their own guidance for approving PASSes to compensate for gaps they see in the POMS. As a result, we found that the PASS program was implemented differently by location or even by individuals in the same location, depending on the prevailing beliefs about program goals. For example, some field offices in Wisconsin use a PASS evaluation form that requires applicants to estimate their pre- and post-PASS earnings to demonstrate increased earnings capacity before staff will approve a plan. Staff in other field offices said they only look to see if there is even a “remote chance” of achieving a proposed occupational goal. The basis for decisions about acceptable cost items also varies. For example, some field office staff approved computer expenditures for students without documentation that computers were required, while staff in another office denied a request for computer equipment, noting that the university the applicant planned to attend did not require students to have computers. Work incentive staff in some regional offices review PASS approvals and denials made by the field offices; however, staff in SSA’s headquarters do not provide direct oversight of this process. PASS program participants in December 1994 represented only three-tenths of 1 percent of the working-age SSI population. Nonetheless, the number of PASSes has risen dramatically in recent years, from 1,546 in March 1990 to 10,329 in December 1994.
PASS program growth has been much slower during 1995; the number of plans increased by only 169 between December 1994 and June 1995. However, SSA field staff in 15 of the 19 offices we visited predicted that the number of PASSes will continue to grow. Several PASS preparers also told us they believe that more outreach will increase PASS use. Outreach efforts, especially by outside organizations, have already increased awareness of the PASS program. For example, in the Denver area, several professional PASS preparers have established businesses that seek out clients. Since the late 1980s, individuals and organizations across the nation have helped disability benefit recipients understand and use work incentives, especially the PASS program. At the same time, some state agencies that provide employment services to the disabled have started to use PASSes to supplement other funds. Private agencies, too, are being encouraged to look at the PASS program as a funding source for their clients. The potential universe of PASS program participants is significant and increasing with the recent tremendous growth in the number of federal disability benefit recipients. All 4 million DI beneficiaries are potentially eligible for a PASS. In addition, any of the 2.3 million blind or disabled adults receiving SSI with additional income or resources to exclude could potentially participate in the PASS program. The PASS program encompasses primarily DI beneficiaries, who can exclude their DI benefits under a PASS. Of the approximately 10,300 PASS participants in December 1994, about 75 percent were concurrent beneficiaries, many of whom would not receive federal SSI benefits without a PASS. In comparison, about 30 percent of disabled adults in the overall SSI program also received DI benefits. We estimated that, overall, more than 40 percent of all PASS program participants had earned or unearned income, primarily DI benefits, in excess of the eligibility level for federal SSI benefits. These individuals received, on average, $318 in monthly federal SSI payments. Without the PASS program, they would not have been eligible to receive any federal SSI payments. The federal cost of PASS program participation for these individuals who use a PASS to establish SSI eligibility is often higher than just the SSI payment, because the participants may also gain access to other means-tested benefits, such as Medicaid and food stamps, while retaining access to DI payments and Medicare. In these instances, Medicaid supplements Medicare benefits, covering deductibles, medications, and other medical costs not covered by Medicare. Nearly one-fourth of recipients whose plans started in 1994 first applied for SSI benefits that year. PASS program participation for SSI recipients who are not concurrent beneficiaries is generally limited to those who are already working and can exclude their earned income or who receive other types of income or resources, such as an inheritance. Many SSI recipients who might benefit from the PASS program are unable to participate because they lack income or resources and cannot make the initial investments in education and skills that can be key to successful vocational rehabilitation. Our field work suggests that SSA’s current implementation of the PASS program is poorly designed and managed. Information required of applicants is inadequate for evaluating plan feasibility.
SSA field staff lack the training, guidance, and expertise to effectively review PASSes, and the PASS program competes with other field tasks, such as processing initial claims, that receive more work credit in SSA’s system for measuring office productivity. POMS guidance specifically directs field office staff to evaluate applicants’ goals in light of their impairments and other disability-related factors. However, the POMS does not require that a PASS application include much of the information relevant to determining whether a work goal is feasible, such as the nature of the applicant’s disability. This information may not be otherwise available to field office staff. Because of this inconsistency, staff may approve or deny PASSes without knowledge of the applicant’s skills, education, work history, impairments, or even the disabling condition. Further, PASSes are not required to provide any information about the demands of the proposed job or the skills and qualifications it requires. We also found that diagnosis information was missing on the SSR, which is accessible by field office staff, for more than 40 percent of recipients with PASSes in December 1994. In addition, the detailed assessments of applicants’ work ability compiled by state Disability Determination Services during their reviews to decide program eligibility are often not sent to staff in SSA’s field offices. Field office staff also lack specific training and experience in determining the vocational capabilities and needs of people with disabilities. As a result, staff in every office we visited reported that they do not feel equipped to adequately evaluate PASS work goals and expenditures. Staff in half of these field offices described the evaluation process as inconsistent between and often within offices, and some noted it was unfair to PASS program applicants. SSA headquarters staff are aware of the field staff’s concerns, which were discussed at a 1994 work group on the PASS program. Field office staff’s lack of knowledge and training is also frustrating to third-party PASS preparers; half of those we interviewed said they did not believe field office staff were qualified to make decisions about PASS feasibility. In addition, at least 14 of 38 PASS preparers raised concerns about field office staff’s lack of awareness and understanding of disability issues, including the vocational capabilities of people with disabilities. The differences in plan approvals and denials have also led some PASS preparers to “shop around” within a field office or across offices to find someone who will approve a plan that has been denied elsewhere, according to field office staff and preparers. According to field office staff who evaluate plans, the guidance and criteria for approving and denying PASSes are inadequate, given their lack of expertise. While the POMS acknowledges that self-support is “highly subjective and often complex” to define, staff are directed to use a “common sense approach” when evaluating plans. When no licensed or accredited vocational rehabilitation agency or individual has been involved in developing a PASS, field office staff can seek outside input from a vocational rehabilitation or other employment services agency, or use their “judgment and intuition” to determine PASS feasibility. As a result, SSA field office staff can make highly individual decisions about plans based on their personal beliefs.
For example, one field office staff member told us she denied a plan with the goal of writing a cookbook because she believed there were already too many cookbooks on the market. Furthermore, if they seek guidance outside of SSA, field office staff are limited to vocational agencies willing to provide evaluations of the feasibility of proposed work goals at no cost, unless the program applicant agrees to include the cost of the evaluation on his or her PASS. These agencies, however, are not required to advise SSA, and we found that some public agencies did not provide assistance to SSA field offices for individuals who were not already their clients. Field office staff also contact regional or central office work incentive staff with questions about feasibility; however, the staff in the regional and central offices are not necessarily trained in vocational rehabilitation. The field office staff’s lack of vocational rehabilitation knowledge and the limited guidance they receive also limit their ability to evaluate whether charges for particular items in PASS budgets are reasonable or necessary. Staff in at least eight field offices said they had no basis on which to judge whether a proposed item was reasonably priced. POMS guidance directs staff to be “as pragmatic as possible” when evaluating questionable expenses or determining the relationship of an item or service to the work goal. We found that some staff used personal criteria for evaluating the appropriateness of certain costs, especially automobiles and computers. For example, one field staff member told us she denied a plan that included a $22,000 automobile because she herself could not afford a car that expensive, even with a job, so “why should someone receiving federal assistance be allowed to buy one?” Some PASS preparers reported that they were frustrated with the inconsistent and subjective criteria they believed staff used when assessing the appropriateness of expenses. Items and services purchased with PASS funds do not need to be directly related to the claimant’s disability; they need only be necessary to achieve the work goal. Field office staff provided examples of expenses they considered excessive or unnecessary, including new cars and doctoral degrees, that were submitted on PASSes and permissible under PASS guidelines. In addition, some field office staff said they have difficulty evaluating specialized or highly technical equipment with which they are not familiar. SSA headquarters has also not determined whether only new cost items related to the goal should be approved or whether the PASS can be used to pay for existing expenses, such as car loan payments, deemed necessary to achieve the goal. SSA field office staff in many of the offices we visited reported that time spent administering the PASS program is not adequately reflected in SSA’s work credit system. Field office staff estimated that thoroughly evaluating a PASS, including seeking additional information about feasibility, can take 8 hours or more. The work credit for reviewing PASSes submitted by individuals already entitled to SSI, which is grouped with other tasks including recording wages earned, was 6.8 minutes in June 1995. Furthermore, PASS denials receive no credit at all, yet some field office staff reported that plan denials take even longer than approvals because of the amount of evidence needed to justify the denial.
Requests from two regional offices that SSA create a separate work credit category just for the PASS program were turned down by SSA headquarters. In our interviews, few field office staff identified the PASS program as a high-priority workload. Many reported that initial claims and other tasks necessary to secure or start benefits for claimants were a higher priority than PASS. Monitoring PASS program compliance is also considered a time-consuming effort by field office staff, especially when participants have not kept adequate records. POMS guidance on reviewing PASSes and monitoring compliance includes a number of steps that could take several hours to complete. More than half the PASS preparers we spoke with reported significant concern about the length of time required by field offices to approve plans. POMS directs field office staff to review PASSes “as soon as possible.” SSA does not track the length of time PASS program applicants must wait for a decision on their plan. We were not able to develop an estimate because submission and approval dates were frequently not recorded on the plans we reviewed. However, we saw and heard examples of plans reviewed and approved within a day, as well as delays of several months before a response was given on a submitted PASS. Delays in approval could result in lost employment and schooling opportunities for applicants, and the slow approval process serves as a disincentive to PASS program participation, according to PASS preparers. For example, in a demonstration project conducted in 1989 and 1990 by the Association for Retarded Citizens of the United States to increase PASS use and evaluate PASS effectiveness in funding supported employment, the two participating agencies ultimately withdrew as a result of lengthy delays in approving PASSes. On the other hand, quick reviews may not effectively screen out applicants with infeasible PASSes. Although the PASS program has been in existence for more than 20 years, SSA has no published data on the PASS program’s effect on employment and benefits, nor has the agency maintained the type of management data necessary for measuring program impact. We attempted to evaluate the PASS program’s impact on employment among former participants. However, we were unable to adequately determine the long-term effect of PASS program participation for a number of reasons, including the fact that most PASS use has been very recent and that SSA has not maintained comprehensive historical data on PASS program participants. Furthermore, participation in the PASS program is based on self-selection, and therefore PASS program participants may well be different from other SSI and DI beneficiaries in ways that could affect their future employment and earnings, independent of the program. On the basis of the limited data available, we examined the current status of former participants using selected economic measures of potential success—increasing earnings, reducing SSI benefits, and leaving the disability rolls. Among the SSI population, former PASS program participants were more likely than other SSI recipients to have sufficient earnings to reduce their SSI benefits, but few left the rolls as a result of their earnings. PASS participants were also likely to have been working before using the work incentive; nearly half had higher earnings after their plans ended than before they began. Success in returning DI beneficiaries to work was negligible.
Generally, earnings among concurrent beneficiaries who were former PASS program participants were not high enough to terminate DI benefits. While many concurrent beneficiaries we reviewed left the SSI program after their PASS ended, nearly all were still receiving their DI benefits in May 1995. We estimated the cost of additional cash payments to PASS participants in January 1995 at $2.6 million, or about $30 million annually. The lack of clear outcome measures prevents SSA from evaluating the PASS program’s success in helping recipients return to work. Consequently, the agency does not have data on the PASS program’s effect on employment during the more than two decades since its inception and thus cannot make informed decisions about the program or changes to it, such as developing appropriate criteria for setting time limits on PASSes, determining the best candidates to target, or improving program implementation. While SSA’s current database for the SSI program, the SSR, includes some characteristics of PASS program participants, it does not systematically track PASS denials, PASSes that exclude only resources and not income, or the occupational objectives and budgets listed on plans. Therefore, SSA also does not know, for example, the acceptance and denial rates for PASS program applicants or the total number of applicants or active plans. SSA officials responsible for the PASS program are considering changes to the way program data are tracked, which would capture some of this information, including denials and resource-only exclusions. Former PASS participants who remained in the SSI program were more likely to have earnings that reduced their SSI cash payments than were other SSI recipients. We examined December 1994 earnings and benefit data for 4,751 former PASS program participants whose plans had ended by that date. We excluded from our analysis concurrent former PASS participants who stopped receiving SSI payments after their PASS exclusion ended. More than one-third of the 4,751 were reporting earnings to the SSI program, and the majority of these earned enough income to at least partially reduce their SSI payment amount. Half of the wage earners still receiving benefits reported wages of $360 or more for the month, which would result in a reduction in monthly benefits of about $147. About one-fifth of the 4,751 former PASS participants were earning more than $500, SSA’s measure of substantial gainful activity for most claimants. SSI benefits and reported earnings received before program participation were not available for all of the former PASS holders, and therefore we could not determine whether their earnings had increased or benefits had decreased after participating in the PASS program. Similarly, we could not estimate the cost of PASS program participation for this group of 4,751 former participants. In contrast to the experience of former PASS participants, only about 8 percent of other working-age disabled SSI recipients reported any earnings during December 1994, averaging $300 per person. Similarly, while 14 percent of the 4,751 former PASS program participants had earnings high enough to end their cash benefits altogether and were receiving Medicaid benefits only, only about 1 percent of other working-age disabled SSI recipients earned this much. Very few former PASS program participants left the SSI rolls in the relatively short time frames for which data were available.
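The roughly $147 reduction cited above is consistent with the standard earned income exclusions when the $20 general exclusion is absorbed by unearned income, as it typically would be for beneficiaries with DI or other unearned income; the following is a back-of-the-envelope check under that assumption, not SSA’s published computation.

```python
# Check of the ~$147 benefit reduction for $360 in monthly wages, assuming the
# $20 general exclusion is absorbed by unearned income, so that only the $65
# earned income exclusion and the 50-percent exclusion apply to the wages.
reduction = (360 - 65) / 2
print(reduction)  # 147.5, matching the "about $147" reduction in the text
```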
Among the SSI-only former program participants we reviewed, about 160 had stopped receiving SSI or Medicaid by December 1994 for an income-related reason. They represented only about 2.5 percent of the 6,582 PASS program participants from 1991 through 1993, including those concurrent beneficiaries who left SSI after their PASSes ended because their unearned income was too high. This low rate of leaving the SSI rolls through work is consistent with the overall experience of both the SSI and DI programs, which have traditionally seen low rates of return to work. However, even very limited workforce participation by SSI recipients can accrue significant federal savings. For example, the earnings reported by all working recipients in January 1995, including former PASS program participants, reduced SSI cash payments by $18.6 million in that month alone. We found that many PASS program participants had already worked while on the rolls. Because SSI preprogram benefit and earnings data were not available for most PASS program participants, we examined the annual earnings data reported to SSA for all workers, regardless of disability status, to determine whether PASS program participants had been working before starting their PASSes. About half of both current and former PASS program participants we analyzed had earnings the year before their plans began, averaging between $2,000 and $3,000. Although we did not determine whether earnings increased as a result of the PASS, former participants’ annual earnings were frequently higher in the year their plans ended. We reviewed pre- and post-PASS annual earnings data for 3,659 recipients whose plans had ended by December 1993. Approximately 55 percent had earnings in the year before their PASS, and about 60 percent had earnings in 1993. Nearly half were earning more in 1993 than they did before they had a PASS, including those with no prior earnings. The median increase in annual earnings for those with any earnings was more than $3,000. In addition, about one-fifth of the entire group had no earnings the year before their PASS started but were working in 1993. PASS program participation had almost no financial impact on the DI program. Given the high benefit paid to many DI beneficiaries—$660 a month, on average—a successful PASS program offers a chance for significant DI trust fund savings if participants leave the DI rolls, even with the cost of additional SSI benefits. However, while approximately 40 percent of concurrent beneficiaries left the SSI rolls after their PASS exclusions ended, about 93 percent were still receiving their DI benefits in the following year. Among the remaining 7 percent, many had DI benefits terminated for reasons other than earnings, such as death. Furthermore, unlike SSI payments, DI benefits are not offset by monthly earnings below $500, and previous GAO analysis has shown that earnings above this amount may not be economically rational for DI beneficiaries who face the loss of substantial cash and medical benefits. For example, a DI beneficiary receiving $700 or $800 a month in cash benefits, plus Medicare benefits, could lose both for earning as little as $501 a month (see the sketch below). While the PASS work incentive provision is designed to be flexible and individual, oversight from SSA headquarters has been lax and has required little accountability, leaving the PASS program open to abuse.
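The benefit cliff referred to above can be made concrete with a minimal sketch. This deliberately simplified model ignores DI transition rules such as the trial work period; the $700 benefit and the $500 SGA level are simply the figures cited in this report.

```python
# Simplified sketch of the DI "cash cliff" described above: DI benefits are
# not reduced gradually as earnings rise but end entirely once earnings
# exceed the $500 substantial gainful activity (SGA) level. This ignores
# the trial work period and other transition rules.

SGA_LEVEL = 500  # monthly earnings threshold cited in the report

def monthly_cash_income(wages, di_benefit=700):
    """Total monthly cash income for a hypothetical DI beneficiary."""
    return wages + (di_benefit if wages <= SGA_LEVEL else 0)

print(monthly_cash_income(500))  # 1200: wages plus the full DI benefit
print(monthly_cash_income(501))  # 501:  one more dollar of wages forfeits the $700 benefit
```

Under this arithmetic, earning one additional dollar leaves the beneficiary $699 a month worse off in cash alone, before counting the loss of Medicare, which is the sense in which such earnings are not economically rational.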
Although SSA has established few required elements for PASS plans, we found evidence of noncompliance even with these minimal standards. In addition, we found that SSA’s guidance to its field staff on administering the PASS program, as discussed in chapter 2, also lacks several internal control measures necessary to adequately ensure that expenditures are appropriate and beneficial. Specifically, SSA has no standard application form or effective time limits for PASSes, no penalties for willful noncompliance with the PASS program, and no standards for third-party PASS preparers. These internal control weaknesses leave SSA with little or no reasonable assurance that the PASS program is being used appropriately. While SSA headquarters is aware of many of these inadequacies and has established a work group to address some of them, efforts to date are not sufficient to protect taxpayer dollars and restore public confidence. SSA regulations require only minimal information to be provided in PASSes, yet we found that even this information was not always present. According to regulations, the PASS must be in writing and (1) state a specific occupational goal, (2) disclose the amount of money the applicant has and will receive, (3) specify how this money will be spent to attain his or her occupational goal, and (4) explain how the PASS money will be kept separate from the applicant’s other funds. SSA’s PASS program guidance also states that PASSes should be “as descriptive as possible about the occupational objective.” In addition, the occupational objective must be a job or profession, or increased hours or responsibilities at a job already held by the applicant; completion of education or training programs or the purchase of transportation are not occupational objectives, although they may be a means to attain an objective. Field office staff sometimes approve PASSes even when these minimum standards are not met. Oversight of field office approvals and denials varies among the 10 regional offices; not all regions review the decisions made by their field offices. Our analysis of 366 approved PASSes found at least 20 that did not have acceptable occupational goals as defined in the most recent POMS. These included 11 approved plans with objectives solely to complete education or training programs, 4 plans with objectives to purchase automobiles and/or maintain car payments, and 5 plans in which no job or profession was identified. One of these plans, for example, stated that a PASS was necessary to “obtain a second job.” Our review of PASSes also revealed plans that did not state the amount of money recipients had and would receive or how the PASS money would be kept separate from the applicant’s other funds. For example, 27 of the PASSes we reviewed did not show what money and other resources the PASS program applicant had or would receive to use in attaining the occupational objective. Without this information, SSA field office staff cannot determine whether the occupational goal is economically feasible. Further, at least 75—or about 20 percent—of the PASSes did not describe how the applicant would keep PASS income and resources separate from other assets. If PASS funds are commingled with other benefits in the same account, it is difficult for SSA field office staff to measure compliance and ensure that PASS funds are spent only for approved items.
SSA’s internal controls over the PASS program provide only limited assurance that program moneys are being used appropriately and that taxpayer dollars are being spent judiciously. SSA regional and field office staff raised concerns about the integrity of the PASS program, noting that in its current state the program may be vulnerable to abuse. While not all of the internal control weaknesses we found have a proven adverse impact, each contributes to the potential for misuse. Internal control weaknesses we noted included the lack of a standard application form, no effective limits on the length of time a PASS may be in effect, few penalties for willful noncompliance with the PASS program, infrequent and nonstandardized compliance reviews, and inadequate controls over third-party PASS preparers. While SSA headquarters staff are aware of field office concerns, their efforts to date, including an internal work group on the PASS program and a new version of the POMS, have not sufficiently addressed them. SSA regulations do not require a standard PASS program application form, and as a result, PASSes vary in specificity and completeness both within and across SSA field offices nationwide. Further, some SSA field office staff told us that many of the details they would find helpful for evaluating PASSes were often not provided on plans. SSA has developed a sample form, included in a pamphlet for SSI recipients, that captures the minimum information required to process the application. (See app. I.) However, applicants are not required to use this form. PASSes we reviewed ranged from a few sentences to very detailed business plans. For example, one PASS program applicant submitted a nine-page plan to open a business providing on-site fish aquarium maintenance for offices and other business establishments. Another presented an application to become a self-employed seamstress, which included an assessment of the market demand for her services and a detailed description of the need for proposed purchases. We also saw PASSes consisting of a few words jotted on notebook paper, as well as plans lacking any specific occupational goal or discussion of the need for proposed purchases. Current limits on the length of PASSes are being reviewed by SSA and may soon be removed in response to a congressional mandate that SSA develop individualized criteria for determining the time limit of a plan. Program regulations provide that PASSes be approved for an initial period of up to 18 months. Changes and extensions must be approved by SSA, with a 36-month maximum for general occupational goals and a 48-month maximum for PASSes that require lengthy educational or training programs. These time limits have been successfully challenged in one federal district court as “unreasonable” and without consideration of individual needs, and some PASS advocates have actively lobbied to have them removed. In 1992, the SSI Modernization Project recommended that the SSA Commissioner remove the regulatory limit on PASS length, and an internal SSA memo recommended allowing extensions beyond the existing limits, noting that time limits could be viewed as an additional barrier that keeps people with disabilities from achieving self-support. These recommendations were never implemented. Instead, the Social Security Independence and Program Improvements Act of 1994 (P.L.
103-296) required that SSA reexamine the criteria for limits on the length of time PASSes may be active, taking into account reasonable individual needs. In an emergency teletype dated January 1995, SSA instructed field offices to grant 6-month extensions to PASS program participants who have exceeded the 36- or 48-month time limits, as long as the individual is in compliance and still needs to exclude income or resources to achieve his or her goal. Regulations currently under development will likely remove absolute limits on the length of time a PASS may be in effect, according to SSA officials. Because the PASS program is operating without clearly defined objectives and other internal controls, removing all time limits for PASS completion would likely exacerbate the program’s vulnerability to misuse. Most SSA regional and field office staff we interviewed who administer the PASS program do not want all PASS time limits eliminated. Some staff told us they believed that without any time limits PASS program participants may abuse the provision, staying out of work indefinitely and continuing to receive benefits such as Medicaid as well as increased SSI payments; others said that field office staff lack the background to make assessments of reasonable lengths of time for proposed PASSes in light of the applicant’s disability. No limit is placed on the number of approved PASSes an individual can have, provided that each one involves a different occupational objective. While only one PASS can be in effect at a time, a participant could have an unlimited number of subsequent plans if the first is unsuccessful. Individuals can even have a new plan if they were found to be deliberately noncompliant on a previous plan. Although such cases were few, we saw instances in which individuals were on their second or third PASS. For example, we reviewed one case in which a recipient had previous PASSes to be a word processor and receive a vocational evaluation, and was now pursuing a third goal of being a florist. The lack of adequate guidance on acceptable PASS expenditures means that SSA cannot provide reasonable assurance that PASS funds are being spent appropriately. While POMS states that PASSes must explain the necessity of each proposed purchase in the PASS budget and that these items must be reasonably priced (moderate or fair and not extreme or excessive within the geographic location), it offers insufficient specific guidance on allowable PASS expenditures, according to field office staff. The current POMS provides examples of expenditures, such as tuition, books, uniforms, equipment, child care, and attendant care, that are acceptable if they are found necessary and reasonably priced, but it places no absolute limits on the types of items that can be approved, nor on their costs. For example, the POMS specifies only that a luxury or sports car would “rarely” be appropriate, but sets no limit on the amount that can be spent on an automobile. Furthermore, the standards for justifying necessity or cost have not been specified, leaving them open to individual interpretation. Some SSA field office staff cited examples of approved PASSes that they believed did not contain evidence that the proposed expenditure items were reasonably priced and/or in direct relation to the proposed occupational goal. Not surprisingly, approved and denied expenditures in the plans we examined were inconsistent.
For example, the proposed purchase of a $13,000 automobile in one plan was denied on the basis that the applicant did not provide sufficient evidence to justify the car’s cost; in another plan, the purchase of an automobile was denied altogether because the plan did not specify why the purchase of any automobile was necessary to achieve the occupational goal. In contrast, we also saw plans in which similarly expensive vehicles were approved, or in which less justification for purchasing an automobile was provided. Some field office staff also told us that some individuals who helped prepare PASSes were occasionally unwilling to provide documentation to support proposed expenditures. The periodic reviews SSA requires to determine whether PASS program participants are complying with their plans are infrequent and nonstandardized. These reviews are intended to ensure adherence to the spending plan and to determine whether the occupational objective has been reached or whether the participant is meeting plan milestones. Recent changes to the POMS require compliance reviews at least every 12 months, compared with the previous requirement of at least every 18 months, and require reviews every 6 months under certain circumstances. SSA field staff in most offices we visited, as well as several third-party PASS preparers, told us that frequent reviews would help prevent noncompliance and the resulting overpayments of additional SSI benefits to PASS program participants who do not follow their plans. In addition, staff in at least four offices told us they review plans more frequently than the POMS guidelines recommend if they have concerns about the feasibility of a plan. One field office staff member, for example, said that she performs early compliance reviews for plans that include high expenditures, such as a $20,000 car. SSA headquarters, however, does not have specific requirements about how compliance reviews should be done. The POMS directs field office staff to conduct the review in a manner convenient to the individual, either in person, by mail, or over the telephone. We found that all three methods were used. Ensuring compliance consists of reconciling actual expenditures with funds set aside under a plan and determining whether the program participant has reached his or her goal. Field office staff cited examples of PASS program participants submitting grocery bags or shoe boxes of receipts to be reconciled. In addition, according to SSA field office staff, many PASS program participants do not understand how to account for their funds, and some have not kept records of purchases. Little specific information about compliance reviews is included in the notification letter sent to individuals when their PASS is approved. Furthermore, some third-party PASS preparers told us that proper accounting of expenditures can be a difficult task for some program participants. Guidance to field office staff does not distinguish between willful noncompliance and noncompliance for other reasons. While the POMS specifies that a series of unsuccessful plans “may be grounds” for questioning the feasibility of a new PASS, there are no prohibitions on subsequent PASSes for individuals who have abandoned or willfully not complied with previous ones.
Materials from one organization that prepares PASSes, for example, tell prospective clients that “there is no penalty for not obtaining or maintaining employment after participating in PASS.” POMS directs SSA field office staff to ask the participant for evidence of PASS expenditures. If the participant provides none, the field office staff must obtain authorization from the participant to contact appropriate third parties to verify savings and purchases. Therefore, SSA field office staff must rely on the cooperation of the program participant and the third party to obtain needed evidence. SSA headquarters guidance is unclear about the actions field office staff should take if a PASS program participant is noncompliant with his or her plan. Although SSA regulations require all changes to PASSes to receive prior SSA approval, if a compliance review finds a PASS program participant out of compliance with the terms of the PASS, the POMS gives the participant the opportunity to amend the plan and return to compliance. Specifically, if the PASS program participant has not met the occupational goal or is not in compliance, the POMS states that field office staff should amend the PASS retroactively to fit the participant’s circumstances if this will result in compliance. For example, if an individual proposes to use a PASS to attend college as an engineering student and subsequently changes majors to journalism, the plan can be amended to reflect this change if this would make the program participant compliant with his or her amended plan. Similarly, if a PASS budget includes the purchase of a used car and the participant leases a new car instead, the plan can also be changed retroactively. SSA’s POMS also does not encourage terminating PASSes for recipient noncompliance. If the PASS cannot be amended or amendment would not result in compliance, POMS recommends suspending the PASS. Terminating a PASS is recommended only as a last option. If they are found noncompliant, program participants may be required to repay the additional benefits they received during their PASS. However, some SSA field office staff we interviewed were unaware that they could collect these overpayments from individuals who had not complied with their plans. The amount of overpaid funds that SSA can withhold from a subsequent SSI check is limited to 10 percent of the total benefit, unless the claimant agrees to a higher amount. SSA guidance states that anyone can help prepare a PASS, including a vocational rehabilitation counselor, an organization that helps people with disabilities, an employer, a friend or relative, or SSA. Of the 380 PASSes we reviewed, more than half were prepared by third parties. Although SSA staff will help disability benefit recipients fill out a plan free of charge, field office staff infrequently prepare PASSes. Indeed, SSA field office staff prepared only 14 of the 380 PASSes we reviewed. Recognizing the need for outside help, SSA field offices are to maintain a referral list of organizations that can assist people with writing their PASS, whether for a fee or at no cost, although SSA does not endorse any organization. Although SSA has determined that anyone can help prepare a PASS, it has not established minimum standards for third-party PASS preparation. As a result, the services provided by third-party PASS preparers vary greatly.
Some preparers we interviewed provide extensive assistance and resources to their clients to determine their goals and needed expenses, including vocational evaluations of their abilities and needs; others provide only guidance to clients, directing them to do their own research. In addition, the amount of vocational rehabilitation expertise and experience varied considerably among the PASS preparers we interviewed. Some were certified vocational rehabilitation professionals or had graduate training; others had no special experience or training in assisting people with disabilities with obtaining or maintaining work. In several states, including Iowa, Oregon, Maine, and Colorado, a number of SSI and DI recipients have in the last several years received training, generally funded by a PASS, on preparing PASSes for a fee.

The fees charged by preparers also range considerably. For example, the median fee charged among the PASSes we reviewed was $200, while the highest fee for PASS preparation and monitoring was $832. Some PASS preparers charge fees for PASS preparation as well as additional fees for other services, such as PASS amendments and monitoring. Other PASS preparers charge no fees but provide some or all of the services mentioned. The POMS clearly states that fees are acceptable PASS expenditure items, but it is not clear whether SSA anticipated the size of the fees PASS preparers would charge in some cases or whether SSA intended to allow applicants to claim the total amount as a planned PASS expenditure that would be offset by additional SSI payments. According to the current POMS, PASS preparation fees should be evaluated on the basis of the preparer’s involvement in formulating the plan, including the type of work done and the number of hours. However, the absence of preparer standards and qualifications and the lack of nationwide SSA guidelines on appropriate charges make this guidance difficult for SSA field staff to administer. At least one SSA region issued its own guidance on acceptable fee amounts, recommending limits of $25 an hour and a total of no more than $500 for an individual PASS. Other regions approve fees in excess of these recommended limits.

SSA lacks a uniform policy toward third-party PASS preparers, and conflict exists between some preparers and SSA field staff. While some PASS preparers encourage good relationships with SSA, others have adversarial relations with local field offices. The lack of written objectives and a clear definition of success creates the potential for disagreement and conflict regarding PASS approvals and denials. In at least six offices we visited, we found that this conflict was disruptive to field office operations. Most of the PASS preparers we interviewed expressed frustration with the inconsistencies in PASS approvals and denials across the field offices where they submit plans. At the same time, some SSA field office staff told us they believed that some third-party preparers wrote “ridiculous” PASSes, unlikely to result in economic self-support, or simply designed to maximize an individual’s purchases or the preparer’s earnings.

Field office staff and PASS preparers we spoke with were sometimes divided as to whether individuals who request a PASS should have a high probability of achieving the occupational goal or whether achieving it should merely be theoretically possible. Some PASS preparers we spoke with believed that nearly all PASSes are feasible, citing the requirement that the PASS program be liberally interpreted.
One PASS preparer told us, for example, that the PASS program is an individual’s opportunity “to pursue their dream.” Another said he encourages his clients to “shoot for the stars” when developing a PASS. Staff in at least nine SSA field offices we visited said that they felt pressured by third-party PASS preparers to approve PASSes. A few third-party preparers said they use tactics such as repeated calls to SSA field offices, involvement of advocacy groups, publicity, and letters to local congressional representatives to pressure field office staff to approve plans. Tension between PASS preparers and SSA field office staff may also arise because PASS preparers sometimes are paid their fee only if a PASS is approved.

SSA has no internal controls to prevent conflicts of interest on the part of third-party preparers. For example, some PASS preparers also provide vocational and rehabilitation services for people with disabilities and may write PASSes to pay for the services they themselves provide. We saw plans, for example, written to fund job coaching or independent living services provided by the agency developing the PASS.

SSA is aware of and is trying to address some, but not all, of the internal control weaknesses present in the PASS program. Many are detailed in a 1994 internal SSA PASS program strategy paper. In addition, SSA issued new POMS guidance on the PASS program in January 1995 in an attempt to give field office staff better control over the program. Nonetheless, field office staff in nearly every office we visited reported that SSA’s guidance on administering the PASS program still does not provide them with enough specifics for approving and denying plans. In September 1994, SSA assembled a work group to address various PASS program issues, including the role of third-party PASS preparers and field office staff’s evaluation of plans. This group reconvened in August 1995 to review data from an internal SSA study of the PASS program conducted by the Office of Program Integrity Review and to address several related policy issues stemming from that study. SSA officials responsible for the PASS program told us they are currently considering a number of regulatory and policy options developed by this work group. In the near future, SSA plans to disseminate materials to educate PASS program participants and third-party preparers about PASS rules and responsibilities, but this will not address the issues of third-party preparer qualifications, services provided, fees charged, or conflicts of interest.

SSA has done a poor job of implementing and managing the PASS program. The PASS program is small, comprising less than three-tenths of 1 percent of the working-age disabled SSI population; however, SSA pays about $30 million in additional cash benefits to PASS program participants annually, not including medical and other benefits. Moreover, the program’s large potential for future growth makes it important to address its serious management and internal control weaknesses now. As a result of the PASS program’s design, more than 40 percent of all program participants had income, primarily DI benefits, that exceeded SSI standards and used their PASS to gain eligibility for federal SSI payments. These individuals maximize their federal benefits while participating in the PASS program, but almost none leave the DI rolls as a result of work.
In addition, SSA has allowed PASS preparation and monitoring fees paid to third parties to be disregarded when calculating benefit amounts and has neither placed limits on the amount of such fees nor set standards for the services those fees should cover. Administrative action on these issues by SSA may well result in legal challenge because of SSA’s long-standing practices of allowing DI beneficiaries to use the PASS program as a means of gaining SSI eligibility and of allowing fees to third-party preparers as PASS expenses.

SSA has not translated the Congress’ broad goals for the PASS work incentive into a coherent program design, and it has not provided adequate criteria or guidance to the field offices charged with administering the program. In following the congressional directive that the work incentive be “liberally construed,” SSA has placed few limits on the program, such as stringent criteria for assessing whether an individual’s PASS work goal is feasible and whether proposed expenses are appropriate. SSA has also not developed outcome measures to evaluate the program’s effect on participants’ return to work. At the same time, the field office staff who administer the PASS program receive inadequate training, information, and credit for this task. Finally, SSA has not addressed internal control weaknesses that have left the program vulnerable to abuse and have undermined program integrity.

The PASS program lacks even minimal controls to provide reasonable assurance that additional funds are being spent appropriately and to safeguard against fraud and abuse. Currently, the lack of a standardized application form or compliance review process results in inconsistent and inequitable implementation, and insufficient guidelines on expenditures and the lack of penalties for willful noncompliance make the PASS program a potential target for abuse. Further, while third-party PASS preparers may play an important role in assisting disability benefit recipients, some of the services provided and fees received could create conflicts of interest and additional potential for abuse. Through its recently assembled work group, SSA is planning to address some of the design and internal control weaknesses we identified, but it is too early to determine whether SSA’s actions will be effective.

The Congress may wish to consider whether individuals otherwise financially ineligible for SSI because their DI benefits or other income exceeds the eligibility threshold should continue to gain eligibility for SSI through the PASS program. SSA needs to make major improvements in the management of the PASS program to achieve more consistent administration, better support field staff, collect data sufficient to control and evaluate the program, and provide internal controls against program waste and abuse.
We recommend that the Commissioner take the following actions or, if necessary, seek legislation to do the following:

- clarify the goals of the PASS program;
- decide whether fees paid to third parties should continue to be disregarded when calculating benefit payment amounts and whether the amount of disregarded fees should be capped;
- standardize the PASS program, including the application, reporting guidelines on expenditures for compliance reviews, and informational and educational materials for PASS preparers;
- improve support to field staff, including enhancing their ability to evaluate the feasibility of proposed work goals and requiring PASSes to incorporate additional data relevant to determining their feasibility, including the applicant’s disability, previous work experience, if any, and education;
- gather additional management data on PASS program participation and impact, and use these data to evaluate the impact of PASS program participation on employment; and
- strengthen internal controls by establishing more specific guidelines on acceptable PASS expenditures, developing penalties for willful noncompliance with the PASS program (including a determination of whether subsequent plans are permissible), examining the role of third-party preparers, including their potential financial conflicts of interest, and considering the strength of existing safeguards against abuse when determining the appropriate limits on the length and number of PASSes individual participants may have.

SSA generally concurred with our recommendations and cited in its comments actions it intends to take to address them, including such actions as developing a standardized application form and providing improved support to field office staff. We deleted a recommendation in our draft report regarding the work credit field office staff receive for PASS tasks because of the progress SSA has made in this area. SSA commented that it had limited statutory authority to address all of our recommendations. As we stated in our recommendations, SSA may find it necessary to seek legislation to implement program changes. More details on SSA’s specific proposed actions are included in appendix VI, as well as a full copy of SSA’s comments and our response.
Pursuant to a legislative requirement, GAO reviewed the Social Security Administration's (SSA) Plan for Achieving Self-Support (PASS) Program, focusing on: (1) SSA management of the program and its impact on employment of the disabled; and (2) the program's vulnerability to abuse. GAO found that: (1) SSA has poorly implemented and managed the PASS program and has not given its field office staff adequate program guidance, support, or training; (2) the diversity of individual plans' goals and expenditures reflects the diversity of the disability population; (3) the PASS program has grown over the past 5 years as program awareness has increased, and further growth is predicted because millions of Supplemental Security Income (SSI) and Disability Insurance (DI) beneficiaries are eligible; (4) about 40 percent of PASS program participants, who are mainly DI beneficiaries, would not be eligible for SSI benefits if some of their income was not excluded under PASS; (5) the impact of the PASS program cannot be accurately determined because SSA does not have basic data on program participation and has not defined clear program goals; (6) former PASS participants are more likely than other SSI beneficiaries to have earnings that reduce their SSI benefits, but few have left the SSI and DI rolls; (7) PASS participation increases SSI benefit outlays by about $30 million annually; (8) the PASS program is vulnerable to abuse because internal controls are weak, guidelines are vague, applications are not uniform, there is no limit on individuals' participation, compliance monitoring is infrequent and nonstandardized, and few penalties exist for willful noncompliance; and (9) SSA has not addressed the potential financial conflicts of interest caused by professional PASS preparers.
RHS, a component of USDA’s RD mission area, is responsible for rural housing and community facilities programs. Under the single-family guarantee program, RHS provides lenders guarantees on residential mortgage loans to households with low to moderate incomes in areas statutorily designated as rural. The guarantees cover 30-year fixed-rate loans made to purchase a home or refinance an existing RHS direct or guaranteed loan. The guarantee program requires no down payment from borrowers and currently charges a 2.75 percent up-front guarantee fee (which borrowers may finance in the approved loan amount) and a 0.5 percent annual guarantee fee. The guarantee provides coverage for eligible losses of up to 90 percent of the original loan balance, including unpaid principal and interest, principal and interest on USDA-approved advances for protection and preservation of the property, and the costs associated with selling a foreclosed property.

RHS-approved lenders and servicers originate, underwrite, and service the mortgage loans that RHS guarantees. According to RHS, in fiscal year 2014, about 1,700 lenders originated loans guaranteed by RHS. A borrower (home buyer) applies for a guaranteed loan through an RHS-approved lender. Since 2006, RHS has provided an automated underwriting system for lenders to submit loan information and determine borrower eligibility. RHS staff are to review the loan information and, if it meets RHS’s requirements, issue a conditional commitment to guarantee the loan. Upon receiving and satisfying the commitment conditions, the lender closes the loan and submits the closing package to RHS. After reviewing the closing package, RHS issues the loan guarantee. A lender may service its own loans—including collecting monthly mortgage payments, maintaining escrow accounts for property taxes and hazard insurance, and conducting loss mitigation activities—or pay a fee for another organization to service its loans. Servicers also are responsible for liquidating foreclosed properties.

In December 2014, new program regulations went into effect that expanded the pool of lenders eligible to participate in the guarantee program. With the regulation change, any lender supervised and regulated by the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, the Federal Reserve System, or the Federal Housing Finance Board became eligible. According to RHS, the regulation enables many small community banks and credit unions that were ineligible prior to the change to participate in the guarantee program.

RD offices at the national, state, and local levels play important roles in the guarantee program. The Single Family Housing Guaranteed Loan Division in Washington, D.C., is responsible for developing, implementing, and monitoring program policy and procedures. The division’s functions include legislative and budget planning; management reporting; issuance of regulatory and policy directives; portfolio monitoring; and approval, training, and review of nationwide lenders and servicers. RD state and local offices conduct program operations within their geographic jurisdictions. Their responsibilities include approving lenders that operate in a single state to participate in the program and monitoring the lenders’ underwriting and servicing of guaranteed loans.
In addition, staff in state and local offices are responsible for reviewing loan applications and closing documentation and issuing conditional and final loan guarantee commitments. They also are to provide loan servicing guidance to lenders and servicers and train lenders on program requirements. Other offices play key roles in administering and overseeing the guarantee program, including, but not limited to, the following:

- RHS’s Centralized Servicing Center (CSC) in St. Louis, Missouri, reviews and approves lender loss-mitigation efforts and lender claims, among other functions.
- RD’s Office of the Chief Financial Officer calculates credit subsidy estimates and reestimates; oversees periodic management control reviews and state internal reviews of RD programs, including the guarantee program; and reviews documentation related to implementation of USDA Office of the Inspector General audit recommendations and forwards it to USDA’s Office of the Chief Financial Officer, which determines whether the recommendation can be closed.

Under FCRA, USDA and other federal agencies must estimate the credit subsidy costs of their direct loan and loan guarantee programs and include the costs to the government in their annual budgets. Agencies annually estimate credit subsidy costs for each program by cohort—the loans agencies commit to insure or guarantee in a given fiscal year. The credit subsidy cost is equal to the net present value of estimated lifetime cash flows to and from the government, excluding administrative costs. For a mortgage guarantee program, cash inflows consist primarily of premiums received from borrowers, and cash outflows consist mostly of claim payments to lenders. Credit programs have a positive subsidy cost when the present value of estimated payments by the government exceeds the present value of estimated premiums and other funds received by the government (collections). When credit programs have a positive subsidy cost, they require appropriations. Conversely, negative subsidy programs are those in which the present value of estimated collections is expected to exceed the present value of estimated payments. FCRA requires that agencies have budget authority to cover credit subsidy costs before entering into credit transactions.

To estimate their subsidy costs for annual appropriation requests, credit agencies estimate the future performance of direct loans and loan guarantees. Agencies are responsible for accumulating relevant, sufficient, and reliable data on which to base these estimates. To estimate future credit performance, agencies generally have models that include assumptions about defaults, prepayments, recoveries, and the timing of these events and that are based on the nature of their credit programs. As needed, agencies also incorporate economic assumptions provided by the President into credit subsidy calculations. Further, OMB requires agencies to discount cash flows using projected Treasury interest rates that are consistent with the economic assumptions underlying the President’s budget. The discount rates are used to derive the present value of future cash flows, which, in turn, indicates the credit subsidy cost. The cost can be expressed as a rate. For example, if an agency commits to guarantee loans totaling $1 million and has estimated that the present value of cash outflows will exceed the present value of cash inflows by $15,000, the estimated credit subsidy rate is 1.5 percent.
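To make the present-value arithmetic concrete, the following is a minimal sketch of the calculation, assuming a single flat 3 percent discount rate and an invented cash flow pattern for a hypothetical $1 million cohort; in practice, OMB’s credit subsidy calculator applies year-specific projected Treasury rates rather than one flat rate.

```python
def subsidy_rate(net_cash_flows, discount_rate, commitments):
    """Net present value of government cash flows (outflows minus inflows),
    expressed as a percentage of the dollar volume of guarantee commitments.
    A positive rate means the cohort is expected to cost the government money."""
    npv = sum(cf / (1 + discount_rate) ** t
              for t, cf in enumerate(net_cash_flows, start=1))
    return 100 * npv / commitments

# Hypothetical net cash flows by year for a $1 million cohort: net fee income
# (a negative net outflow) early in the life cycle, claim payments later.
flows = [-20_000, 5_000, 12_000, 15_000, 8_000]
print(f"{subsidy_rate(flows, 0.03, 1_000_000):.2f}%")  # about 1.65%
```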
Under FCRA, agencies generally must produce annual updates of their credit subsidy estimates—known as reestimates—for each cohort, based on actual performance and estimated changes in future credit performance. This requirement reflects the fact that estimates of credit subsidy costs can change over time. Beyond changes in estimation methodology, each additional year provides more historical data on credit performance that may influence estimates of the amount and timing of future cash flows. Economic assumptions also can change from one year to the next, including assumptions on interest rates. When reestimated credit subsidy costs exceed agencies’ original cost estimates—resulting in an upward reestimate—the additional subsidy costs are not covered by new discretionary appropriations but rather are funded from permanent, indefinite budget authority.

In January 2013, OMB reissued its Circular A-129, which provides guidance to federal agencies on managing credit programs. The guidance addresses key aspects of managing a loan guarantee program, including assessing the eligibility and creditworthiness of borrowers, overseeing guaranteed loan lenders and servicers, developing performance indicators and risk thresholds, and analyzing and reporting on portfolio risks. The circular also provides guidance on management structures, including the need for risk-management functions that are independent from credit program administration. According to OMB staff, the 2013 update to the circular incorporated best practices for risk management. RD’s Office of the Chief Financial Officer has primary responsibility for ensuring the guarantee program’s compliance with the circular.

In part because of the housing crisis, the estimated credit subsidy costs of RHS’s guarantee program rose in recent years. RD uses information on historical average performance to develop its cost estimates, although it has adjusted its method in recent years to account for the effects of the housing crisis. Furthermore, RD has been developing econometric (statistical) models to estimate future credit subsidy costs, which should help address the limitations of its current method, such as reduced reliability when economic conditions vary from those in the past.

RHS estimated the initial subsidy rates of its most recent single-family mortgage guarantee cohorts to be around zero. As required by FCRA, RD annually estimates the credit subsidy cost of the loans it plans to guarantee in the upcoming fiscal year and reestimates credit subsidy costs for prior loan cohorts. According to RHS officials, since 2010 RHS has had the goal of making each new loan guarantee cohort “subsidy neutral”—that is, initially, the present value of lifetime estimated cash inflows equals the present value of lifetime estimated cash outflows. Accordingly, the initial credit subsidy rate estimates for the 2011 through 2014 cohorts were close to zero (ranging from -0.04 percent to -0.25 percent). However, the current reestimated rates for the 2011 and 2012 cohorts are slightly positive (1.39 percent and 0.86 percent, respectively). The current reestimated rates for the 2013 and 2014 cohorts—the most recent cohorts to be reestimated—are slightly negative, each at -0.31 percent.

The reestimated costs of the RHS guarantee portfolio as a whole substantially increased in recent years. In part, the larger reestimates reflected growth in the size of RHS’s loan cohorts.
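The relationship between rate changes and cohort size is mechanical: a reestimate in dollar terms is approximately the change in the estimated subsidy rate applied to the cohort’s guarantee volume (actual reestimates also include interest and other technical components). A sketch with hypothetical volumes:

```python
def dollar_reestimate(original_rate_pct, reestimated_rate_pct, cohort_volume):
    """Approximate dollar reestimate implied by a change in the estimated
    subsidy rate; a positive result is an upward reestimate."""
    return (reestimated_rate_pct - original_rate_pct) / 100 * cohort_volume

# The same 1-percentage-point upward revision costs four times as much on a
# $20 billion cohort as on a $5 billion cohort (both volumes hypothetical).
print(dollar_reestimate(0.0, 1.0, 5e9))   # $50 million
print(dollar_reestimate(0.0, 1.0, 20e9))  # $200 million
```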
RHS has submitted net upward credit subsidy reestimates—expectations that the guaranteed portfolio as a whole will cost more or produce less revenue than previously estimated—in 8 of the last 11 years (see fig. 1). The upward reestimates for fiscal years 2012 through 2014 were significantly larger than those of prior years. For example, the reestimates for 2012, 2013, and 2014 were $364 million, $804 million, and $615 million, respectively, compared with $42 million for 2010. A change in the estimated subsidy rate (even a small one) will result in larger reestimated amounts in dollar terms for relatively larger loan cohorts because the change would apply to a higher dollar volume of loans. RHS guaranteed fewer than 40,000 loans annually from 1992 (the first year RHS made guarantees nationwide) through 2007, but volume grew significantly from 2008, when RHS guaranteed about 62,000 loans, through 2014, when RHS guaranteed about 140,000 loans. The total amount of guarantees outstanding increased from less than $22 billion in 2008 to more than $100 billion in 2014. According to RD management, the large upward credit subsidy reestimates for fiscal years 2013 and 2014 also were due to higher-than-expected loss amounts (claims paid to lenders after defaults) and to changes in the estimation methodology RD used those years (discussed later in this report). RD’s financial statement auditor for federal credit subsidy issues (credit subsidy auditor) agreed with RD management’s explanation.

Cumulative loss rates (total losses divided by the dollar volume of loans guaranteed) were especially high for cohorts guaranteed directly before and during the early years of the 2007 through 2011 housing crisis. (In a loan cohort, losses are expected, and any losses are offset in part or in whole by guarantee fees.) As of the end of fiscal year 2014, the 2007 cohort had the highest cumulative loss rate of any cohort since 2000, followed by cohorts from 2006, 2005, and 2008, respectively (see fig. 2). For example, the 2007 cohort had a cumulative loss rate of almost 10 percent. In contrast, the 2003 cohort had a cumulative loss rate under 3 percent at the comparable point in its life cycle (8 years) and a cumulative loss rate of 4.7 percent at the end of 2014. The higher losses for these cohorts may have stemmed from homeowners’ inability to build equity before housing prices declined. Borrowers who owe more on their mortgages than their homes are worth may be more likely to default because (1) they may not be able to sell or refinance their homes to relieve unsustainable mortgage payments and (2) they may choose to stop making mortgage payments to minimize losses. Furthermore, when lenders foreclose on the borrowers, the lower home values reduce the amount that lenders recover through sale of the properties, resulting in higher losses for RHS.

As shown in figure 2, cohorts guaranteed since 2010 had lower loss rates in each year of their life cycle after the first year than all other cohorts since 2000. For example, the 2011 cohort had a 0.3 percent cumulative loss rate at the end of fiscal year 2014, whereas the 2000 cohort had a 1.6 percent rate at the comparable point (after 4 years). Improved economic conditions as well as other factors contributed to the improved performance of recent cohorts.
For example, a report from USDA’s Office of the Inspector General on RD’s fiscal year 2014 financial statements noted that losses that year were lower than expected for the 2012 through 2014 cohorts as a result of stricter credit requirements RHS had implemented in response to the housing crisis. For instance, in 2009, RHS began requiring lenders to provide additional documentation to waive RHS underwriting guidelines for maximum borrower debt ratios and adverse credit histories. Also, according to an RHS official, some lenders may have tightened credit standards more than RHS required as a result of the housing crisis and the risks associated with managing defaulted loans.

RD—the USDA division that estimates the credit subsidy costs of RHS’s guarantee program—averages historical information on loan performance to estimate certain expected cash outflows and inflows for loan cohorts. As previously noted, cash outflows consist primarily of losses (claims paid to lenders after defaults), and inflows consist primarily of fees from borrowers and recoveries. In turn, the expected cash flows are inputs in the calculations that produce estimates of credit subsidy costs. Federal guidance states that the historical averages method is an acceptable approach for estimating credit subsidy costs.

Beginning with the credit subsidy cost estimate for the 2013 cohort, RD has calculated average loss and recovery rates using the total dollar amounts of losses and recoveries for all prior loans. Previously, RD calculated the rates by averaging the average loss and recovery rates for all prior loan cohorts. By using total dollar amounts instead of an average of individual loss and recovery rates, RD’s method of projecting losses and recoveries for new loans accounts for variations in the size of loan cohorts. That is, the method gives more weight to the performance of large cohorts than to that of smaller ones.

To project losses and recoveries for a new cohort, RD averages historical information on loan performance (from 1992 through the last complete fiscal year). To illustrate, RD calculates the expected first-year loss rate for a new cohort as the total losses experienced by all prior cohorts during their first year divided by the total dollar amount of guarantees for loans aged at least 1 year. RD then performs the calculation for second-year loss rates, and so on. RD calculates expected recovery rates—total recoveries divided by the total dollar amount of losses—in a similar manner.

RD also projects cash inflows from annual and up-front guarantee fees. To project the annual guarantee fees for a new cohort, RD first estimates what portion of loans will prepay, using a historical average method similar to the one used to estimate losses and recoveries. RD calculates an expected prepayment rate for each year of the new cohort’s life using data on the prepayment experience of prior cohorts in the corresponding years. RD then uses the expectations for prepayments and loan terminations (for example, defaulted loans resulting in loss claims) to estimate the total outstanding loan balance expected at the end of each year of the cohort’s life. More specifically, RD reduces the estimated outstanding loan balance by the amount of prepayments and terminations expected each year. RD calculates the annual fee revenue using the estimated outstanding loan balance and the annual fee rates in effect at the time the guarantees were made.
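To make the loss-rate step concrete, the sketch below implements the dollar-weighted averaging just described on hypothetical cohort data; the volumes and loss amounts are invented, and RD’s actual calculation draws on data from 1992 forward. The remaining input, up-front guarantee fees, is described next.

```python
# Hypothetical cohorts: guarantee volume and dollar losses by year of age.
cohorts = {
    2010: {"volume": 16e9, "losses_by_age": [10e6, 45e6, 60e6]},
    2011: {"volume": 17e9, "losses_by_age": [8e6, 30e6]},
    2012: {"volume": 19e9, "losses_by_age": [5e6]},
}

def age_k_loss_rate(cohorts, k):
    """Expected loss rate for year k of a new cohort's life: total dollar
    losses that prior cohorts experienced in their year k, divided by the
    total guarantee volume of cohorts old enough to have a year-k observation.
    Because the rate is dollar-weighted, large cohorts count more than small ones."""
    observed = [c for c in cohorts.values() if len(c["losses_by_age"]) >= k]
    total_losses = sum(c["losses_by_age"][k - 1] for c in observed)
    total_volume = sum(c["volume"] for c in observed)
    return total_losses / total_volume

print(f"year-1 loss rate: {age_k_loss_rate(cohorts, 1):.4%}")  # all three cohorts
print(f"year-2 loss rate: {age_k_loss_rate(cohorts, 2):.4%}")  # 2010 and 2011 only
```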
Finally, RD bases its estimate of cash flows from up-front guarantee fees on the dollar value of loans expected to be guaranteed in the given budget year and the guarantee fee percentage in effect. Then, to estimate the credit subsidy rate for a new cohort, RD runs its cash flow projections for losses, recoveries, and fees through OMB’s credit subsidy calculator. This tool produces the net present value of the cash flows, which is the credit subsidy cost estimate for that cohort, and an associated credit subsidy rate.

The methodology RD currently uses to calculate reestimates includes certain adjustments, made in 2012, 2013, and 2014, intended to more accurately predict cash flows by accounting for the effects of the housing crisis. To calculate reestimates, RD generally uses the same historical averages methodology that it uses to calculate the original credit subsidy estimates for new cohorts. However, according to RD officials, for the 2013 and 2014 reestimates, RD made the following adjustments:

- Increased loss expectations for cohorts most affected by the housing crisis (2005 through 2008 cohorts). RD’s credit subsidy auditor found that during the housing crisis, the historical averages method underpredicted losses for certain cohorts. To more accurately predict losses, the credit subsidy auditor recommended that RD assess whether it should make manual adjustments to the reestimate calculations for the cohorts most affected by the crisis. As a result of this assessment, RD increased the loss amounts used to calculate the average loss rates for the 2005 through 2008 cohorts. RD increased the losses by a percentage equivalent to the difference between the defaults predicted when using the historical averages approach for each cohort and the actual defaults experienced by each cohort in the most recent fiscal year.

- Decreased losses for cohorts made with higher credit standards (2009 through 2014 cohorts). In a 2012 report, RD’s credit subsidy auditor also found that the historical averages method was likely overpredicting losses for the 2009 through 2012 cohorts. Similarly, an RD analysis in 2014 (which did not include the 2014 cohort) showed that this method overpredicted losses for the 2009 through 2013 cohorts. According to RD, the overprediction of losses was due to the historical averages method incorporating the unusually high defaults of the 2005 through 2008 cohorts into the default projections for the more recent cohorts. In addition, the cohorts guaranteed in 2009 and later were originated using higher borrower credit standards, which lowered their default risk, according to RD. To reduce the overestimation of losses for more recent cohorts, RD removed the default data for the 2005 through 2008 cohorts when calculating historical average loss rates for certain cohorts. For example, for the 2014 reestimate, RD removed these data from the loss rate calculation for the 2009 through 2014 cohorts.

Third parties that reviewed these manual adjustments found them to be acceptable. For example, RD’s credit subsidy auditors found the adjustments to be reasonable, and OMB approved the methodology used to calculate the reestimates. In 2013 and 2014, respectively, a consultant and RD conducted analyses that found that excluding the 2005 through 2008 data when estimating cash flows for more recent cohorts improved the accuracy of the estimates. But subsequent analysis illustrated some limitations of making manual adjustments to the historical averages method.
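Expressed as code, the two adjustments amount to scaling or filtering the inputs to the averaging step, as in the minimal sketch below. It reuses the hypothetical cohort structure from the earlier example; the scale factor and the excluded cohort years follow the description above, and everything else is illustrative.

```python
CRISIS_COHORTS = range(2005, 2009)  # the 2005 through 2008 cohorts

def adjusted_losses(cohort_year, losses, predicted_defaults, actual_defaults):
    """First adjustment: scale up a crisis-era cohort's losses by the gap
    between the defaults the historical averages method predicted and the
    defaults actually experienced in the most recent fiscal year."""
    if cohort_year in CRISIS_COHORTS and predicted_defaults < actual_defaults:
        return losses * (actual_defaults / predicted_defaults)
    return losses

def averaging_pool(all_cohorts, target_cohort_year):
    """Second adjustment: for 2009-and-later cohorts, drop the crisis cohorts
    from the historical pool so their unusually high defaults do not inflate
    the average loss rates applied to loans made under stricter standards."""
    if target_cohort_year >= 2009:
        return {y: c for y, c in all_cohorts.items() if y not in CRISIS_COHORTS}
    return dict(all_cohorts)

print(adjusted_losses(2007, losses=50e6, predicted_defaults=0.8,
                      actual_defaults=1.0))               # 62.5 million
print(sorted(averaging_pool({y: {} for y in range(2003, 2015)}, 2014)))
```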
During the fiscal year 2014 audit, the credit subsidy auditor found that RD still might have been overprojecting defaults for the more recent cohorts even with the data exclusions. Also, in 2014, RD’s credit subsidy auditor found that the revised methodology continued to underestimate losses for the 2007 and 2008 cohorts. In 2014, RD contracted with a firm to develop econometric models to predict loan performance based on various loan, borrower, and economic variables.

Federal guidance states that the historical averages method is an accepted approach for estimating credit subsidy costs, but the method has limitations that may prevent it from reliably predicting cash flows under certain conditions. Specifically, results may be less reliable when economic conditions, program policy, and borrower composition change, as follows:

- Economic conditions. When economic conditions vary from those in the past, estimates of future losses based solely on historical averages may not take into account the effects of the changed conditions on future performance. For instance, when interest rates decrease, homeowners may choose to refinance (and therefore prepay) their mortgages to receive lower interest rates. Cost estimates developed using only historical data on loan performance may not account for the increased likelihood of prepayments given the changed economic conditions.

- Policy. Policy changes, such as changes to underwriting standards, may result in new loans having a different default risk than loans made before the change. For instance, changes to the loan-to-value ratios or maximum borrower debt ratios allowed by the program may result in borrowers participating in the program who present a different level of risk than previous borrowers.

- Composition of borrowers. Even without policy changes, the composition of the borrowers receiving RHS guarantees may change from year to year. For example, the geographic dispersion, average credit scores, or other characteristics of borrowers may shift. These changes may result in changed expectations for the future performance of the loans, which would not be reflected in loss rates calculated using only historical data.

The models the firm was contracted to develop are to estimate the likelihood of claims and prepayments on RHS-guaranteed loans—key inputs into the estimates of future cash flows used to develop RHS credit subsidy estimates and reestimates. According to the contractor, the models will incorporate RHS’s historical loan performance and borrower data, as well as economic data that may be predictive of loan performance, such as macroeconomic forecasts and home price forecasts. RD officials indicated that OMB has reviewed and approved the contractor’s models and that RD plans to use the models to develop its initial credit subsidy cost estimate for the 2017 cohort and the 2016 reestimate. Additionally, RD noted that, based on its preliminary analysis, the new models will correct for the overestimation of future losses that resulted from using the historical method in recent years. As a result, RHS expects the 2016 reestimate to be a downward reestimate. According to federal guidance, econometric models have a number of advantages over other methods for estimating credit subsidy costs, such as historical averages and informed opinion.
For example, econometric models can identify key relationships between loan performance and economic and other indicators; take into account changes in policy; be easily commented on and reestimated to take comments into account; and be easily transferred between analysts (for instance, if the agency’s knowledgeable staff leave, the model and its key assumptions remain in place).

Certain attributes of econometric models—in particular, the ability to take into account changes in economic conditions or policy—address limitations of the historical averages method that RHS currently uses. To illustrate, RD’s credit subsidy auditor noted that RD’s historical averages method did not produce accurate forecasts when the housing crisis caused losses to deviate from the predictable patterns seen in prior years. The auditor reported that the historical averages method may not adequately take into account changes in the composition of borrowers or economic conditions that could materially affect the future performance of the program relative to its historical performance. Furthermore, the auditor said that using an econometric modeling methodology would allow RD to improve the quality of its estimates.

The quality of the credit subsidy cost estimates produced by RD’s econometric models will depend on many factors. In a March 2004 report on another federal guarantee program, we found that the choice of which variables to include in an econometric model is based on professional judgment, statistical testing, and economic theory. Excluding key predictive variables can reduce model quality. In addition, model validation is important to help ensure that the models remain appropriate for their intended purpose and are calculating correctly. Further, once the models are developed, regularly updating them is important to help ensure their continued reliability.

In addition to estimating credit subsidy costs, RHS program management may be able to use the econometric models for other risk-management functions. For instance, according to federal guidance, econometric models can be used in policy formulation to estimate how alternative changes to policies would affect future cash flows and thereby the subsidy cost of the guarantees. Econometric models also may allow RHS management to conduct simulations of portfolio performance under different scenarios of future economic conditions. For example, management could stress test the portfolio—a technique that allows managers to measure the vulnerability of the portfolio to unexpected losses. RHS officials told us that they plan to use the econometric models under development to help anticipate and assess potential risk to the program caused by changing conditions, such as a future economic downturn.
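The report does not specify the contractor’s model form. Purely as an illustration of the general approach, the sketch below fits a logistic regression for the annual probability of a loss claim on synthetic loan-level data and then stress tests the fitted model under a hypothetical 10 percent home price decline; every variable, coefficient, and data value is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical loan-level predictors: borrower credit score, debt-to-income
# ratio, and forecast one-year home price growth in the property's area.
X = np.column_stack([
    rng.normal(680, 50, n),      # credit score
    rng.uniform(0.2, 0.45, n),   # debt-to-income ratio
    rng.normal(0.02, 0.03, n),   # forecast home price growth
])

# Synthetic outcomes: riskier loans (low score, high DTI, falling prices)
# produce claims more often. Coefficients are invented for illustration.
logit = -4.0 - 0.01 * (X[:, 0] - 680) + 6.0 * (X[:, 1] - 0.3) - 20.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Stress test: expected claim rate under the baseline forecasts versus a
# scenario in which home prices fall 10 percent everywhere.
stressed = X.copy()
stressed[:, 2] = -0.10
print("baseline expected claim rate:", model.predict_proba(X)[:, 1].mean())
print("stressed expected claim rate:", model.predict_proba(stressed)[:, 1].mean())
```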
RHS’s policies and procedures were consistent with Circular A-129 standards for extending credit, with the exception of one standard concerning applicant screening. As shown in table 1, RHS’s policies and procedures were consistent with four of the five A-129 standards for screening applicants and partially consistent with the remaining one. Applicant screening refers to determining an applicant’s eligibility and creditworthiness for a loan.

RHS policies were consistent with the standard for an applicant’s program eligibility and certifications. For example, RHS requires lenders to assess compliance with a number of eligibility requirements—for example, that the applicant’s income is no more than 115 percent of the area median, that the property to be purchased or refinanced is in a designated rural area, and that the applicant is not eligible for “traditional conventional credit.” RHS defines this term as mortgages with 20 percent down payments and other loan and borrower characteristics associated with lower-risk, uninsured private mortgages. Consequently, applicants who may qualify for other types of conventional credit, such as loans with private mortgage insurance, or for mortgages guaranteed by other federal agencies, such as the Federal Housing Administration (FHA) or the Department of Veterans Affairs (VA), also may be eligible for an RHS-guaranteed loan.

However, RHS’s policies and procedures were not fully consistent with the standard that the agency must deny an applicant who is subject to an administrative offset to collect delinquent child support payments. An administrative offset is an enforcement remedy that allows for the interception of certain federal payments—for example, tax refunds—to collect past-due child support. RHS policy requires court-ordered payments such as child support to be considered in assessing an applicant’s ability to repay a mortgage. Additionally, RHS officials said that delinquent child support payments should be reflected in an applicant’s credit report and that it was unlikely a lender would approve an applicant with that type of adverse credit history. However, RHS policy does not disqualify applicants solely on the basis of delinquent child support payments and does not address the ineligibility of applicants subject to administrative offsets for past-due child support. Furthermore, RHS has lacked the information needed to identify these ineligible applicants. The Department of the Treasury (Treasury) maintains a database of individuals subject to administrative offset that federal agencies can access through Treasury’s “Do Not Pay” portal. However, RHS officials acknowledged that they had not yet taken the necessary steps to access it because they were not aware of the tool. Consequently, it is possible that applicants subject to administrative offsets for past-due child support may be able to obtain RHS-guaranteed mortgages, contrary to the OMB standard.

As shown in table 2, RHS’s policies and procedures were consistent with the six standards in Circular A-129 for loan documentation and loan collateral. Loan documentation refers to the maintenance of files containing key information used in loan underwriting. Collateral refers to the assets that secure the loan (for the guarantee program, the mortgaged property). RHS also has made or has been pursuing process enhancements related to loan documentation and collateral.
In March 2015, it implemented a paperless processing system that uses web-based document uploads and electronic signatures to help save the time and expense of sending paper documents between lenders and RHS field offices for every guarantee. In addition, RHS officials told us that they were in discussions with VA, which also administers a loan guarantee program, about an interagency agreement that would allow RHS to use an automated appraisal evaluation tool that VA implemented in June 2015. An RHS official said that the tool would increase the efficiency of the appraisal review process and help identify problematic appraisals, such as those that may overvalue a property.

As shown in table 3, RHS’s policies and procedures for managing entities that originate or service RHS-guaranteed mortgages (lenders and servicers) were consistent with seven of the nine standards in Circular A-129, but only partially consistent with the remaining two. The circular contains standards for lender and servicer eligibility, monitoring, recertification, and reporting. The seven standards with which RHS’s policies and procedures were consistent include those concerning on-site reviews of lenders and servicers, review of lender claims, and collection and maintenance of data from lenders and servicers. For example, RHS lenders and servicers are to be subject to periodic compliance reviews conducted either on-site (at lenders’ offices) or off-site (“desk” reviews). RHS’s compliance review guide contains risk factors for prioritizing the reviews. RHS policy also requires reviews of loss claim packages to determine whether lenders fulfilled all program obligations and, if not, whether reduction or denial of the loss claims would be warranted.

However, RHS does not have policies and procedures that address all aspects of two standards for lender and servicer eligibility. First, the circular describes the various lender and servicer eligibility criteria agencies should publish. Although RHS published a Federal Register notice in 2013 containing such criteria, it did not include qualification requirements for principal officers, such as years of experience in the mortgage industry. The notice also does not include financial and capital requirements for lenders not regulated by a federal financial institution regulator (referred to as nonsupervised lenders). RHS officials said they effectively relied on the requirements of other mortgage institutions, such as FHA and VA, because approval by these institutions is generally the means by which nonsupervised lenders become eligible to participate in the guarantee program. They also expressed concern that imposing additional requirements would increase the complexity of the lender approval process.

But FHA’s and VA’s requirements differ and may not be well-suited for RHS’s program. For example, FHA calibrates its net worth requirement to the amount of FHA business a lender does. Specifically, FHA requires nonsupervised lenders to have a minimum adjusted net worth of $1 million, plus 1 percent of their total FHA business volume in excess of $25 million, up to a maximum required adjusted net worth of $2.5 million. As such, FHA’s requirement does not take into account any additional risk represented by a lender’s business with RHS. In contrast, VA requires nonsupervised lenders to have a minimum adjusted net worth of $250,000 or have at least $50,000 in working capital, regardless of lending volume.
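The two floors can be compared directly; a minimal sketch of the net worth formulas as described above, applied to a hypothetical lender:

```python
def fha_min_net_worth(fha_volume):
    """FHA: $1 million, plus 1 percent of total FHA business volume above
    $25 million, capped at $2.5 million of adjusted net worth."""
    return min(1_000_000 + 0.01 * max(fha_volume - 25_000_000, 0), 2_500_000)

# VA instead sets a flat floor of $250,000 in adjusted net worth (or at least
# $50,000 in working capital), regardless of lending volume.
VA_MIN_NET_WORTH = 250_000

# A nonsupervised lender with $100 million in FHA volume (hypothetical):
print(fha_min_net_worth(100_000_000))  # $1.75 million
print(VA_MIN_NET_WORTH)                # $250,000 at any volume

# Note that neither formula considers the lender's RHS volume, which is the
# gap discussed next.
```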
The suitability of VA’s net worth requirement for RHS may be limited, among other things, by VA’s lower loss coverage for lenders—from 25 percent to 50 percent compared with up to 90 percent for RHS. In previously issued work, we discussed the view of some mortgage industry observers that a lower level of loss coverage may provide lenders an incentive to improve underwriting quality, thus reducing the risk of default. By not specifying its own requirements, RHS increases the potential that entities that originate and service RHS-guaranteed mortgages may lack the experience and financial soundness to perform these functions in a manner that protects RHS’s financial interests or lack the ability to cover any liability for violations of RHS requirements.

Second, the circular states that agencies should review and document a lender’s or servicer’s eligibility at least every 2 years. According to RHS officials, the agency’s practice is to biennially assess the eligibility of previously approved lenders and servicers and to maintain documentation of eligibility in paper files. They said that they issue instructions every other year directing staff to complete the eligibility reviews within 180 days of the issuance date. However, RHS has not established standing written policies or guidance requiring eligibility reviews to be completed at least every 2 years. RHS officials said they had not seen the need to disclose the 2-year review cycle to lenders by putting it in the guarantee program handbook. Without explicitly stating the required frequency of eligibility reviews, RHS increases the risk of not complying with the OMB 2-year minimum standard and of guaranteeing loans originated or serviced by ineligible lenders. For example, RHS issued the 2013 instruction more than 2 years and 7 months after the 2011 instruction, which is not consistent with a standard of reviewing eligibility within the minimum 2-year time frame.

While RHS’s policies and procedures did not fully comply with all the Circular A-129 standards for managing lenders and servicers, the agency has been taking steps to improve its lender and servicer oversight. For example, RHS officials told us that they were taking steps to automate the eligibility recertification process, which could facilitate implementation of a more regular and streamlined review of lenders and servicers. In addition, RHS has proposed regulations that would strengthen its authority to require lenders to indemnify (compensate) RHS for loss claims on defaulted loans that were not properly underwritten. Current regulations authorize RHS to seek indemnification within 24 months of loan closing when RHS concludes that the lender did not comply with the agency’s underwriting standards. In March 2015, RHS issued a proposed rule that would increase the indemnification period to 5 years. According to RHS, the comment period has ended, and the agency plans to issue the final rule upon completing its review of the comments. However, the agency did not have a specific time frame for issuing the final rule.

In addition, in December 2015, Congress enacted legislation authorizing the Secretary of Agriculture to grant qualified lenders the authority to determine the eligibility of loans for RHS guarantees without RHS’s prior approval (similar to FHA’s and VA’s single-family mortgage guarantee programs). According to RHS, this change will improve program delivery and increase efficiency, while also requiring RHS to shift additional resources to lender and servicer monitoring.
RHS officials said the change will take several years to implement. As shown in table 4, RHS’s policies and procedures were consistent with two of the six standards in Circular A-129 concerning credit program management and partially consistent with the remaining four. OMB added the six standards as part of its revision of the circular in 2013. The standards address various aspects of credit program management, including lines of authority and communication, performance and risk indicators, and reporting mechanisms.

RHS policies and procedures were consistent with two standards (performance data and separation of program functions). For example, RHS produces a monthly Portfolio Performance Report that includes national summary statistics on delinquencies, foreclosures, loss mitigation actions, and loss claims as well as detailed loss claim data organized by state. Senior RHS officials, including the Undersecretary for Rural Development, the RHS Administrator, the RHS Deputy Administrator for Single Family Housing, and the Director of the guarantee program, receive these reports. In addition, program operations are structured to separate key functions such as approval of loan guarantees and approval of loss claims. Furthermore, agency contracting policies and procedures incorporate requirements concerning the retention of inherently governmental functions and require progress reporting for advisory and assistance services contracts. Finally, RHS’s handbook for the guarantee program contains policies for how agency staff should communicate with lenders when RHS reviews the lender’s loan guarantee application package.

However, RHS’s policies and procedures did not fully align with the other four standards, as follows:

- Defined responsibilities and codified lines of authority and communication. While RD has position descriptions for individuals involved in risk-management functions that specify duties and responsibilities, RHS does not have written procedures for a key part of its risk-management structure and has not documented lines of communication, as required by OMB’s Circular A-129. Specifically, since 2009 RHS has had a Credit Policy Committee that, according to RHS officials, meets regularly to detect, discuss, and analyze credit quality issues and address them through policy changes. However, as we testified in May 2015, the committee operated without policies and procedures describing its purpose, scope, membership, or decision-making process. We also testified that RHS had not defined the roles and responsibilities of committee members and did not prepare minutes of meeting discussions and results. RHS officials said they saw no need to formalize the committee’s operations when the committee was created because the staff was small and in frequent communication. But in November 2015, the officials told us they had drafted a charter for the committee in response to our findings. Without written policies and procedures, accountability for and transparency of the Credit Policy Committee’s activities may be limited. Additionally, RHS has not documented the lines of communication between the agency components that have risk-management functions and responsibilities. RHS’s risk-management structure is decentralized and complex. According to RHS, it involves staff in 47 state offices; the Centralized Servicing Center and National Financial and Accounting Operations Center in St. Louis, Missouri; and USDA headquarters.
RHS has basic organizational charts for these components that show lines of authority but has not codified how and what types of information should flow among the components. Instead, the components share information on a less formal basis built on established working relationships. While we found evidence that communication on financial, budget, and operational matters occurs between key staff, not documenting lines of communication increases the risk that information flows will break down if these staff transfer or retire.

- Independent oversight and control functions for risk management, including credit and operational risks. RHS’s management structure does not fully align with OMB standards or with a congressional directive, both of which call for an independent risk-management function. RHS officials identified various USDA components that perform oversight and control functions and operate independently of guaranteed loan program staff. For example, RD’s Office of the Chief Financial Officer oversees periodic management control reviews of the guarantee program and other programs. These reviews, which occur every 5 years, are designed to assess the effectiveness and efficiency of management controls, inform senior managers of the status of operations and internal controls, and provide solutions to reduce or eliminate any deficiencies. In addition, the Centralized Servicing Center reviews lenders’ loss claims to help ensure that lenders complied with program guidelines before the agency pays the claims. However, neither RHS nor RD has an independent function specifically tasked with identifying the range of credit and operational risks facing the guarantee program. Circular A-129 states that agencies should strongly consider formalizing risk-management functions through the creation of a risk-management office led by a Chief Risk Officer. Consistent with this recommendation, the House Committee on Appropriations directed RD in June 2014 to “expeditiously create and fill a position of Chief Risk Officer” whose responsibility would be to manage and mitigate the agency’s financial risk. RD officials told us that in early 2015 they had created a working group to examine how to create the position, including looking at similar efforts at other federal agencies. But as of March 2016, RD had not established the position. RD officials told us they expected to create and fill the position sometime in 2016 but did not have a more detailed timeline. As a result, RD’s efforts to manage and mitigate the risks of the guarantee program may not be as effective as they could be.

- Performance indicators and risk thresholds. Although RHS uses a number of indicators to assess the performance of its guaranteed portfolio, two key indicators have limitations that diminish their usefulness and appropriateness. In addition, RHS has not established risk thresholds for the guarantee program. Circular A-129 states that agencies should establish and periodically review appropriate performance measures for their credit programs. According to RHS officials, since 2004 they have compared the overall delinquency and foreclosure rates for RHS’s portfolio with the corresponding rates for FHA’s insured portfolio of 30-year fixed-rate mortgages. RHS officials justified the performance measures based on the similarity of the FHA and RHS mortgage programs. Additionally, they noted that performance data on FHA’s portfolio were readily available from a mortgage industry group.
RHS has established performance goals stating that RHS should be within a specified range of FHA's delinquency and foreclosure rates at the end of each fiscal year. Although RHS generally has met these goals, the performance measures are not fully consistent with certain attributes of successful performance measures—such as objectivity and reliability—that we identified in previously issued work. The weaknesses in the performance measures are twofold. First, a simple comparison of two portfolios ignores potential differences in their composition—for example, in the age and geographic distribution of loans—that may influence loan performance and make comparisons of the portfolios invalid. FHA maintains data that can be segmented by loan cohort and property location, which could help address some limitations of the industry group data. Second, the comparison implies that FHA has been effectively managing its risk. However, FHA has at times exhibited shortcomings in this area. For example, in a 2006 report, we found that FHA had not developed sufficient standards and controls to manage risks associated with the substantial proportion of FHA-insured loans with down payment assistance. Because of these weaknesses, a performance measure that does not account for such portfolio differences may not provide a useful and appropriate benchmark for RHS risk management. Circular A-129 also states that agencies should establish risk thresholds for their credit programs. RHS has established a risk appetite—the amount and type of risk an organization is willing to accept in pursuit of its objectives—for the single-family guarantee program. According to RHS officials, the program's risk appetite is expressed primarily through the goal of making each annual cohort of loan guarantees subsidy-neutral, while keeping guarantee fees at a level affordable to low- and moderate-income households. However, RHS has not established associated risk thresholds—that is, target values above which risks are not tolerated or that trigger application of additional risk controls. For example, RHS has not developed thresholds for the magnitude of expected losses that are acceptable at the portfolio or loan level. Without established risk thresholds, RHS's ability to determine when risk levels are too high is diminished.

Reporting of performance information in watch lists and dashboards. RHS's performance reports were not fully consistent with the OMB standard concerning portfolio dashboards. RHS produces three reports—one specifically called a dashboard and two others with some characteristics of a dashboard—that generally contain the types of quantitative information identified in the OMB guidance. However, these reports do not include a qualitative discussion of areas meriting increased management focus, as specified in Circular A-129. RHS officials said they orally discussed issues warranting greater management attention in briefings and meetings in which the reports are used. However, by not highlighting and documenting issues for management attention in the performance reports, RHS increases the possibility that senior managers will not have the information necessary to address emerging risks in a timely manner. The inconsistencies we identified between RHS's policies and procedures and Circular A-129 standards—both for credit program management and the areas discussed previously—occurred, in part, because RD did not compare and align its requirements with all elements of the circular.
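To illustrate the risk-threshold concept discussed above, consider a minimal sketch. Circular A-129 does not prescribe a specific form for such thresholds, and RHS has not defined any; the function name, the 3.5 percent example, and the 3 percent tolerance below are hypothetical values chosen for illustration, not RHS figures or requirements.

    # Hypothetical sketch of a portfolio-level risk threshold of the kind
    # Circular A-129 contemplates. The 3 percent expected-loss tolerance and
    # all dollar figures are illustrative placeholders, not RHS data.

    def check_loss_threshold(expected_losses, outstanding_guarantees,
                             max_loss_rate=0.03):
        """Return the expected loss rate and whether it breaches the
        tolerance, which would trigger additional risk controls."""
        loss_rate = expected_losses / outstanding_guarantees
        return loss_rate, loss_rate > max_loss_rate

    # Example: $3.5 billion in expected losses on a $100 billion portfolio.
    rate, breached = check_loss_threshold(3.5e9, 100e9)
    print(f"Expected loss rate: {rate:.2%}; threshold breached: {breached}")

A loan-level analogue would apply a similar tolerance to the expected loss on each guarantee at origination.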
Circular A-129 requires agencies to periodically conduct program reviews that assess whether credit programs are achieving policy goals while mitigating risk and cost to the taxpayer and minimizing displacement of private credit markets. The reviews also should identify any area in which a program is not consistent with the requirements of Circular A-129, evaluate the effects of any deviation, and determine whether the deviation is still necessary. RD officials told us that RD's Office of the Chief Financial Officer was primarily responsible for ensuring that policies and procedures for the guarantee program were consistent with the circular. They said that the office began reviewing the compliance of all RD credit programs with the circular in 2014. RD officials said they did not expect to complete the review of the single-family loan guarantee program until 2016. RD officials added that this program review was begun on an ad hoc basis rather than as part of a schedule. Furthermore, they noted that because of the large number of credit programs RD operates, they intended to complete the program reviews on a rotating basis, an approach that does not establish priorities based on risk. Federal internal control standards state that agencies should identify risks, including by using qualitative and quantitative ranking activities, and have controls such as policies and procedures to address risks. At present, RD operates 27 loan and loan guarantee programs. Without procedures for prioritizing program reviews based on risk level, RD may not be able to fully realize the intended benefits of the reviews, which include mitigating risk and cost to the taxpayer.

Congress and OMB have established requirements and standards for estimating the costs of and managing federal credit programs, including FCRA and OMB Circular A-129. In the wake of the recent housing crisis, RHS's guarantee program has expanded dramatically and the estimated costs of the program have risen, due partly to higher-than-expected losses from mortgages made shortly before or during the housing downturn. Furthermore, RHS has recently been granted the authority to give qualified lenders the ability to determine the eligibility of loans for guarantees without RHS's prior approval. These developments underscore the importance of complying with requirements and standards intended to improve the reliability of cost estimates and help ensure that risks are prudently managed. RHS has taken a number of steps to enhance its administration of the guarantee program, including development of an econometric model to estimate credit subsidy costs and potentially enhance risk analysis. However, RHS could further strengthen its policies and procedures for managing the guarantee program by addressing inconsistencies with Circular A-129 standards related to applicant screening, lender oversight, management frameworks, and risk assessment and reporting. By doing so, the agency would help decrease the risk that ineligible borrowers would receive guaranteed loans and that unqualified or ineligible firms would originate or service the loans. RHS also would enhance its capabilities to manage its expanded portfolio and strengthen its ability to identify and mitigate risks in a timely manner. Finally, by not having procedures for risk-based scheduling of the program reviews required by OMB guidance, RD may be limiting its ability to manage its multiple credit programs in the most effective manner.
To improve compliance with OMB Circular A-129 standards and strengthen management and oversight of the guarantee program, we recommend that the Secretary of Agriculture direct the Under Secretary for Rural Development to take the following 11 actions:

To enhance screening of loan guarantee applicants,
- complete steps to obtain access to Treasury's Do Not Pay portal and establish policies and procedures to deny loan guarantees to applicants who are subject to administrative offsets for delinquent child support payments.

To strengthen oversight of lenders and servicers,
- develop and publish in the Federal Register qualification requirements for the principal officers of lenders and servicers seeking initial or continued approval to participate in the guarantee program,
- develop and publish in the Federal Register capital and financial requirements for guarantee program lenders that are not regulated by a federal financial institution regulatory agency, and
- establish standing policies and procedures to help ensure that the agency reviews the eligibility of lenders and servicers participating in the guarantee program at least every 2 years.

To enhance and formalize the guarantee program's risk-management structure,
- finalize and adopt policies and procedures for the guarantee program's Credit Policy Committee,
- document lines of communication between the different components of the risk-management structure for the guarantee program, and
- complete steps to create and fill a Chief Risk Officer position for RD as soon as practicable.

To strengthen risk assessment and reporting,
- improve performance measures comparing RHS and FHA loan performance, potentially by making comparisons on a cohort basis and limiting comparisons to loans made in similar geographic areas,
- develop risk thresholds for the guarantee program, potentially in the form of maximum portfolio- or loan-level loss tolerances, and
- identify issues for increased management focus in high-level dashboard reports.

To more effectively fulfill the requirements for conducting program reviews described in OMB Circular A-129,
- develop procedures for selecting RD credit programs for review based on risk and establish a prioritized schedule for conducting the reviews.

We provided a draft of this report to OMB and USDA for their review and comment. OMB staff provided technical comments, which we incorporated into the report. USDA provided comments in an e-mail from the audit liaison officer in RD's Financial Management Division. In its comments, RD agreed with or indicated that it was taking steps to address 5 of our 11 recommendations and neither agreed nor disagreed with the remaining 6. Concerning our recommendation to enhance screening of loan guarantee applicants using Treasury's Do Not Pay portal, RD noted that it did not have the resources to manually conduct Do Not Pay searches for all loan guarantee applicants at this time because of the large volume of applicants and the technological limitations of the portal. RD also said that RHS staff had completed the Do Not Pay enrollment process and were working with Treasury to begin accessing the portal as an additional verification resource. Concerning our recommendation to create and fill a Chief Risk Officer position as soon as practicable, RD said it planned to hire someone for the position in fiscal year 2016.
RD stated it agreed with our recommendations to establish standing policies and procedures governing the frequency of lender and servicer eligibility reviews, finalize and adopt policies and procedures for the Credit Policy Committee, and document lines of communication among components of the risk-management structure. For four of the six recommendations with which RD neither agreed nor disagreed, RD said it recognized the underlying risk implications and was continuing to consider the recommendations. The recommendations concern the development of qualification requirements for principal officers of guarantee program lenders and servicers, capital and financial requirements for nonsupervised guarantee program lenders, risk thresholds for the guarantee program, and improved measures for comparing RHS and FHA loan performance. For the first three of these recommendations, RD stated that “existing requirements may currently address concern,” but it did not cite the requirements to which it was referring or otherwise elaborate. We maintain that existing requirements do not address the concerns underlying our recommendations. As our report notes, RHS effectively relies on requirements of other mortgage institutions (such as FHA and VA) for lender and servicer approval, and these requirements may not be well-suited to RHS’s program. For example, FHA’s net worth requirement for nonsupervised lenders is calibrated to the amount of FHA business the lender does, and the suitability of VA’s net worth requirement for RHS is limited by the substantially lower loss coverage of VA’s program compared with RHS’s. Furthermore, while it is possible that detailed analysis of FHA and VA requirements would find them sufficient for RHS’s program, RHS did not provide any evidence that it had conducted such an analysis. Regarding risk thresholds, while our report notes that statutory limits exist on RHS’s annual business volume and the percentage of each mortgage it can guarantee, these limits define RHS’s maximum possible financial exposure rather than loss tolerances established by agency management. OMB Circular A-129 requirements, including the requirement for establishing risk thresholds, outline steps agency officials should take to manage the risks of their credit programs. For the remaining two recommendations—which concern identifying areas for increased management focus in dashboard reports and prioritizing Circular A-129 program reviews based on risk—RD elaborated on current agency practices but did not indicate whether or how it planned to address the recommendations. RD stated that its dashboard reports provided differing levels of detail, but also acknowledged that they contained no specific issues for increased focus despite the existence of program challenges identified by agency staff. Including a qualitative discussion of areas meriting greater attention in dashboard reports would help ensure that agency managers address emerging risks in a timely manner. Finally, RD described its process for prioritizing programs for Management Control Reviews. As we described in our report, these periodic reviews are a USDA requirement designed to determine whether necessary controls are in place and producing intended results, comply with applicable laws and regulations, and provide solutions to reduce or eliminate any deficiencies. 
While we acknowledge RHS's risk-based process for prioritizing programs for Management Control Reviews, our recommendation addressed program reviews required by OMB Circular A-129. Although the two types of reviews may be complementary, the A-129 program reviews are broader in scope than Management Control Reviews. For example, A-129 program reviews should assess whether programs are achieving policy goals while mitigating risk and cost to the taxpayer and minimizing displacement of private credit markets. In addition, they should identify any area in which a program is not consistent with the requirements of Circular A-129 and evaluate the effects of any deviation and whether the deviation is still necessary. USDA and RD instructions for Management Control Reviews do not include these requirements and do not reference A-129. Using a risk-based approach for selecting programs for A-129 program reviews, rather than the rotational process that RD described and that we discussed in our report, would help RD manage its multiple credit programs in the most effective manner. We added language to the body of our report and to our recommendation to clarify that our focus was on risk-based selection of programs for program reviews conducted in accordance with A-129 requirements.

In addition, in its comments, RD concurred with our characterization of trends in the estimated long-term costs of the guarantee program and RD's efforts to develop an econometric model to improve the quality of cost estimates. RD also said that, based on its preliminary analysis, the econometric models will correct for the overestimation of future losses that resulted from using the historical method in recent years. As a result, RD said that it expects its 2016 credit subsidy reestimate to be downward. RD also provided clarification on the role of RD's Office of the Chief Financial Officer in reviewing RHS actions to address Office of Inspector General audit recommendations. We incorporated the information about the potential impact of the econometric models and the role of RD's Office of the Chief Financial Officer into the final report.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Our objectives were to examine (1) recent trends in the credit subsidy costs of the Rural Housing Service's (RHS) single-family guarantee program (guarantee program) and the process for estimating those costs and (2) the extent to which RHS's policies and procedures for the guarantee program are consistent with Office of Management and Budget (OMB) standards for managing credit programs.
To examine recent trends in credit subsidy costs for the guarantee program and the process for estimating these costs, we analyzed credit subsidy cost estimates and reestimates from the President’s budgets for fiscal years 2006 through 2016 (which include the final reestimates for fiscal years 2004 through 2014). We reviewed related requirements and guidance, including the Federal Credit Reform Act of 1990; the Office of Management and Budget (OMB) Circular No. A-11 (Preparation, Submission, and Execution of the Budget); the Federal Accounting Standards Advisory Board’s Federal Financial Accounting and Auditing Technical Release 6 (Preparing Estimates for Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act); and the Government-wide Audited Financial Statement Task Force Subcommittee on Credit Reform Issue Paper 96-CR-7 (Model Credit Program Methods and Documentation for Estimating Subsidy Rates and the Model Information Store). We examined documentation on the processes and tools Rural Development (RD) uses to determine subsidy costs for the guarantee program, including RD’s cash flow model and technical and other guidance associated with the model. We also reviewed documentation of analyses RD’s independent financial statement auditor conducted on the model as part of RD’s fiscal years 2012 through 2014 financial statement audits and associated findings and recommendations. In addition, we reviewed analyses conducted by an independent contractor in 2013 on the model’s ability to predict cash flows and associated recommendations. To obtain information about manual adjustments made to the model for credit subsidy cost reestimates in recent years, we interviewed officials from RD’s Office of the Chief Financial Officer, including staff from the National Financial and Accounting Operations Center. In addition, we reviewed the contract RD awarded for the development of an econometric model for estimating credit subsidy costs for future budgets and interviewed RHS and RD officials and contractor staff on their plans for and progress on developing the model. To provide context for recent trends in the program’s credit subsidy costs, we analyzed RD data on the number of loans guaranteed annually from fiscal year 1992 (the first year RHS made guarantees nationwide) through fiscal year 2014 and the total dollar amount of outstanding guarantees each year in fiscal years 2004 through 2014. We also analyzed RD data on loss amounts for the fiscal year 2000 through fiscal year 2013 cohorts as of September 30, 2014. Specifically, we calculated cumulative loss rates by cohort and year from origination. The cumulative loss rates represent the total losses at a given point in time divided by the original dollar volume of loans guaranteed. To assess the reliability of these data, we reviewed related documentation, including information about their source systems and how the data were compiled. We also interviewed RD officials knowledgeable about the data and compared them with other data sources, where possible. We concluded that the data elements we used were sufficiently reliable for the purposes of describing trends in the guarantee program’s business activity, portfolio size, and loss experience. Additionally, we obtained extracts of RHS loan-level data, including loan and borrower characteristics and performance information, for guarantees made in fiscal years 2010 through 2014 to confirm that RHS maintained the types of data suitable for credit subsidy cost modeling. 
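Stated in notation, the cumulative loss rate calculation described above can be written as follows (our formalization of the verbal definition, with symbols introduced only for this illustration):

\[ \mathrm{CLR}_{c,t} = \frac{\sum_{s \le t} L_{c,s}}{V_c} \]

where \(L_{c,s}\) is the dollar amount of losses realized on cohort \(c\) in year \(s\) after origination, \(V_c\) is the original dollar volume of loans guaranteed in cohort \(c\), and \(\mathrm{CLR}_{c,t}\) is the cohort's cumulative loss rate through year \(t\).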
To determine the extent to which RHS's policies and procedures were consistent with OMB standards, we reviewed OMB Circular A-129 (Policies for Federal Credit Programs and Non-Tax Receivables). We focused on part III of the guidance, which contains a number of standards pertinent to risk management for a loan guarantee program, including standards for credit extension (applicant screening, loan documentation, and collateral requirements), credit program management (management and oversight and data-driven decision making), and management of guaranteed loan lenders and servicers (lender and servicer eligibility, agreements, reviews, and corrective actions). We reviewed RHS's policies and procedures for these functions contained in regulations, handbooks, and other guidance and documentation. These included U.S. Department of Agriculture (USDA) and RD policies, Federal Register notices, regulations, and RD and RHS organizational charts and position descriptions. They also included the guarantee program's technical handbook, lender and servicer compliance review guides, loss mitigation and loss claim guides, Guaranteed Underwriting System guidance, administrative notices, unnumbered letters (a type of internal guidance), annual reports, and portfolio performance reports. We assessed the extent to which these policies and procedures were consistent with the OMB A-129 standards. For several of the OMB standards, we identified related, sound risk-management practices cited in documents from other organizations. These included previously issued GAO reports on risk-management frameworks, federal internal control standards, and attributes of successful performance measures, as well as publications from the Committee of Sponsoring Organizations of the Treadway Commission and the International Association of Credit Portfolio Managers. To supplement our understanding of the OMB guidance and RHS's policies and procedures, we interviewed OMB staff knowledgeable about the 2013 update of Circular A-129 and various USDA officials. The USDA officials included the RHS Administrator, Deputy Administrator, and Director of the Single Family Housing Guarantee Loan Division, as well as staff from that division, RHS's Centralized Servicing Center, and RD's Office of the Chief Financial Officer (which includes the National Financial and Accounting Operations Center). These interviews typically included representatives from an RHS contractor with responsibilities for producing risk analytics and conducting compliance reviews of national lenders and servicers. To determine what prior audits and evaluations of the guarantee program had found, we met with USDA's Office of Inspector General and reviewed pertinent Inspector General audit reports. We also reviewed RD's 2008 and 2013 management control reviews for the guarantee program. For both the Inspector General reports and the management control reviews, we determined the status of recommendations made to address deficiencies in program management and implementation. We also reviewed nationwide summaries of RD state internal reviews, which include the guarantee program in their scope, for fiscal years 2012 through 2014. We did not verify RHS's compliance with its own policies and procedures or assess their effectiveness. However, the prior audits and evaluations we reviewed included compliance testing and reviews of information documenting RHS actions to address any recommendations.
We conducted this performance audit from May 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Steve Westley (Assistant Director); Alexandra Martin-Arseneau (Analyst-in-Charge); Abiud Amaro Diaz; Stephen Brown; Marcia Carlsen; William R. Chatlos; Melissa Kornblau; John McGrail; Barbara Roesmann; Jena Sinkfield; and Heneng Yu made key contributions to this report.
In recent years, RHS's single-family mortgage guarantee program has grown significantly, and RHS currently manages a guaranteed portfolio of more than $100 billion. RHS helps low- and moderate-income rural residents purchase homes by guaranteeing mortgages made by private lenders. GAO was asked to examine the program's cost estimation methodology and risk-management structure. This report discusses (1) recent trends in the credit subsidy costs of RHS's guarantee program and the process for estimating those costs and (2) the extent to which RHS's policies and procedures for the program are consistent with federal standards for managing credit programs. GAO analyzed RHS budget data for fiscal years 2004 through 2014, examined RHS policies and procedures, reviewed OMB standards, and interviewed RHS officials.

The estimated credit subsidy costs (expected net lifetime costs) of single-family mortgages guaranteed by the Department of Agriculture's (USDA) Rural Housing Service (RHS) substantially increased in recent years, partly due to high losses from the 2007 through 2011 housing crisis. For example, the fiscal year 2013 and 2014 reestimates (which federal agencies must do annually) indicated higher expected costs of $804 million and $615 million, respectively, compared with the prior reestimates (see fig.). To improve the current estimation method (which relies on average historical losses), RHS hired a contractor to develop statistical models that will predict losses based on loan, borrower, and economic variables.

RHS's policies and procedures are not fully consistent with all Office of Management and Budget (OMB) standards for managing credit programs (OMB Circular A-129). RHS's policies and procedures are consistent with the OMB standards in most areas, including loan documentation, collateral requirements, and aspects of applicant screening and lender oversight. However, RHS
- has not established and published all required lender eligibility standards, such as principal officer qualifications (e.g., experience level) and financial standards (e.g., minimum net worth);
- lacks written policies and procedures for a committee responsible for analyzing and addressing the credit quality (default risk) of guaranteed loans;
- has not established a position independent of program management to help manage the risks of its guaranteed portfolio;
- has not established risk thresholds (for example, maximum portfolio- or loan-level loss tolerances) and uses certain loan performance benchmarks that have limited value for risk management; and
- has not incorporated a discussion of areas needing increased management focus into its "dashboard" reports.

These and other inconsistencies occurred in part because RHS has not completed an ongoing assessment of its policies and procedures against Circular A-129. Furthermore, the Office of Rural Development (which oversees RHS) has not established procedures for prioritizing Circular A-129 reviews of its credit programs based on risk. More fully adhering to Circular A-129 standards would enhance RHS's effectiveness in managing the risks of its guarantee program.

GAO is making 11 recommendations to USDA to help ensure that RHS's policies and procedures are consistent with OMB standards and to strengthen management of the guarantee program and other credit programs. Areas on which the recommendations focus include overseeing lenders, formalizing or establishing key risk-management functions, and assessing and reporting on portfolio risk and performance.
RHS agreed with or said it was acting on five of the recommendations. RHS neither agreed nor disagreed with the rest but said it generally recognized the underlying risk implications. GAO maintains that the recommendations are valid, as discussed in the report.
The United States and Mexico share a nearly 2,000-mile border that stretches from the Pacific Ocean to the Gulf of Mexico. Although more than half of this boundary is delineated by the Rio Grande, communities on both sides of the border are affected by the region's air quality as well as such common natural resources as groundwater, aquifers, rivers, and watersheds. For example, several communities in and bordering Texas depend on the Rio Grande for drinking water, domestic and industrial uses, and discharging wastewater. The cities of San Diego and Tijuana use the Pacific coastal waters for recreation, fishing, and wastewater discharge. Because of the transboundary character of the border region's ecosystem and the need to address pollution problems binationally, the United States and Mexico signed the 1983 La Paz Agreement, which defines the border region for purposes of environmental cooperation as the area within 100 kilometers (62 miles) of either side of their international boundary.

In the last two decades, border communities have experienced significant population growth. Between 1980 and 1996, the total population of border communities grew from over 4 million to almost 10 million people. Most of the population is concentrated in 14 pairs of neighboring cities that are distributed across four U.S. and six Mexican states. (See fig. 1.) Almost one-third of the population in the border region lives in the San Diego/Tijuana metropolitan area, while another third is distributed among the following four large metropolitan areas along the Rio Grande in Texas: El Paso/Ciudad Juarez, Laredo/Nuevo Laredo, McAllen/Reynosa, and Brownsville/Matamoros. Most of the remaining population in the border region is scattered among the other nine pairs of neighboring cities. The rapid growth in this region is generally attributed to the potential of northern Mexican communities to provide economic opportunities that cannot be found in Mexico's interior because of adverse economic conditions. Northern border communities also offer potential access to the U.S. job market. The availability of jobs with maquiladoras—companies located in Mexico's northern border region that use imported materials to produce finished goods for export—has been a key factor in attracting Mexican workers to migrate to that border. Many of these migrants tend to cluster in and around suburban areas where housing is affordable but basic environmental services (such as trash collection, sewage connections, and a potable water supply) are limited or not available.

In developing binational solutions to the border region's environmental problems, policymakers in both countries face unique challenges because of the transboundary nature of the border environment, differing approaches to addressing problems in public policy, and a substantial lack of financial and technical resources. To address these problems, the United States and Mexico have established several mechanisms; in 1993, both governments signed a supplemental agreement to NAFTA to establish the Border Environment Cooperation Commission (BECC) and the North American Development Bank (NADBank), which were created to complement existing funding to improve the border region's environmental infrastructure and to strengthen cooperation on addressing the region's environmental problems.
The BECC's purpose is to certify environmental infrastructure projects—primarily for drinking water, wastewater treatment, and municipal solid waste—for subsequent financing by the NADBank in the form of loans and loan guarantees at market interest rates with flexible repayment terms. The agreement encourages the private sector to invest in projects that are operated and maintained through user fees paid by polluters and the border communities benefiting from these projects. Because of the low income levels of border communities, both countries have recognized that the ongoing availability of grant funds and low-interest loans from both sides of the border that could be combined with NADBank funds was essential to make environmental infrastructure projects financially viable.

The BECC and the NADBank complement an existing binational framework of environmental cooperation dating back to the late 1970s. As border communities grappled with an array of pollution problems linked to the rapid population and industrial expansion at that time, the International Boundary and Water Commission developed recommendations for addressing sanitation issues in the border region. The Commission now plans, constructs, and operates several wastewater treatment plants and projects on both sides of the border. In 1983, the La Paz Agreement, signed by the presidents of the United States and Mexico, established binational workgroups to address various problems with air, soil, and water quality as well as hazardous waste in the border region. In 1990, both governments agreed to implement action plans to bolster efforts undertaken under the La Paz Agreement to respond to various media-specific pollution problems, which appeared to have worsened with the region's rapid population and economic growth.

Although the United States and Mexico have expanded efforts in recent years to address environmental problems in the border region, many environmental infrastructure needs remain unmet and continue to pose serious threats to human health and the environment on both sides of the border. These unmet needs are particularly acute on the Mexican side of the border, where the basic infrastructure is generally insufficient and sometimes nonexistent for connecting outlying communities to services for municipal sewage collection, wastewater treatment, and solid waste disposal. Most U.S. border communities have an adequate basic infrastructure to provide drinking water, wastewater treatment, and solid and hazardous waste disposal. The colonias (unincorporated settlements on the U.S. side of the border), however, have many unmet environmental infrastructure needs, and some other communities need to expand or upgrade the capacity of their existing infrastructure to meet the ever-increasing demand from population and industrial growth. EPA believes that insufficient infrastructure on the Mexican side, coupled with rapid population and economic growth, has contributed significantly to severe water pollution problems on both sides of the border and poses significant threats to human health and the environment. According to Mexico's National Water Commission, the government agency responsible for national water policy, the Mexican border region has the capacity to treat only about 34 percent of the wastewater it generates, and most treatment plants are underfinanced and poorly maintained and operated.
For example, the municipal sewage connections of Matamoros and Ciudad Juarez reach only about 56 percent and 84 percent of their residents, respectively, and both cities lack wastewater treatment facilities. According to EPA, other sister cities experience similar problems with water pollution that is mostly caused by inadequate wastewater treatment capacity and problems with sewage collection. The sister cities of Mexicali and Calexico have contributed to severe pollution of the New River, which flows from Mexicali and drains into the Salton Sea in California. The domestic and industrial waste generated by Mexicali’s population of nearly 700,000 and its more than 200 industrial facilities exceeds the capacity of that city’s two wastewater treatment plants. As a result, raw and inadequately treated wastewater is routinely discharged into the New River. In Imperial County, California, agricultural runoff and irrigation return flows also pollute the New River. Several border communities, particularly in Mexico, lack the capacity to collect and dispose of the domestic and industrial solid wastes they generate. In such cities as Matamoros and Reynosa, municipal garbage collection trucks are in poor condition and too few in number to meet the needs. Both cities also have problems with their solid waste disposal facilities. According to the city official responsible for environmental control, because the entrance to the municipal solid waste disposal facility for Matamoros is generally unguarded, the site is vulnerable to illegal disposal of hazardous and/or dangerous industrial wastes that threaten the quality of groundwater. Furthermore, several families live at or near the site and rummage through its waste in search of items that can be used or sold. About 1 mile from the facility’s entrance, waste that is incinerated in the open produces a thick, dark cloud of smoke that impairs visibility and the area’s air quality. In addition, an open canal carrying the city’s untreated sewage passes through the site. This official told us that the overall conditions at the Matamoros municipal solid waste disposal facility threaten the health of the area’s residents and the environment. (See fig. 2.) Reynosa also suffers from an inadequate capacity for solid waste collection and disposal. While the city has one municipal dump, it also has 17 large illegal dumps and hundreds of vacant lots used as dumps. According to the Mayor of Reynosa, the inadequacy of the municipality’s domestic waste collection service has prompted the emergence of approximately 700 illegal trash collectors who use horse-drawn wagons and usually dispose of trash illegally, including dumping it into the Rio Grande. Most communities on the U.S. side have an adequate capacity for solid waste collection and disposal. According to officials from the Texas Natural Resource Conservation Commission, most border communities in Texas have adequate capacity to meet their solid waste disposal needs for at least the next 10 years. However, these officials cautioned that this capacity may be reduced as stricter enforcement curbs illegal dumping and solid waste collection service is extended to colonias, where it has often been inconsistent and inadequate. In addition, as Mexico increases its enforcement of solid waste laws, the return of additional maquiladora waste to the United States will reduce the years of landfill capacity that are currently projected. 
Several communities on both sides of the border have made progress in responding to some of their most severe environmental problems. Prior to the International Boundary and Water Commission's decision in 1990 to construct an international treatment plant for Tijuana's wastewater, the uncontrolled flows of untreated sewage crossing the international boundary reached a peak of 13 million gallons per day. As a result of improved sewage collection in Tijuana, the uncontrolled flow has been reduced to between 1 million and 2 million gallons per day. Furthermore, a sewage treatment facility in Nuevo Laredo is nearing completion and undergoing testing and will soon begin treating the city's domestic and industrial wastewater, which currently drains into the Rio Grande untreated, thereby endangering human health and the environment on both sides of the border. Ciudad Juarez plans to construct wastewater treatment plants and has submitted proposals to the BECC for certification. To reduce pollution of the New River, the U.S. Congress has appropriated funds for wastewater infrastructure improvements in Mexicali. These improvements include planning and designing facilities and such short-term projects as "quick fixes" to upgrade aging and overwhelmed sewage collection and treatment systems.

To address solid waste problems, the municipality of Nuevo Laredo granted a concession to a private sector investor for the city's solid waste collection and disposal systems. Furthermore, the community opened a new solid waste landfill (with a guarded entrance) in 1993 and started patrolling illegal dump sites throughout the city. According to the Mayor of Reynosa, the municipal government has similar plans to grant a solid waste concession to private investors. In the meantime, the Ecological Commission of Reynosa, a nonprofit citizens' group, has organized trash collection drives and has been educating the trash collectors who use horse-drawn wagons about environmentally sound waste disposal practices.

Mexican border communities and U.S. colonias face the most immediate and basic environmental infrastructure needs in the border region, primarily because of financial and institutional obstacles. In Mexico, these obstacles center on the communities' lack of financial autonomy from the federal and state governments. For example, the Mexican Constitution prohibits Mexican states and municipalities from incurring financial obligations in foreign currencies and with foreign creditors. U.S. colonias are similarly dependent upon financial assistance from federal, state, and local government entities to meet their environmental infrastructure needs. Communities on both sides of the border often lack experience in planning, constructing, and operating public works projects as well as the financial and administrative ability to raise capital and to repay debt. To address these obstacles, the NADBank—in coordination with the BECC and responsible U.S. and Mexican government agencies (such as EPA) and border communities—plans to assemble innovative financing packages to make infrastructure projects financially viable and self-supporting.

Both Mexican communities and U.S. colonias face financial and institutional obstacles to obtaining funds for environmental infrastructure projects. To finance these projects, Mexican states and communities rely heavily on the revenues they receive under a revenue-sharing system supported by a federal tax.
These revenues may be used either for direct financing or as leverage for loans from domestic commercial or development banks. Mexican states generally have the power to decide the share that communities will receive, either by laws that establish allocation formulas or by legislative decree. Communities in states with allocation formulas have reliable revenue streams, which provide the best guarantee that loans to them will be repaid in a timely manner. These communities tend to use their share of the tax as collateral for loans provided by commercial and federal government banks. If a municipal borrower defaults on a loan, creditors can inform the Mexican Treasury (the agency that administers the tax), which has the authority to make the loan payment from the revenue share of the delinquent municipality. However, the revenue available to most communities is uncertain because it is dependent upon allocations made annually by legislative decree. Such uncertainty deters these communities from investing in infrastructure development.

For many environmental infrastructure projects, communities turn to Mexico's National Bank of Public Works and Services, known as BANOBRAS, as a major source of credit. BANOBRAS lends to states and communities in Mexican pesos at a few points above the Mexican Treasury rate, which currently stands at about 26 percent. BANOBRAS levies additional interest rates to reduce the risk of losses from currency devaluations, which have occurred repeatedly in the last decade. BANOBRAS also administers loans from the World Bank, which in 1994 extended a $368 million line of credit to support environmental infrastructure projects under Mexico's Northern Border Environment Project. BANOBRAS relends these funds to border communities in Mexican pesos and at higher interest rates to (1) construct solid waste, hazardous waste, and urban transportation infrastructure projects and (2) improve the ability of states and communities to administer environmental programs. According to officials of BANOBRAS and border communities, because these communities often cannot afford to borrow at the high interest rates BANOBRAS sets, they use loans from commercial banks that offer lower rates to the extent possible. However, the lack of investors' confidence in the ability of these communities to repay debt limits their access to commercial loans and makes competing with other borrowers difficult.

U.S. colonias face financial and institutional obstacles similar to those of their neighboring communities in Mexico. Because colonias are unincorporated settlements, they lack the basic financial and institutional mechanisms available to U.S. cities with operating governments and tax bases. To expand their revenue base from property taxes and fees for basic public services, some U.S. cities have expressed interest in incorporating nearby colonias. However, ongoing jurisdictional disputes about service areas among counties, cities, and corporations that supply water to rural areas have left many colonias without an environmental infrastructure to meet their basic needs. This situation is compounded by the fact that border counties in Texas and New Mexico, which are usually responsible for providing basic services for areas outside a city's jurisdiction, have a limited ability, in comparison to cities, to provide the needed environmental infrastructure and services because traditionally they have not provided these services.
As a result, these border counties often lack the necessary technical, financial, and personnel resources to assist colonias with meeting their infrastructure needs.

The strong dependence of border communities on the Mexican federal government has prevented them from gaining the experience necessary to plan, develop, and manage public works projects. As part of federal efforts to decentralize decision-making, states and communities have only recently assumed responsibility for planning and providing key public services to their residents. Municipal officials therefore have limited experience in conducting thorough economic and fiscal analyses of proposed environmental infrastructure projects. According to a BANOBRAS official, although Mexican communities have invested substantial effort to develop their administrative capabilities, they have not yet reached the point at which they can issue debt. For example, recent plans to finance an $8 million wastewater treatment plant in Ensenada, Baja California, were delayed when the NADBank's and the State of Baja California's analyses showed that the project needed technical revisions to meet the Bank's loan requirements. Specifically, the site selected for the plant and the plant's capacity to treat wastewater were inadequate.

The Mexican Constitution prohibits states and municipalities from incurring financial obligations in foreign currencies and/or with foreign creditors, which prevents them from raising capital outside of Mexico's domestic market. Consequently, according to BANOBRAS officials, although a Mexican community can negotiate a line of credit, it cannot borrow directly from the NADBank. Instead, Mexico's Treasury serves as the recipient of funds for borrowers and then forwards those funds to BANOBRAS, which relends them to the borrowers that had requested loans for specific projects. NADBank funds are loaned to Mexico's Treasury in dollars but are repaid by Mexican borrowers through user fees in pesos, resulting in a foreign exchange risk. According to NADBank officials, the Mexican government is funding a new "hedging mechanism" to provide insurance against currency devaluations for both the NADBank and its Mexican borrowers. With this protection, the NADBank will be more willing to loan funds to Mexican communities because the Bank will have greater certainty that loans will be repaid.

To supplement existing funding for environmental infrastructure projects (particularly for drinking water, wastewater treatment, and municipal solid waste), the NADBank has begun to facilitate the development and financing of environmental infrastructure projects in the U.S.-Mexican border region. According to the NADBank's Chief Operating Officer, the Bank plans to provide between $6 billion and $9 billion for investing in border environmental infrastructure projects over the next 10 years by using loans, loan guarantees, and joint arrangements with other sources of financing. The Bank also intends to provide financial advisory services to border communities that are developing projects, a key ingredient to making those projects financially viable. In providing these services, the Bank intends to play a role similar to that of an investment bank by "acting to secure needed equity, grants, and/or other sources of financing from a variety of public and private sources on a project-by-project basis." According to officials from the NADBank and the U.S.
Treasury, the Bank's investment-banking role is intended to encourage border communities to depend less on grant financing (until recently the predominant form of funding) and more on loans to be repaid through user fees or other dedicated sources of revenue. Providing loans to projects whose financial and technical elements have received the BECC's certification is intended to help border communities build the financial and technical capability to operate and maintain environmental infrastructure projects through their useful lifetimes. Because most U.S. border cities and counties (with the exception of colonias) are rated by Moody's as investment grade, they have the financial standing to qualify for market rate loans, such as those offered by the NADBank. However, it is unclear whether U.S. border cities and counties will turn to the NADBank for financial assistance. The Bank's credit guidelines stipulate that for direct lending in U.S. dollars, the Bank will charge an interest rate of at least 1 percent above U.S. Treasury rates for securities having comparable maturity dates. U.S. state and local officials told us that U.S. border communities have cheaper sources of capital for infrastructure financing at their disposal, such as state revolving funds and tax-exempt municipal bonds. However, NADBank officials point out that the Bank will complement existing financing to help communities that cannot meet their infrastructure needs solely through existing financing arrangements. For example, the NADBank is reviewing a $25 million potable water treatment project the BECC has certified for the City of Brawley, California. Brawley has requested the NADBank's assistance to develop a financing package to access about $17 million in private sector financing, with the remaining balance coming from state and federal grants ($3.85 million) and a state loan ($5 million).

Because Mexico lacks a mechanism similar to state revolving funds, NADBank loans are an attractive alternative for Mexican communities that are able to incur and repay debt, provided they do not have to obtain those loans through BANOBRAS at a significantly higher interest rate. Most Mexican communities, however, have yet to achieve the financial standing in capital markets to meet the NADBank's high standards for creditworthiness. According to BANOBRAS officials, the Mexican federal government will likely continue to play a significant role in providing financial backing to its border communities. This assistance will be provided either through BANOBRAS or through financial guarantees provided by the Mexican Treasury's federal tax and revenue-sharing system. However, raising capital in foreign markets is difficult because Mexico's current economic situation puts its credit rating just below investment grade for foreign currency. The Mexican government, in conjunction with the NADBank, is seeking to resolve some of the financial and administrative obstacles challenging Mexican communities. As described earlier, to protect the NADBank's investors against potential losses from Mexican currency devaluations, the Mexican government has created a hedging mechanism for loans issued in dollars. This mechanism is a form of insurance for both the NADBank and its Mexican borrowers that will provide short-term emergency capital to continue repaying loans, thereby preventing defaults.
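The foreign exchange risk that this mechanism addresses can be made concrete with a simple, hypothetical illustration; the payment amount and exchange rates below are placeholders chosen for arithmetic clarity, not actual NADBank loan terms or historical rates.

    # A fixed dollar-denominated debt service payment must be funded from
    # peso-denominated user fees, so a devaluation raises the peso cost of
    # the same dollar payment. All figures are hypothetical.

    def peso_cost(usd_payment, pesos_per_dollar):
        """Pesos a borrower must collect to cover a fixed dollar payment."""
        return usd_payment * pesos_per_dollar

    annual_payment_usd = 1_000_000                 # hypothetical debt service
    before = peso_cost(annual_payment_usd, 7.5)    # pre-devaluation rate
    after = peso_cost(annual_payment_usd, 10.0)    # post-devaluation rate
    print(before, after, f"{after / before - 1:.0%}")  # 33% more pesos needed

The hedging mechanism is intended to bridge exactly this gap, supplying short-term capital so that a borrower can keep making dollar payments after a devaluation.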
To complement existing funding from the NADBank, the Mexican government has also created a revolving loan fund administered by BANOBRAS to encourage private sector investment in infrastructure projects that might not otherwise receive funding because of their size, risk, and/or low return on investment.

Because U.S. colonias lack basic financial and administrative capabilities, state environmental officials do not believe that NADBank loans will be a practicable option for assisting them. To meet the special needs of colonias for water infrastructure assistance, Texas and New Mexico have received about $186 million, approximately 36 percent of all EPA's funding for border projects over the last 5 years. Even with this funding, most colonia residents have not benefited from environmental improvements, primarily because the cost to connect to nearby systems is prohibitive. Federal and state environmental officials believe that grant funds will continue indefinitely to be the primary funding source for U.S. colonias and similar Mexican communities. To assist with these needs, the United States and Mexico have agreed to provide $700 million each in grant funds to border communities over 7 to 10 years (beginning in fiscal year 1995) to supplement the NADBank and other funding sources. The U.S. share of these funds will be provided through EPA, which plans to fulfill its commitment within the next 6 years. In addition, EPA and the NADBank have begun to formalize their working relationship through meetings and correspondence to improve the border communities' access to financing for infrastructure projects. The agency has also entered into a formal agreement with the International Boundary and Water Commission to provide financial and technical assistance to border communities to meet the BECC's certification requirements and, in turn, qualify for financing from the NADBank. Under this agreement, EPA has begun to make funding available for wastewater treatment facility planning and is currently evaluating additional avenues for project development.

Between fiscal years 1991 and 1995, EPA spent approximately $520 million on border-related environmental activities in two general categories—funds that the Congress had earmarked for water infrastructure assistance ($441 million) and funds that were spent at the agency's discretion ($79 million). The funds earmarked by the Congress were channeled to (1) the International Boundary and Water Commission, primarily to reduce wastewater flows from Mexico into the United States and the pollution of surface water and groundwater resources shared by the two nations, and (2) Texas and New Mexico, to provide water infrastructure assistance to colonias. The remainder of EPA's funding was spent at the agency's discretion and supported a variety of media-specific activities outlined in the 1983 La Paz Agreement as well as other priorities for the agency. EPA's discretionary expenditures for border-related activities were spread across 11 program areas. (See table 1.) Funding within each program area was further divided across a wide range of projects, such as training; technical assistance; data gathering on the types, magnitudes, sources, and impacts of pollution; coordinating existing data on the border region; and testing low-cost and/or less-polluting technologies.
For example, the Compendium of EPA Binational and Domestic U.S.-Mexico Activities (June 1995) and the listing of the Border XXI Community Grants for fiscal year 1995 show that EPA disbursed these funds to over 130 projects. The expenditures shown in table 1 include a variety of media-specific projects that EPA has funded on the basis of input from stakeholders in the border region's environmental activities, including binational workgroups, EPA's program and regional offices, state and local governments, and nongovernmental organizations. Some of the activities receiving EPA's funding clearly target environmental needs and provide details on how the information gathered will be used to remediate a specific problem. For example, one project is establishing air-monitoring networks in Tijuana and Mexicali to determine the sources, magnitude, and effects of air pollution. EPA plans to use the data collected from this effort to develop cost-effective control strategies and to measure progress and compliance. Nevertheless, many of the projects funded at EPA's discretion are activities that do not include environmental indicators and specific objectives that are clearly linked to measurable environmental outcomes. For example, several of the agency's binational activities are driven by objectives that include facilitating the exchange of information, improving peer relations, and reaching an understanding between the United States and Mexico. While regional agency officials believe that many of these activities will improve environmental conditions by increasing the ability of Mexican communities and agencies to address pollution problems and by developing basic information upon which to make funding decisions, they also acknowledged that the benefits of these activities may not be directly traceable to environmental improvements. Activities with general objectives that lack environmental indicators make it difficult for EPA to link specific activities to measurable changes in environmental conditions and to measure the effect of its funding decisions on remediating the border region's most critical environmental problems. According to an EPA headquarters official, the agency could have been more thorough in quantifying the effects of its expenditures on improving conditions in the border region.

EPA plans to initiate several efforts to link its future funding decisions on projects for the border region to environmental goals. For example, EPA plans to play a central role in a newly established binational workgroup that will inventory all existing environmental information for the border region. EPA also plans to focus its funding on projects that have measurable environmental benefits. In addition, EPA plans to assess the border region's water supply and wastewater infrastructure needs and has initiated a dialogue with the NADBank to discuss cooperative funding arrangements for environmental projects.

In the absence of a comprehensive assessment of needs among all environmental media and program areas, over the past 5 years EPA has spent about $79 million at its discretion to address various environmental problems (including activities to improve the quality of air and safely dispose of hazardous waste). While EPA officials told us that the agency has not initiated any actions to prepare such a comprehensive assessment, they said that the agency will likely assess these needs within 5 years.
Timely action to establish priorities based upon such an assessment is essential to EPA's selecting the most critical projects to fund. According to a NADBank official, the Bank believes its success depends on EPA's timely efforts to provide funds for environmental infrastructure projects with the highest priority. He noted that the problems confronting the region greatly exceed the public and private finances available to address them over the next several years and that expenditures of limited funds should be directed to achieve the maximum environmental benefits for the region. He said that the Bank views EPA's funding as critical to the Bank's development of affordable financing packages for border communities and assistance in building their technical, financial, and administrative capacity to support infrastructure projects. EPA will continue its role in assisting border communities with their environmental infrastructure needs under the new Border XXI Program released in draft form in June 1996. This program will build upon efforts taken under EPA's Integrated Environmental Plan for the Mexican-U.S. Border Area (First Stage, 1992-1994) to improve environmental conditions in the border region. The new program will attempt to overcome shortcomings identified with that plan by expanding its scope, increasing public input into the decision-making process, integrating environmental protection with natural resource management, and increasing attention to environmental health concerns. U.S. and Mexican federal entities responsible for environmental conditions in the border region will work cooperatively through nine multiagency Border XXI Workgroups to implement the new program. Among the objectives of the Border XXI Program will be to inventory all environmental data for the border region and to establish environmental indicators. Inventorying all environmental data would help ensure that EPA's limited funds for the border region are spent on activities that address the most urgent needs first. In addition, a timely assessment of environmental data would help environmental stakeholders in the region target their funding requests, the NADBank consider funding requests from border communities, and the Congress earmark funds for the region's highest-priority needs. However, the program does not include specific plans to use the inventory of environmental data to establish criteria within as well as across the nine Border XXI workgroups, set priorities based upon the established criteria, and clearly link the activities chosen for funding to environmental indicators. Although the United States and Mexico have made some progress in improving the border region's environmental infrastructure, serious pollution problems persist that pose an ongoing threat to the health of residents and the environment. The environmental infrastructure needs of Mexican communities and U.S. colonias are particularly acute because of insufficient financial and technical resources. Limited access to affordable financing continues to prevent many of these border communities from extending basic environmental infrastructure services to residents. To improve access by border communities to needed infrastructure financing, EPA and the NADBank have begun to formalize their working relationship through meetings and correspondence. Similarly, the International Boundary and Water Commission and EPA have formally agreed to support the wastewater infrastructure planning efforts of U.S.
and Mexican border communities to help them meet the BECC’s certification requirements and enhance their eligibility for financing from the NADBank. Despite these efforts, it is not certain that this financing will be affordable to communities on either side of the border. EPA’s funding for the border region provides a critical resource for U.S. border communities that lack the necessary financial and technical capacity to address their basic environmental infrastructure needs. The agency has been working to improve the bases for making funding decisions on border-related activities through several data-gathering, coordination, and other efforts. EPA plans to build on its ongoing border-related activities under the new Border XXI Program. This will include a central role for the agency in inventorying all environmental information for the border region and assessing this region’s needs for water supply and wastewater infrastructure. However, the draft Border XXI Program does not detail specific plans to use this inventory of environmental information to sequentially do the following: establish criteria within as well as across the nine binational workgroups, set priorities based upon these criteria within and across these groups, and link the priority activities it chooses to fund to measurable environmental outcomes. Such a systematic approach is needed to ensure that EPA’s limited funds target the region’s most critical needs first. To ensure that EPA’s funding for border-related activities addresses the region’s highest-priority environmental needs, we recommend that the Administrator, EPA, work with key federal entities in the United States and Mexico that are involved in developing and implementing the U.S./Mexico Border XXI Program to ensure the program includes specific plans to (1) use the inventory of all environmental data for the border region to establish criteria within as well as across the nine binational workgroups (taking into account the relative risks to human health and the environment), (2) set priorities within and across the binational workgroups according to the established criteria, and (3) clearly link the priority activities chosen for funding to environmental indicators. We provided copies of a draft of this report to the State Department and EPA for their review and comment. We met with officials of these agencies who are responsible for environmental programs in the U.S.-Mexican border region. These officials included the Principal Deputy Assistant Administrator for the Office of International Activities, EPA; Chief of the Municipal Assistance Branch, Office of Wastewater Management, EPA; the Environmental Officer and the Special Assistant, International Boundary and Water Commission, both with the Office of Mexican Affairs, State Department; and the Deputy Director, International Finance and Development, State Department. State Department and EPA officials generally agreed with the information in the report and provided technical and editorial comments that we have incorporated into the report as appropriate. However, EPA had more extensive comments and wanted us to include some additional points. The principal comments are discussed below. EPA officials believe that the agency has made significant progress in meeting its obligations under the La Paz Agreement. 
This progress has primarily been made through (1) establishing binational workgroups and (2) setting joint binational priorities within these groups through negotiations with their Mexican counterparts and with input from key environmental stakeholders in the border region. Although we agree that EPA has made progress, as noted in our report, the agency did not use its available data on media and programs to comprehensively assess the border region's environmental needs before negotiating joint binational priorities with its Mexican counterparts. Such an assessment is needed to prioritize projects within as well as across binational workgroups to (1) allow the relative merits of competing projects to be ranked by decisionmakers according to their urgency and (2) maximize the use of limited funding to achieve the greatest environmental benefits. EPA officials also provided us with a copy of the agency's U.S./Mexico Border XXI Program: Draft Framework Document after we had submitted our draft report to them for comment. This new program details the plans of EPA and other key U.S. and Mexican federal entities for the border region and will build on current binational efforts. The draft program's objectives include plans to inventory all existing environmental information on the border region and develop environmental indicators to measure whether environmental policy is addressing the most urgent environmental problems there. The program also states that each year the program's priorities will be weighed against available funding. This program is a good start towards addressing shortcomings identified under EPA's Integrated Environmental Plan for the Mexican-U.S. Border Area (First Stage, 1992-1994) because it includes plans to organize environmental information, expand public participation, and address environmental health concerns. However, the draft program does not clearly state that it will sequentially do the following within as well as across the nine binational workgroups: use the inventory to establish criteria, use these criteria to set priorities, and then use these priorities to determine which activities are most urgent and merit funding. In addition, the draft program should link all funded activities to environmental indicators. Without a systematic approach, EPA cannot prioritize projects within and across binational workgroups to ensure that its limited funds are used to target the highest-priority needs first. In light of the new information EPA provided, we have modified our recommendations to address the U.S./Mexico Border XXI Program: Draft Framework Document. To respond to this report's objectives, we met with officials from EPA headquarters and regional offices as well as the Departments of State and the Treasury. We also interviewed a wide range of other U.S. and Mexican officials from both governmental and nongovernmental organizations. In addition, we reviewed documents provided by these officials as well as pertinent laws and regulations. We also traveled extensively in the U.S.-Mexican border region. Appendix I contains additional information on our scope and methodology. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Administrator of EPA and the Secretary of State. We will also make copies available to others on request. Please call me at (202) 512-6111 if you or your staff have any questions.
Major contributors to this report are listed in appendix II. Concerned about the efforts of the United States and Mexico to address environmental infrastructure needs in the border region, the Ranking Minority Member of the House Committee on Commerce asked us to examine (1) the U.S.-Mexican border region's current and projected unmet needs for environmental infrastructure, (2) the financial and institutional challenges each country faces in addressing present and future environmental infrastructure needs, and (3) the way in which the Environmental Protection Agency (EPA) has identified and prioritized funding for environmental problems along the U.S.-Mexican border. We reviewed relevant documents and agreements between the United States and Mexico, such as NAFTA's supplemental agreement on environmental cooperation for the border region and the accompanying legislation to implement it, the 1983 La Paz Agreement, the Integrated Border Environmental Plan, and the International Boundary and Water Commission's Minutes on sanitation issues in the region. To review the border region's environmental infrastructure needs and the financial and institutional obstacles facing its communities, we reviewed documentation from EPA, the Office of the U.S. Trade Representative, the Office of the Texas Governor, the California Environmental Protection Agency, the Texas Natural Resource Conservation Commission, and the Texas Water Development Board, as well as nongovernmental organizations such as the U.S. Council of the Mexico-U.S. Business Committee, the Sierra Club, the Texas Center for Policy Studies, the Environmental Law Institute, and the International City/County Management Association. We interviewed officials from EPA headquarters and Regions 6 and 9; the EPA Representative to the U.S. Embassy in Mexico City; the New Mexico Environment Department; the Office of the U.S. Trade Representative; the Treasury Department's Office of International Debt Policy; the State Department (primarily Consulate General staff in the border region); the BECC, including members of the Board of Directors; and the General Manager and Deputy Manager of the NADBank. We also interviewed representatives of nongovernmental organizations on both sides of the border, such as the Sierra Club, the Texas Center for Policy Studies, the Environmental Defense Fund, the Border Ecology Project, the Environmental Health Coalition, the Center for International Environmental Law, the Southwest Center for Environmental Research and Policy, the Northern Border College, the Ecological Commission of Reynosa, the Natural Resources Defense Council, the Surfriders' Foundation, and the U.S.-Mexico Border Progress Foundation, as well as the Mexican Embassy in Washington, D.C. In Mexico, we interviewed officials and reviewed documents from the National Water Commission; the Ministry of Social Development; the Ministry of Environment, Natural Resources, and Fisheries; the Office of the Mexican Attorney General for Environmental Protection; the Ministry of Commerce and Industrial Development; the Secretariat of Foreign Relations; the National Bank of Public Works and Services (BANOBRAS); and the World Bank. To complement our review of documents and information gathered from interviews, we visited the sister cities of Brownsville/Matamoros, McAllen/Reynosa, Laredo/Nuevo Laredo, El Paso/Ciudad Juarez, Calexico/Mexicali, and San Diego/Tijuana to interview a wide range of governmental and nongovernmental officials.
We chose these cities on the basis of their relative size, the severity of their environmental problems, and the level of investment in their environmental infrastructure projects. In Matamoros, we visited the municipal solid waste disposal site and an industrial park. In Nuevo Laredo and San Diego, we toured wastewater treatment facilities managed by the International Boundary and Water Commission. We also visited colonias in El Paso, Texas, and Sunland Park, New Mexico, to assess the lack of basic environmental infrastructure. For our review of EPA's efforts to identify and prioritize border environmental problems, we interviewed officials and analyzed documents from EPA's Office of International Activities and Office of Water, Regions 6 and 9, and EPA's San Diego and El Paso border offices. We did not independently confirm the accuracy and validity of technical data provided to us by various governmental and nongovernmental organizations on both sides of the border. We performed our work from June 1995 through June 1996 in accordance with generally accepted government auditing standards. EPA and the State Department reviewed a draft of this report, and we have incorporated their comments where appropriate.
Edward Kratzer, Assistant Director
Jaime E. Lizarraga, Senior Evaluator
Beverly L. Norwood, Evaluator-in-Charge
Karen Keegan, Senior Attorney
GAO provided information on the U.S.-Mexican border region's unmet environmental infrastructure needs, focusing on: (1) the financial and institutional challenges facing the United States and Mexico; and (2) how the Environmental Protection Agency (EPA) identifies and prioritizes funding for environmental problems along the U.S.-Mexican border. GAO found that: (1) many environmental infrastructure needs remain unmet on both sides of the border; (2) these needs are particularly acute on the Mexican side of the border, where the basic infrastructure is ill equipped to handle sewage collection, wastewater treatment, and solid waste disposal; (3) some Mexican communities need to expand the capacity of their infrastructure to meet ever-increasing population demands and industrial growth; (4) the Mexican border region has the capacity to treat 34 percent of its wastewater; (5) the border communities in Texas have the capacity to meet their solid waste disposal needs for at least 10 years; (6) EPA has spent approximately $520 million to help address pollution problems along the U.S.-Mexican border, but it has not developed agencywide criteria to ensure that its resources target the region's highest-priority needs; (7) communities on both sides of the border lack experience in planning public works projects, as well as the financial capacity to fund these projects; (8) the North American Development Bank provides financing for environmental infrastructure projects by securing equity, grants, and other sources of funding on a project-by-project basis; and (9) the Border XXI Program provides information on how to improve environmental conditions along the U.S.-Mexican border, develop environmental indicators, expand public participation, and address environmental health concerns.
Congress passed the Occupational Safety and Health (OSH) Act in 1970 to ensure safe and healthful working conditions for working men and women, including federal employees. While OSHA was created to administer the OSH Act, the act also gave federal agencies primary responsibility for providing federal employees with working conditions and workplaces that are free from safety and health hazards. The act authorizes OSHA to set mandatory occupational safety and health standards, rules, and regulations and to enforce compliance with them. In turn, each federal agency is required to establish and maintain a comprehensive and effective occupational safety and health program that is consistent with OSHA's standards. OSHA's Office of Federal Agency Programs within its Directorate of Enforcement Programs has primary responsibility for overseeing federal agencies' safety programs. OSHA's regulations and an Executive Order establish its responsibilities for monitoring federal agencies' programs. OSHA uses two strategies to provide oversight of federal agencies' safety programs—enforcement and compliance assistance. OSHA's enforcement strategy includes inspections and evaluations of federal worksites that help ensure that federal agencies are not violating any OSHA standards and are complying with the requirements for their safety programs. In addition, agencies are required to submit annual reports to OSHA on their safety programs, which OSHA uses to prepare an annual report to the President on federal agencies' safety programs. OSHA's compliance assistance strategy consists of a range of programs intended to help agencies improve their safety programs. OSHA is authorized to conduct inspections of federal agency worksites but, as figure 1 illustrates, inspections of federal worksites represent a very small percentage of OSHA's overall inspections. Between fiscal years 2000 and 2004, less than 1 percent of OSHA's inspections were of federal worksites in executive branch agencies; the remainder—about 99.5 percent—were primarily of private-sector worksites. Federal executive branch workers represented about 1.4 percent of the overall U.S. workforce between 2002 and 2004. Inspections are conducted by OSHA's 80 area offices in its 10 regions. Figure 2 shows the location of OSHA's 10 regions. OSHA categorizes inspections as those that are "programmed" and those that are "unprogrammed." Programmed inspections are those that OSHA plans to conduct because it has targeted certain worksites for inspection due to their potential hazards. Unprogrammed inspections are not planned; they are prompted by events such as complaints, accidents, and referrals. OSHA has established a system of inspection priorities that relate to these categories, with unprogrammed inspections being a higher priority than programmed inspections. Top priority goes to imminent danger situations in which death or serious physical harm could occur. The next priority for OSHA inspectors is catastrophes and fatal accidents, followed by complaints and referrals. Programmed inspections are OSHA's fourth priority. OSHA's last priority is to perform follow-up inspections, which are conducted to ensure that hazards identified during previous inspections have been corrected. From fiscal years 2000 through 2004, only 40 percent of OSHA's inspections of federal worksites were programmed. During the same period, 54 percent of its inspections of non-federal worksites were programmed.
OSHA is required to conduct comprehensive annual evaluations of the larger or more hazardous federal agencies. OSHA summarizes the results of these evaluations in reports that include information from its review of an agency's safety policies and reports, as well as from inspections of the agency's facilities and interviews with agency personnel. In addition, OSHA is required to submit to the President an annual report on the status of federal employees' occupational safety and health. OSHA uses the reports that federal agencies submit annually on their safety programs—along with the results of any evaluations it has conducted of those programs—to prepare its annual report to the President. The report should also contain recommendations for improving agencies' performance. OSHA's compliance assistance strategy consists of several programs available to federal agencies, although some programs have only recently been offered to federal employers. OSHA also provides technical support to federal agencies, such as conducting studies of accidents and the causes of injuries and illnesses, and providing training for agencies' safety and health personnel. Two of OSHA's compliance assistance programs—Field Federal Safety and Health Councils and Agency Technical Assistance Requests—specifically target federal agencies, while others are generally available to both private- and public-sector employers. Compliance assistance programs for private- and public-sector employers include the Voluntary Protection Programs (VPP), alliances, and strategic partnerships. Approval as a VPP site is OSHA's official recognition of worksites that have implemented exemplary safety and health programs. The VPP was started in 1982 for private-sector companies but was expanded to include federal agencies in 1997. The alliance program was started in 2002 and includes organizations that have agreements with OSHA to focus on training, outreach, and promoting awareness of safety and health issues. The strategic partnership program was started in 1998 and consists of agreements between OSHA and employers to address specific safety and health problems. OSHA also has been responsible for helping implement Presidential initiatives; the most recent of these, the Safety and Health and Return to Employment (SHARE) initiative, was issued in 2004. This initiative directs agencies to set and adhere to both safety and workers' compensation goals. Specifically, the initiative directs federal agencies to achieve four goals: (1) reduce the overall case rate for workers' compensation claims, (2) reduce the lost-time rate—the number of employees who could not return to work per 100 employees in the workforce, (3) improve the processing time of workers' compensation claims, and (4) reduce the lost production day rate—the lost days due to injury or illness per 100 employees. OSHA works with agencies in addressing the first two goals and helps them calculate the rates monitored. OSHA's regulations establish the basic elements of executive agencies' safety and health programs.
According to the regulations, agencies' programs must include provisions for top management support, participation, and accountability; safety and health policies, procedures, and standards; goals and objectives; worker involvement; safety and health training of managers and workers; collection of occupational injury and illness data; self-inspection of workplaces and self-evaluation of the programs; abatement of unsafe and unhealthful working conditions; and adequate budgets, staff, and equipment and materials. In conducting self-inspections, agencies must meet certain requirements. Inspectors are required to be qualified to recognize and evaluate hazards and suggest corrections, and they must conduct inspections of every worksite at least once a year. According to the regulations, agencies should conduct "sufficient" unannounced inspections and unannounced follow-up inspections to ensure the identification and correction of hazardous conditions. Agencies must also report annually to OSHA on their programs. In November 2004, OSHA issued a final rule amending the injury recordkeeping and reporting requirements applicable to federal agencies. Prior to this time, federal agencies were required to collect only injury information related to workers' compensation claims. OSHA revised the recordkeeping requirements in order to improve the quality of the federal recordkeeping system and to increase the utility of the data. Beginning in January 2005, federal agencies were required to record injuries in the same manner as private-sector employers and to apply new criteria to determine whether an injury must be recorded. Specifically, a work-related injury must be recorded if, for example, it results in death, 1 or more days away from work, restricted work, loss of consciousness, or a significant injury or illness diagnosed by a physician. The regulations do not require that these data be reported to OSHA. However, the regulations state that agency heads must submit an annual report to OSHA, containing such information as OSHA requests. At a minimum, these reports are to describe the agency's safety program and include, among other things, the agency's required self-evaluation findings. OSHA uses these reports, along with any evaluations it has conducted, to prepare its annual report to the President. Safety experts and federal safety agencies agree that, to build an effective safety program, organizations must take a strategic approach to managing workplace safety and health. This objective is generally accomplished by establishing programs built upon a set of commonly recognized components of sound safety programs, which, together, help an organization lay out what it is trying to achieve, assess progress, and ensure that safety policies and procedures are appropriate and effective. Drawing from our prior work, a review of the literature, and OSHA's requirements, we identified six components often found in sound safety programs: (1) management commitment, (2) employee involvement, (3) education and training, (4) identification of hazards, (5) following up and correcting hazards, and (6) medical management. Table 1 lists these components, along with a description of their supporting activities. Over the last 10 years, the federal executive branch workforce has changed in a number of ways, including its size, demographic characteristics, experience levels, and types of occupations.
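The recording criteria in the new rule amount to a simple decision test. The sketch below, in Python, illustrates one way that test could be encoded; the class, field, and function names are hypothetical, invented for illustration, and do not represent OSHA's or any agency's actual system.

```python
# Illustrative sketch only: a hypothetical encoding of the recording criteria
# described above (death, days away from work, restricted work, loss of
# consciousness, or a significant physician-diagnosed injury or illness).
from dataclasses import dataclass

@dataclass
class InjuryCase:
    work_related: bool
    resulted_in_death: bool = False
    days_away_from_work: int = 0
    restricted_work: bool = False
    loss_of_consciousness: bool = False
    significant_diagnosis: bool = False  # diagnosed by a physician

def must_be_recorded(case: InjuryCase) -> bool:
    """Return True if the case meets any of the recording criteria."""
    if not case.work_related:
        return False
    return (case.resulted_in_death
            or case.days_away_from_work >= 1
            or case.restricted_work
            or case.loss_of_consciousness
            or case.significant_diagnosis)

# Example: a work-related injury causing 3 days away from work must be recorded.
print(must_be_recorded(InjuryCase(work_related=True, days_away_from_work=3)))  # True
```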
During this time, there was a 6 percent decrease in the federal workforce—from 2 million employees in fiscal year 1995 to 1.9 million in fiscal year 2004. In addition, the average age of federal workers increased from 44 to 47 years old and the average length of time in service increased slightly, from 16 to 17 years. Likewise, the average pay grade level of federal workers increased from approximately GS-9 to about GS-10. Moreover, the percentage of workers in professional and administrative positions increased from 85 to 89 percent. Federal employees encompass a wide range of professions, ranging from low-risk occupations such as office workers to highly hazardous occupations such as law enforcement positions. For example, at the U.S. Marshals Service, duties of criminal investigators include seizing assets and apprehending fugitives. In addition, U.S. Forest Service employees are involved in a variety of potentially hazardous activities such as developing laboratory products, managing recreational lands, and fighting wildland fires, while inspectors with the Food Safety Inspection Service face daily hazards such as exposure to the chemicals used to kill pathogens in meat. Finally, employees at manufacturing operations such as the U.S. Mint and the Bureau of Engraving and Printing use industrial production equipment such as forklifts and presses. The impact of demographic changes in the makeup of the federal workforce on the number of injuries these workers sustain is unclear. The number of active workers' compensation claims for work-related injuries declined from approximately 154,000 claims in fiscal year 1995 to about 137,000 claims in fiscal year 1999. However, these claims increased from approximately 138,000 claims in fiscal year 2000 to about 148,000 claims in 2004, as shown in figure 3. Although the severity of the injuries changed during this period, the types of injuries that federal workers incurred remained the same. Although the number of traumatic injury claims decreased slightly—from 76,633 claims in fiscal year 1995 to 74,322 claims in fiscal year 2004—traumatic injury claims increased slightly as a proportion of total claims over this same period. Non-traumatic injury claims, however, decreased by over 30 percent during this period—from 8,508 claims in fiscal year 1995 to 5,903 claims in 2004. In addition, the top five types of traumatic injuries incurred by federal workers during this period ranged from sprains and strains of ligaments, muscles, or tendons to lacerations. Over this same period, the five most common types of non-traumatic injuries ranged from hearing loss to back sprain or strain. See table 2 for a list of the five most common types of traumatic and non-traumatic injuries federal workers incurred from fiscal year 1995 to 2004. While the size of the workforce declined, workers' compensation costs for federal employees remained fairly constant during the most recent 10-year period, from about $1.54 billion in fiscal year 1995 to about $1.52 billion in 2004. In addition, the compensation per claim filed during this period increased. For example, while there were about 1,800 fewer new claims in fiscal year 2004 than in fiscal year 1995, the average compensation per claim increased by 3 percent from fiscal year 1995 to 2004, with the average payment per claim rising from $9,958 in 1995 to $10,242 in 2004.
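The 3 percent figure follows directly from the two reported averages. A minimal check in Python, using only the dollar amounts cited above:

```python
# Average workers' compensation payment per claim, as reported above.
avg_1995 = 9958   # dollars, fiscal year 1995
avg_2004 = 10242  # dollars, fiscal year 2004

pct_change = (avg_2004 - avg_1995) / avg_1995 * 100
print(f"{pct_change:.1f}%")  # about 2.9%, which rounds to the 3 percent cited
```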
As shown in figure 4, the largest amount of workers' compensation costs for federal workers paid from fiscal years 1995 to 2004 was for claims that were over 5 years old. Finally, the proportion of payments for lost wages, death benefits, medical costs, and rehabilitation has remained constant, with wage loss compensation being the largest proportion (approximately 70 percent) of workers' compensation payments made from fiscal years 1995 to 2004. (See fig. 5.) Information reported by the 57 federal agencies illustrated various ways in which agencies carry out activities within the six safety program components—management commitment, employee involvement, training, identification of hazards, correction of hazards, and medical management. However, agency officials we surveyed and interviewed reported they face a number of implementation challenges that cut across the components, particularly in using automated systems, holding managers accountable for maintaining an effective safety program, and making the best use of their limited resources. Officials at these agencies also described measures they have taken to overcome each of these challenges. All of the 57 agencies surveyed reported that their safety programs incorporate activities for the management commitment component. Activities supporting management commitment include setting goals for the program and communicating from upper management to frontline staff about the importance of the safety program. Fifty-five of the agencies surveyed (96 percent) reported that they had established goals for their safety and health programs, and all 57 agencies reported conducting activities to communicate the importance of their safety programs to employees, such as through newsletters and Web sites. Almost all of the agencies we surveyed reported that they conduct activities for two other components—employee involvement and training. Most agencies reported having policies governing employees' participation in safety committees and reporting injuries and hazards. While 56 (98 percent) of the agencies provided procedures for employees to report hazards, half of these procedures did not specify the right of employees to report hazards anonymously, as required by an executive order. Consistent with OSHA regulations, 56 of the 57 agencies reported that they offer some type of safety training for their employees. While many agencies identified a number of methods for identifying hazards, fewer had comprehensive procedures for tracking whether hazards are corrected—two additional components of safety programs. Fifty-five (96 percent) of the agencies reported that they conduct OSHA-required inspections, which must be performed at least once a year, in order to identify worksite hazards. However, although an executive order requires employee representatives to participate in these inspections, seven agencies (12 percent) reported not having any procedures for informing employees of their role during safety inspections. Furthermore, while most agencies reported having some procedures for following up on inspections and ensuring that hazards are corrected, we found that the procedures are not always adequate because a third of these agencies did not specify a reasonable timeframe for correction, as required by OSHA. Agencies reported having the fewest activities for the medical management component. Eight agencies (14 percent) reported that they do not have any procedures designed to ensure that an injured employee is seen promptly by a physician.
In addition, 12 agencies (21 percent) reported they do not have programs for offering injured employees light or restricted duty to help them return to work more quickly. Another 11 agencies reported having such programs but did not provide sufficient documentation of them. For example, two agencies reported having return-to-work programs, but the documentation they provided showed that the programs had not yet been implemented. Although federal agencies are not legally required to include these activities in their safety programs, the failure to include them may limit the effectiveness of the programs. We found that agencies face some common challenges in implementing their safety programs, particularly in using automated systems to manage their programs, holding managers accountable for workplace safety, and operating with limited resources. The use of automated systems presented challenges for many agencies. Some agencies did not use such systems, while others cited difficulties in identifying systems that would allow them to collect data relevant to their safety programs. For example, 23 agencies (40 percent) reported that they do not have automated systems to collect information on hazards that have been identified and track whether they have been corrected in a timely manner. In addition, 14 of the agencies reporting that they have such a system (41 percent) either indicated that their hazard tracking systems were not currently operational, or they did not provide sufficient documentation to support the existence of such systems. Approximately a quarter of the agencies surveyed reported that they did not have automated systems for tracking safety training completed by their employees. Furthermore, 34 agencies (60 percent) reported they did not have an automated system for tracking the status of employees in light or restricted duty return-to-work programs, and another 16 agencies did not provide sufficient documentation of their systems. While federal agencies are not required to use automated data systems, without such a system, safety officials would have difficulty tracking broader trends such as participation rates in light or restricted duty programs and the effect of program participation on worker’s compensation costs. Ten of the 12 agencies we reviewed in more detail reported challenges with their automated systems, such as ensuring that these systems collected appropriate data needed to evaluate the effectiveness of their safety programs. For example, an official from the Tobyhanna Army Depot told us that their computer technicians were in the process of designing a hazard tracking program because no agencywide programs were available, and off-the-shelf programs required too much adaptation to be practical. She also developed a stand-alone spreadsheet to track all work-related injuries because the systems available did not capture injuries that were not recordable on the OSHA log (such as injuries requiring only first aid) or injuries for which workers’ compensation claims are not filed. Furthermore, a National Park Service official stated that entering safety meetings and other non-traditional training methods into the agency’s automated system is difficult because the system does not have data fields for recording these activities. As a result, the agency has difficulty determining the extent to which employees have been trained on many safety issues. 
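To make concrete what a hazard tracking system of the kind discussed above needs to capture, here is a minimal sketch in Python of a hazard record with a timeliness check. All class and field names are hypothetical and illustrative; this is not any agency's actual schema.

```python
# Hypothetical sketch of a minimal hazard-tracking record: each identified
# hazard gets a correction deadline, and open hazards past that deadline
# can be flagged as not corrected in a timely manner.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HazardRecord:
    worksite: str
    description: str
    identified_on: date
    correction_due: date                 # deadline set when the hazard is identified
    corrected_on: Optional[date] = None  # None while the hazard remains open

    def is_overdue(self, today: date) -> bool:
        """True if the hazard remains uncorrected past its deadline."""
        return self.corrected_on is None and today > self.correction_due

# Example: flag a hazard that is still open past its correction deadline.
record = HazardRecord("Depot A", "blocked fire exit",
                      date(2006, 1, 5), date(2006, 2, 5))
print(record.is_overdue(date(2006, 3, 1)))  # True: still open past the deadline
```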
Despite these challenges, several agencies told us they have started or are in the process of implementing automated safety systems that will allow them to collect and analyze data in order to better manage these safety programs, including assessing the effectiveness of their programs. For example, according to a Transportation Security Administration official, the agency is developing a new injury tracking system that will link injury and illness data with inspection data, allowing them to identify trends, such as where injuries commonly occur and demographic characteristics of injured employees. Similarly, officials with the Bureau of Engraving and Printing said they are testing a medical management system that will aggregate data from a number of different sources including the health unit, safety investigation reports, and their workers’ compensation system. Collecting these data will allow them to streamline the reporting process and better track injury trends. Another challenge agencies face is holding managers accountable for implementing effective safety programs. While 51 agencies (89 percent) reported having policies that establish responsibility for workers’ safety and health for all employees, 6 reported that they do not have such policies, despite an OSHA regulation requiring them to establish these policies for all management officials. Of the 51 agencies reporting having such policies, 11 agencies did not provide sufficient documentation of the policies. For example, one agency provided an Employee Performance Plan, but there were no performance expectations related to safety anywhere in the plan. Furthermore, although we asked the agencies to provide copies of their performance appraisal review forms citing safety as a rating element, only 16 agencies (28 percent) were able to do so. Agency officials and employee representatives at 7 of the 12 agencies selected for follow-up interviews cited further difficulties in maintaining accountability throughout all levels of their organizations. For example, a Veterans Health Administration employee representative reported that, while there is a high level of commitment to safety at the headquarters level, the message is diluted as it reaches lower levels of the agency. In another example, the Defense Commissary Agency implemented a program that requires regional safety managers to evaluate stores’ safety programs. Agency officials stated that regional officials are expected to follow up to ensure that stores make timely corrections, but are not required to document when hazards are corrected. As a result, the agency has little assurance that the safety of store employees is adequately protected. In addition, according to an employee representative from the Commissary, it is not always clear who is responsible for ensuring that hazards are corrected. Several agencies reported that they had developed ways to help ensure that employees and managers are held accountable for agency safety programs. For example, in order to address accountability issues within the agency, the Veterans Health Administration initiated a program that ties agency safety goals to performance ratings. Moreover, instead of simply including safety as a general element of performance review, the agency selects two to three specific safety program goals that change every few years according to agency needs. Past goals have included submitting workers’ compensation claims on time and reducing the occurrence of needle stick injuries. 
According to agency officials, bonuses for executive staff members are provided based on their progress in meeting these goals. When significant improvement has been made in these areas, safety officials set new goals—enabling continuous improvement. Both agency and OSHA officials cited challenges in funding their safety programs, although OSHA regulations require agencies to provide adequate resources to implement and maintain these programs. One agency official we interviewed reported difficulty identifying funding for the agency's safety program because safety funding is not specifically designated as a line item in its budget. This lack of information on available resources makes it particularly difficult to plan for long-term safety issues, such as developing and providing training. For example, officials with the National Park Service, which employs a large cadre of seasonal workers, reported that the lack of itemized safety funds within its budget makes it hard to develop their training plans. OSHA officials cited the federal budget process itself as problematic because it requires federal agencies to budget months in advance for safety-related equipment purchases or other safety devices, long before they may have identified the need for this equipment. This was corroborated by a U.S. Mint official who reported that it is difficult to correct hazards that require substantial capital investment and planning. In addition, a potential consequence of operating with limited resources is the use of collateral duty safety officers—employees whose primary responsibilities do not involve safety. Nearly all of the agency officials we interviewed reported relying on these positions, which are typically filled by employees who volunteer or are assigned by the agency. While some agency officials reported their collateral duty officers were appropriately trained, as required by OSHA, others reported that these officers have limited knowledge or experience in safety. Some agency officials we interviewed said that this lack of experience, as well as the limited amount of time collateral duty officers are allotted for safety duties, has made it difficult for these officers to learn all of the safety program requirements. For example, according to a National Park Service official, collateral duty officers at this agency typically spend about 10 percent of their time on their safety responsibilities, and this may inhibit their ability to respond effectively when safety concerns arise. Moreover, one Forest Service official questioned the feasibility of building safety programs around collateral duty officers and was concerned that safety duties might detract from these officers' primary job responsibilities. Finally, 11 of the 12 agencies selected for follow-up interviews reported that competing priorities make it difficult to manage their safety programs. For example, a Food Safety Inspection Service official noted that completing safety forms and fulfilling data requests can be a burden to the agency's overall mission of meat and poultry inspections. Similarly, an employee representative at the Forest Service told us that, because safety achievements are not typically recognized or rewarded—even though such recognition is encouraged by OSHA regulations—supervisors focus on meeting production targets rather than working safely. Agencies identified a number of techniques for addressing the difficulties associated with managing resources.
For example, an official with a National Park Service regional office said that they host monthly conference calls with the collateral duty safety officers at several national parks, which gives these individuals a chance to ask technical questions of the regional safety officer and share effective practices among the parks. These monthly calls also enable their collateral duty officers, who have limited backgrounds in safety, to gain knowledge and experience over time. Other agencies maximized their resources by collaborating with each other. For example, one official with the Forest Service said that they have an informal partnership with the Bureau of Land Management that allows them to pool their resources by pursuing joint activities and sharing offices and staff. One activity involved jointly developing and teaching an accident investigation course and an off-highway vehicle course. OSHA’s oversight of federal agencies’ safety programs is not as effective as it could be because it does not use its enforcement and compliance assistance resources in a strategic manner. First, OSHA does not routinely conduct inspections that target federal worksites with high injury and illness rates. In addition, OSHA lacks procedures for tracking and resolving violations disputed by federal agencies. Third, OSHA has not conducted required evaluations of the larger or more hazardous agencies in the last 6 years. Fourth, OSHA has not submitted its own annual reports to the President in a timely manner, and they have not included an assessment of each agency’s safety program, as required. Finally, while OSHA has a range of promising programs for assisting agencies in complying with its regulations and improving worker safety, not all of these programs are being fully utilized. Unlike its enforcement strategy for private-sector employers, OSHA’s oversight of federal worksites does not include a national program that targets federal worksites with high injury and illness rates for inspection. According to its internal guidance, OSHA is supposed to develop a list that targets federal worksites for inspection. However, OSHA’s Office of Federal Agency Programs has not developed such a list in over 5 years. In the past, OSHA used workers’ compensation claims data collected by OWCP to identify federal worksites with high numbers of injuries and illnesses. Because of limitations in the data, however, it was difficult to identify where each injury occurred and, therefore, use this information to target federal worksites for inspection. OSHA officials at the national office reported that they are working to start a new targeting effort but are still facing the same difficulties in using workers’ compensation data to select federal worksites for inspection. As shown in figure 6, OSHA primarily conducts inspections of federal worksites as a result of complaints. Unprogrmmed inpection initited y complintOther nprogrmmed inpection (ccident, referr, etc.) OSHA’s inspection data of federal worksites show that complaint inspections generally result in few violations compared to targeted inspections, which generally identify a greater number of serious violations (see fig. 7). For example, over the last 10 years, unprogrammed inspections, which are generally initiated by complaints, uncovered an average of one serious violation per inspection, in contrast to an average of four serious violations for programmed (targeted) inspections. 
The small average number of violations for unprogrammed inspections is driven by the fact that over half of these inspections result in no violations being identified. The new recordkeeping rule, which was implemented in January 2005, requires federal agencies to begin collecting the same injury and illness data as private-sector employers and could help OSHA develop its targeting program, according to OSHA officials. Since the new rule requires federal worksites to keep logs that include information that can be used to calculate injury and illness rates, OSHA officials said these data would be more useful in creating an effective targeting program than the workers' compensation data. While the new rule does not require federal agencies to report injury and illness data to OSHA, OSHA officials said they could target federal worksites for inspection in the same way they target private-sector employers in industries with high injury and illness rates. For its targeting program of private worksites, OSHA surveys a sample of worksites in industries with the highest injury and illness rates. The survey form requires employers to report (1) the average number of employees who worked for them during the previous calendar year, (2) the total hours the employees worked during the previous year, and (3) summary injury and illness data from their OSHA logs. OSHA then uses this information to compute the worksites' injury and illness rates and sends those with relatively high rates a letter informing them that they may be inspected. Finally, OSHA develops a list of worksites with high injury and illness rates to be targeted for inspection. As an alternative to conducting a survey of federal worksites, OSHA has the option of requiring federal agencies to report this information in their annual reports to OSHA. One of OSHA's regional offices—which includes four area offices—and an area office in another region developed their own targeting programs for federal agency worksites using the workers' compensation data. While officials reported that using the data has been difficult, they said that these efforts have resulted in improved safety at federal worksites. In addition, they reported that the agencies that were inspected have become more aware of OSHA's role and, in turn, have sought OSHA's assistance in improving their safety programs. Furthermore, agency officials whose worksites have been selected for inspection have focused more attention on safety and shared information, resulting in further improvements. For example, at one worksite in Montana, Forest Service officials reported that, after colleagues in Idaho told them OSHA had targeted federal worksites in the state for inspection, they were reviewing their safety programs and OSHA's requirements in preparation for possible OSHA visits. Officials with OSHA's national office said that they have encouraged regions to develop their own programs targeting federal agencies for inspection, but we identified some challenges that need to be addressed before more regions can successfully develop these programs. For example, one regional OSHA official reported requesting workers' compensation data from the national office to start a targeting program, but was told the national OSHA office did not have enough time to provide the data requested. In addition, regional and area office OSHA officials said that the ability to develop and maintain targeting programs depends on the resources available.
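To illustrate the rate computation that underlies this kind of targeting, the sketch below applies OSHA's conventional incidence-rate formula, which normalizes recordable cases to 200,000 hours worked (the annual hours of 100 full-time employees). The survey figures in the example are invented for illustration.

```python
# Sketch of the injury and illness rate computation underlying targeting.
# Cases are normalized to 200,000 hours, i.e., 100 full-time employees
# working 40 hours per week, 50 weeks per year.
def incidence_rate(recordable_cases: int, total_hours_worked: float) -> float:
    """Recordable cases per 100 full-time-equivalent employees."""
    return recordable_cases * 200_000 / total_hours_worked

# Example: a worksite reporting 12 recordable cases and 250 employees
# working 2,000 hours each would have an incidence rate of 4.8.
print(incidence_rate(12, 250 * 2_000))  # 4.8
```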
Besides the time and effort required to identify worksites, they said the availability of inspectors is also a factor. According to OSHA's policies, OSHA inspectors' top priority is responding to imminent danger situations, followed by accidents and then by complaints; conducting targeted inspections is a lower priority. OSHA's procedures for tracking violations disputed by federal agencies differ from those for the private sector. Whereas private-sector employers can dispute OSHA violations cited during inspections by requesting that the violations be reviewed by an independent administrative law judge, federal agencies must seek resolution with OSHA officials. In these situations, federal agencies may first request an informal conference with OSHA area office officials to discuss the violation in question. If the dispute is not resolved, it is referred to the relevant OSHA region for review and, if necessary, to OSHA's national office. While OSHA's internal instructions require that area office and regional officials be consulted in decisions made by national office officials and an Executive Order requires OSHA to submit unresolved violation disputes to the President, neither appears to be occurring. Although national office officials reported that there have not been any unresolved disputed violations, and they have not had to report any unresolved violations to the President in over 3 years, area office and regional staff told us some unresolved disputed violations from federal agencies have lingered for years. For example, a regional OSHA official reported that, in another region, a federal agency was cited for violating a safety standard that did not apply to that particular agency. The agency challenged the violation, and the dispute reached the national office, where no decision was made—leaving the violation unresolved for 7 years. Another OSHA official reported a case in which the Bureau of Prisons refused to have guards wear special gloves as required while conducting cell searches because the guards thought the gloves would not provide them with enough sensitivity to feel for objects hidden by prisoners. According to this official, it was important for the guards to wear gloves during these searches because of the danger of receiving needle sticks or cuts from sharp objects. The case reached OSHA's national office, but it chose not to act on the case—leaving the guards at risk and the violation unresolved. OSHA could not provide us with a list of all violations disputed by federal agencies or the status of their resolution because it does not have a system for tracking these disputed violations. OSHA officials at the national office indicated that part of the reason the agency has not developed such a system is that few federal agencies dispute violations. In addition, according to these officials, disputed violations are resolved in a timely manner. These officials reported that they seek to review cases in a manner similar to that in which administrative law judges review private-sector employers' cases and have considered using either a permanent or ad hoc panel to ensure consistency in their review of violations disputed by federal agencies. However, without a system for tracking violations disputed by federal agencies, OSHA cannot ensure that all disputes have been resolved or that they are resolved in a consistent manner.
Although OSHA is required to conduct annual evaluations of the larger or more hazardous federal agencies and less frequent evaluations of smaller and less hazardous federal agencies, it has not conducted any evaluations since 1999. OSHA officials reported that, because evaluations are so resource intensive, the agency did not have enough staff to conduct them. Evaluations are another element of OSHA's enforcement strategy and include both a national-level review of an agency's safety program and site-specific assessments. In the past, OSHA's national office identified federal worksites for evaluations and the area offices inspected them. OSHA's policies require agencies to correct any violations identified during inspections conducted as part of its evaluations. In addition, OSHA's internal guidance encourages its officials to coordinate evaluations with targeted inspections in order to use its resources more efficiently. The last evaluation that OSHA conducted, at the Veterans Health Administration, resulted in a report that agency officials said they still use to improve their safety program. While some OSHA officials told us that evaluations are resource intensive and ineffective because agencies have not always corrected the problems identified, other OSHA and agency officials said OSHA's evaluation of the Veterans Health Administration helped bring management and union officials together for discussions during the evaluation process. According to these officials, this improved relationship continued after the evaluation was completed. As of February 2006, OSHA had not submitted an annual report to the President summarizing and assessing the status of federal agencies' safety programs since 2000, nor had it provided recommendations of ways for federal agencies to improve their safety programs, as required. OSHA is working to reduce the backlog of these reports, according to the officials we interviewed. In addition, OSHA officials told us that they could not assess the effectiveness of these programs or make recommendations because they do not collect original data on agencies' safety programs but, instead, rely on the reports agencies provide to them on an annual basis. According to these officials, they cannot assess or evaluate agencies' programs without collecting independent information on their programs. However, we believe that OSHA could use the information provided by the agencies in their annual reports to assess agencies' safety programs, including whether they are meeting OSHA's requirements. For example, OSHA could use the agencies' reports to determine what types of safety and health training they are providing to their managers and workers, the number and types of self-inspections they are conducting of their workplaces, and the measures used to correct unsafe and unhealthful working conditions identified during these inspections. In addition, OSHA could use these reports to make recommendations for improvement. OSHA requires agencies to summarize their injury and illness rates and provide information on new initiatives they have started and their accomplishments in their annual reports. However, OSHA officials told us that they do not systematically review these reports over time to ensure that agencies are making progress. Our analysis of the agencies' reports for fiscal years 2000 through 2004 showed that agencies generally described the accomplishments of their safety programs but sometimes repeated their safety goals across years.
For example, one agency reported in 2 consecutive years that it had "launched a new e-training program" that included safety modules. In addition, agencies generally did not provide any follow-up information on their prior years' goals or challenges. For example, one agency reported having a goal to develop a database for tracking injury and illness trends but made no mention of the system in the following year's report. One OSHA regional official suggested that the national office could use regional staff more effectively by requiring each region to review selected federal agencies' annual reports. In this way, regional staff could become more familiar with specific agencies' programs, which would allow them to more readily identify discrepancies and deficiencies in the reports.

Federal agencies can receive compliance assistance from OSHA through programs developed especially for federal agencies as well as programs initially developed for private-sector employers. The two compliance assistance programs developed specifically for federal worksites—Field Federal Safety and Health Councils and Agency Technical Assistance Requests—have generally been helpful, according to OSHA officials, but they are not consistently available to all federal agencies. The programs that OSHA initially developed for private-sector employers and later expanded to federal agencies—the VPP, strategic partnerships, and alliances—have not all been widely used by federal agencies. As of January 2006, only 14 federal worksites had joined the VPP, and OSHA had established few strategic partnerships and alliances with federal agencies. Although only a limited number of federal worksites have used these programs, OSHA officials told us many of these efforts have been successful, and they are encouraging more agencies to participate.

Regions have anywhere from 2 to 13 active Field Federal Safety and Health Councils, depending on the effort regional OSHA officials have made to develop and maintain them. These councils, established by OSHA to facilitate the exchange of ideas and information about occupational safety throughout the federal government, consist of management and employee representatives from local federal agencies. OSHA officials reported that the councils are intended to provide a networking and training forum for safety officials from different agencies in a given area, but both OSHA and agency officials agreed that maintaining the councils has been a struggle. Some OSHA officials reported that federal agencies do not always give their representatives time to attend the meetings. Other OSHA officials raised concerns that federal agencies have failed to properly train their collateral duty safety officials, which has limited these officials' contributions to the councils. In addition, some officials reported that distance makes it difficult for council members to attend meetings. One OSHA area director used the state's library videoconferencing system to bring together council members from different areas and suggested that OSHA consider similar methods to encourage collaboration. On the other hand, a couple of agency safety managers and OSHA officials told us the councils are not necessarily an effective tool because safety concerns differ so much among agencies.
For example, a Department of Veterans Affairs safety manager might be focused on preventing needle sticks and identifying violent patients, while National Park Service safety staff might be concerned about snake bites and heat exhaustion.

The councils also have limited financial resources. Funding is provided solely by OSHA's regional offices and is not a line item in their budgets. While regions attempt to provide training to the councils, any budget constraint can quickly eliminate their ability to do so. Until last year, OSHA's national office sponsored an annual conference, and the regions provided the travel funds for the council presidents to attend. However, the conference was canceled in fiscal year 2005, partly because the national office did not have the funds to set up the meeting and partly because the regions reported not having the required travel funds.

OSHA officials said they are sometimes reluctant to respond to Agency Technical Assistance Requests because doing so consumes their limited enforcement resources, which can delay this assistance. Through such a request, an agency can ask OSHA for advice on hazard abatement, training, or program assistance. OSHA cannot cite agencies for violations during this process, but, in making the request, agencies understand they are expected to correct any violations OSHA observes. While these requests for technical assistance are considered part of OSHA's compliance assistance strategy rather than enforcement, OSHA area offices and regions must use their enforcement budgets and staff to conduct them. Because these offices have limited enforcement resources, a regional OSHA official told us that, although OSHA responds to all of these requests, the assistance may be delayed.

As of January 2006, there were 14 federal worksites among the more than 900 private-sector worksites in OSHA's VPP, which promotes effective worksite safety and health. In general, OSHA and agency officials told us the program is beneficial for federal agencies, and they expect more worksites to join. An agency official also said that having one federal worksite join is often an impetus for others to consider applying. For example, since the U.S. Mint in Philadelphia became a VPP site in 2005, other agencies within the Department of the Treasury have considered joining. In addition, some OSHA field staff reported that they are in the process of assisting agencies with their VPP applications. While a few agency officials told us that the VPP was not feasible for their agencies because of the resources required, many told us they had worksites seeking to join the program.

Some OSHA officials reported that federal agencies face unique challenges in joining the VPP. For example, in order to participate, agencies must have an injury and illness rate below the average for their industry. However, some agencies do not fit within a particular industry code or definition. This was the case for Yellowstone National Park when the worksite first applied to join the VPP. The park was required to classify itself in an industry category that included amusement parks and miniature golf courses, worksites with much lower injury and illness rates than the park. The industry codes were recently changed and now include a code for national parks, but Yellowstone is still at a disadvantage because its injury and illness rates are higher than those of parks, such as national monuments, that have far fewer hazards and injuries.
OSHA has developed relatively few strategic partnerships and alliances with federal agencies, although OSHA officials said those that have been formed have generally helped the agencies improve their safety programs. Strategic partnerships are agreements that employers make with OSHA to address specific safety and health problems, while alliances are agreements organizations make with OSHA to focus on training, outreach, and promoting awareness of safety and health issues.

OSHA has created a limited number of strategic partnerships with federal agencies at the national and regional levels. At the national level, OSHA has one partnership—an agreement with the Army created in October 2004 aimed at increasing awareness of safety, reducing ergonomic injuries, and sharing best practices. At the regional level, OSHA has 7 current and 10 completed partnerships with federal agencies. (See table 3.) In general, OSHA officials said that these partnerships have helped agencies reduce their injury and illness rates by helping them develop stronger safety programs. However, in two instances OSHA terminated its strategic partnerships with federal agencies prior to their completion, either because OSHA and the agency could not agree on the terms of the partnership or because the agency lacked the commitment to make the changes needed to improve its safety program.

Federal agencies have joined two national alliances and formed a total of 10 regional or local alliances. While most of the alliances have focused on general safety issues, more recently Region 10 signed an alliance with the Fort Lewis Army Garrison that focuses on improving training and communication for emergency response efforts. According to one OSHA official, this alliance has leveraged both agencies' resources well: OSHA has gained training from Fort Lewis on emergency response techniques, and Fort Lewis has utilized OSHA's expertise in properly fitting staff members for personal protective equipment to be worn during an emergency response.

OSHA assists federal agencies with SHARE, the Presidential initiative begun in 2004 and intended to encourage federal agencies to improve their safety programs and reduce federal workers' compensation costs, but the impact of the initiative on agencies' safety programs is not clear. Specifically, OSHA officials reported coordinating with OWCP to provide training to the agencies about SHARE, but they had differing views on the initiative's effectiveness. According to some OSHA officials, the initiative has encouraged agencies' national offices to pay more attention to safety issues than they otherwise would have. Other officials said that they thought SHARE was a paper exercise rather than a tool for agencies to improve their safety programs, or that this type of program might encourage underreporting of injuries. OSHA's national office uses workers' compensation data to calculate agencies' injury and illness rates to determine whether they have met their SHARE goals related to workers' safety, but it has not conducted any agency reviews to determine whether underreporting has increased, according to OSHA officials. OSHA officials at the national office said that they would like to use the SHARE data to develop a list of agencies to target for inspection. By focusing on agencies that are not meeting their SHARE goals, these officials said they thought they could assist agencies in reducing their injury and illness rates.
OSHA officials said the agency will continue to use workers' compensation data to calculate agencies' injury and illness rates through 2006 but would consider thereafter using the injury and illness data collected under the new recordkeeping requirements. Using this new information would allow OSHA to identify trends for each federal agency worksite and set more specific goals for improving agencies' safety programs.

OSHA faces a number of challenges in monitoring federal agencies' safety programs and, over time, has adapted its methods to try to make the most of its resources. However, OSHA's oversight could be further strengthened if it took a more strategic approach. Because targeted inspections generally uncover more workplace hazards than its other inspections do, OSHA is not using its limited enforcement staff and resources in the best way possible when it fails to direct its inspection efforts to the most hazardous federal worksites. Now that federal agencies are collecting injury data that would make targeting more feasible, OSHA is missing a critical opportunity to identify and correct hazards. OSHA could require, as part of the federal agencies' annual reports, that each agency submit certain portions or summaries of the data that agencies are required to collect under the new recordkeeping requirements. This information could be used to target federal worksites for inspection in the same way OSHA targets private-sector employers in industries with high injury and illness rates. Alternatively, as it does with private employers, OSHA could develop its targeting program by surveying selected agencies and worksites for the newly required data they are collecting.

In addition, OSHA is not tracking violations disputed by federal agencies or how they are resolved. As a result, hazardous worksite conditions may remain uncorrected for years, and OSHA may be limiting its ability to address challenges agencies face in complying with OSHA's standards and to provide additional assistance to the agencies.

While inspections are specific to individual federal agencies' worksites, evaluations allow OSHA to make thorough, agencywide assessments of safety programs. These evaluations require considerable time and staff, but, in the past, OSHA has been able to maximize its resources by strategically combining evaluations of entire agencies with inspections of federal worksites. By not conducting evaluations of the larger or more hazardous federal agencies, OSHA is missing a critical opportunity to give agencies valuable feedback and assistance for improving their safety programs in a more systematic way. OSHA could also more effectively assess federal agencies' safety programs if it ensured that the agencies complied with the requirements for filing annual reports and used the reports, as well as OSHA's evaluations and inspection data, to assess their safety programs and develop recommendations for improvement. Because OSHA does not provide an assessment of agencies' safety programs in its annual report to the President or recommendations for improvement as required, its ability to ensure the effectiveness of these programs is limited.
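To make the rate-based targeting idea above concrete, the following minimal sketch (in Python) ranks worksites by the standard OSHA incidence-rate formula, in which recordable cases are multiplied by 200,000 and divided by total hours worked to yield a rate per 100 full-time workers. The worksite names and figures are invented for illustration; this is not a description of OSHA's actual targeting system.

    # Hypothetical illustration of rate-based inspection targeting.
    # The incidence-rate formula (cases * 200,000 / hours worked) is the
    # standard OSHA rate per 100 full-time workers; the worksite data below
    # are invented for this example.

    def incidence_rate(cases: int, hours_worked: float) -> float:
        """Injury and illness cases per 100 full-time workers (200,000 hours)."""
        return cases * 200_000 / hours_worked

    worksites = [
        # (worksite, recordable cases, total hours worked)
        ("Worksite A", 12, 480_000),
        ("Worksite B", 3, 410_000),
        ("Worksite C", 9, 220_000),
    ]

    # Rank worksites by rate, highest first, to flag candidates for inspection.
    ranked = sorted(worksites, key=lambda w: incidence_rate(w[1], w[2]), reverse=True)
    for name, cases, hours in ranked:
        print(f"{name}: {incidence_rate(cases, hours):.1f} cases per 100 full-time workers")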
The Secretary of Labor should direct OSHA to

- develop a targeted inspection program for federal worksites based on the new worker injury and illness data federal agencies are required to collect, either by requiring that relevant portions or summaries of those data be included in agencies' annual reports to OSHA or by obtaining the data from agencies or worksites through periodic, selected surveys;
- track violations disputed by federal agencies to their resolution and ensure that unresolved disputes are reported to the President;
- conduct evaluations of the largest and most hazardous federal agencies;
- use evaluations, inspection data, and annual reports submitted by federal agencies to assess the effectiveness of their safety programs; and
- include, in OSHA's annual report to the President, an assessment of each agency's worker safety program and recommendations for improvement.

We provided a draft of this report to the Secretaries of the Departments of Labor, Agriculture, Defense, Homeland Security, Interior, Justice, Treasury, and Veterans Affairs and the Commissioner of the Social Security Administration. Officials from Agriculture, Treasury, and the Social Security Administration informed us that their agencies did not have any comments on our draft report. We received written comments from the Departments of Labor, Homeland Security, and Interior. These comments are reproduced in appendixes II, III, and IV. The Departments of Defense, Justice, and Veterans Affairs provided technical clarifications, which we incorporated as appropriate.

Labor generally agreed with all of our recommendations. In responding to our first recommendation, OSHA explained that, for the immediate future, it would use OWCP data to identify federal worksites for inspection. It did not support using the annual reports to collect the injury and illness data recorded by the agencies for targeting federal worksites for inspection but considered the use of surveys to collect these data noteworthy. In regard to our second recommendation, OSHA reported that it will create a database to track the status of OSHA citations disputed by federal agencies. In responding to our final two recommendations, OSHA reported that it would begin evaluations and a more rigorous review of agencies' annual reports once staffing had increased.

The Departments of Homeland Security and Interior noted that Labor could provide more assistance to agencies in addressing the challenges we identified. While we believe agencies should seek assistance from OSHA on ways to overcome these challenges, we also believe that these challenges will require agencies to work internally to build support for worker safety programs. In addition, the Department of Homeland Security suggested that our recommendations to Labor to increase OSHA's enforcement activities may not appreciably lower the incidence of injuries and illnesses and may indeed reduce agencies' requests for OSHA's assistance. We continue to believe that increased enforcement activities would provide OSHA with a balanced strategy for ensuring workplace safety. In addition, inspections will allow OSHA to review federal agencies' injury and illness logs to ensure that underreporting is not occurring—another concern that Homeland Security raised in its comments. Finally, Homeland Security suggested that OSHA should take the lead on developing a governmentwide safety information system.
We agree that it is important to have a governmentwide safety information system and note that Labor has made some effort in that direction.

We will make copies of this report available upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9889 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

We sent a data collection instrument to 57 agencies within the 8 largest departments. The instrument requested information and documentation on six components of sound safety programs we identified from previous GAO reports: (1) management commitment, (2) employee involvement, (3) education and training, (4) identification of hazards, (5) following up on and correcting hazards, and (6) medical management. We chose the eight departments because they represented 80 percent of the federal executive branch workforce—excluding the U.S. Postal Service, which under the OSH Act is considered a private-sector employer. We contacted officials at each of the 8 departments to obtain the names of their operational agencies, and they provided us with the names of 57 agencies.

We reviewed the documentation supplied by the agencies to support their answers to selected questions on the data collection instrument. In reviewing the documentation, we made two assessments: (1) whether the documentation supported the agency's responses and (2) what types of activities the agency conducted for each program component. We examined each document provided by the agencies in support of their responses and assessed each as either "supporting" or "not supporting" the agencies' responses. Each document was reviewed by two people to ensure that our assessments of the sufficiency of the documents were consistent. Of the 57 agencies that completed the data collection instrument, two did not provide any supporting documentation.

The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps during the development of the survey instrument, the data collection, and the data analysis to minimize such nonsampling errors. For example, a survey specialist designed the survey instrument in collaboration with GAO staff with subject matter expertise. We pre-tested the survey at two agencies and, based on the results and comments received during pre-testing, made appropriate revisions. We also independently verified the entry of all survey responses into the analysis database as well as the data analysis procedures.

We conducted follow-up interviews with safety managers and, when possible, employee representatives from the largest agency within each department and from the agencies with the highest lost-time or injury and illness rates. In some cases, the largest agency also had the highest lost-time and injury and illness rates. In total, we conducted follow-up interviews with safety managers in 12 agencies, as well as employee representatives in 8 agencies.
The interview questions were based on how each agency originally responded to the data collection instrument and on its supporting documentation.

We also visited five federal agencies' worksites: the Tobyhanna Army Depot in Tobyhanna, Pennsylvania; Yellowstone National Park in Wyoming; the U.S. Mint in Philadelphia, Pennsylvania; the U.S. Forest Service's Gardiner District Office (Gallatin National Forest) in Gardiner, Montana; and the Veterans Health Administration's Rocky Mountain Network Office (a Veterans Integrated Service Network site) in Glendale, Colorado, and Eastern Colorado Health Care Center in Denver, Colorado. The first three worksites are OSHA-recognized VPP sites. The Forest Service site bordered Yellowstone National Park, and the Veterans Health Administration site was recognized by OSHA as having a good safety program. At each of these locations, we interviewed safety officials and discussed the challenges they faced in developing their safety programs and the solutions they found.

We obtained information from OWCP on claims filed by federal workers for injuries they incurred from fiscal years 1995 through 2004. We used data from two of OWCP's data systems to tabulate the basic descriptive statistics provided in this report. One system provides injury and case status information on all individuals who have filed claims with OWCP, while the other is used to bill agencies for the actual amount of workers' compensation payments made on their behalf. These systems were used to develop our tables, including the number of new cases filed; the types of injuries incurred; and the actual amounts paid, by the age of the case and type of payment. To assess the reliability of these data, we interviewed OWCP and OSHA officials, reviewed published reports based on these data (including reports from Labor's Office of the Inspector General), and performed our own tests for consistency and completeness. We found that certain data elements had high levels of missing information and thus could not be used in this report. For the elements we used, although small data discrepancies were found, we determined that the data were sufficiently reliable for providing the basic descriptive statistics reported.

In reviewing OSHA's role, we analyzed inspection data on federal agencies for fiscal years 1995 through 2004 from OSHA's Integrated Management Information System. We interviewed OSHA officials at the national office and all 10 of its regional administrators and federal agency program officers. For each region, we interviewed the director of the area office that had conducted the largest number of inspections of federal worksites in the last 5 years. We also interviewed a compliance safety and health official in each of these offices identified by the area director and, where possible, the compliance assistance specialist, although not every area office had one. In addition, we interviewed two OSHA officials about the local emphasis program for federal worksites that one region had implemented. Finally, when available, we examined agencies' annual reports to OSHA from 2000 to 2004 and asked to review OSHA's annual reports to the President for the same period. However, as noted in the report, OSHA had not completed its annual reports to the President for fiscal years 2001 through 2004 as required. We reviewed the report to the President that OSHA had completed for fiscal year 2000.

The following are GAO comments on Labor's letter dated March 17, 2006.
1. We reordered the SHARE goals as OSHA requested and identified those goals for which OSHA is responsible.

2. OSHA suggests that our finding—that violations disputed by federal agencies were not being tracked—was confusing because the agency has an inspection database that it uses to track the status of all violations. However, as noted in its comments, when OSHA generated a report to identify unresolved violations at federal agencies, staff could not determine the status of 11 violations. In addition, OSHA acknowledged that the Office of Federal Agency Programs (OFAP) does not have a formal tracking system for cases it receives for resolution. We reviewed the report language and believe that it accurately explains the process in place.

The following table summarizes agencies' responses to the data collection instrument. In addition, the last column summarizes whether the documentation agencies provided to support their responses to selected questions was sufficient.

Revae E. Moran (Assistant Director) and Margaret A. Holmes (Analyst in Charge) managed all aspects of the assignment. Jessica A. Lemke, Sheila R. McCoy, and Kris Trueblood made significant contributions to this report. Other key contributors to this report included Kyle Browning, Richard Burkard, Nina E. Horowitz, Tovah Rom, Beverly Ross, Jeremy D. Sebest, John G. Smale Jr., Rachael C. Valliere, and Eric A. Wenner.
Federal workers' compensation costs exceeded $1.5 billion in 2004, with approximately 148,000 new claims filed that year. Because of concerns about the safety of federal workers, as well as the costs associated with unsafe workplaces, GAO described the characteristics of federal agencies' safety programs and the implementation challenges they face, and assessed how well the Occupational Safety and Health Administration (OSHA) oversees and assists federal agencies' efforts to develop and administer their safety programs.

Based on a survey of 57 agencies, GAO found that most agencies reported having at least one activity for each of the six components generally associated with a sound safety program: (1) management commitment, (2) employee involvement, (3) education and training, (4) identification of hazards, (5) correction of hazards, and (6) medical management (which includes having a return-to-work program for injured employees). However, agencies faced implementation challenges that cut across the components in the areas of data management, accountability, and safety resources. The survey results indicated that many agencies do not have automated systems for tracking elements of their safety programs, such as training. In addition, several of the agencies did not demonstrate that their managers are held accountable for maintaining effective safety programs. Finally, many agency officials stated that, due to limited resources, they often must depend on safety officers with limited professional safety experience.

OSHA's oversight of federal agencies' safety programs is not as effective as it could be because the agency does not use its enforcement and compliance assistance resources in a strategic manner. Although inspections are one of OSHA's primary enforcement tools, it does not conduct many inspections of federal worksites or have a national strategy for targeting worksites with high injury and illness rates for inspection. Furthermore, although OSHA is responsible for tracking violations that agencies dispute and reporting any unresolved disputes to the President, OSHA does not track these disputed violations or their resolution. In addition, although OSHA is required to review agencies' safety programs annually and submit a report on them to the President each year, as of January 2006, the last report submitted was for fiscal year 2000. Finally, while OSHA has a range of compliance assistance programs designed to help agencies comply with its regulations and improve safety, these programs are not being fully utilized.
After 36 years of using chemical and mechanical processes to produce slightly enriched uranium from ore, DOE's Fernald site is faced with a variety of environmental problems. As with other sites in DOE's nuclear weapons complex, an emphasis on production over safety has left a legacy of contaminated radioactive and hazardous wastes at storage sites, in deteriorating buildings, and in seepage to underground water supplies. Also, as at other DOE sites, contract management has been an ongoing problem. In arrangements stemming from the development of the atomic bomb during World War II, DOE maintained lax oversight of its weapons complex contractors for decades. For this reason, in 1990 we designated DOE's contracting as a high-risk area vulnerable to waste, fraud, abuse, and mismanagement and have issued numerous reports and testimonies that provided an impetus for change.

The responsibility for the management and oversight of Fernald's cleanup rests with two units at DOE's headquarters: the Office of Environmental Management manages the technical, financial, and overall safety aspects of the cleanup, while the Office of Environment, Safety, and Health conducts periodic reviews to independently evaluate safety and health programs at the site. At the field level, DOE's Ohio Field Office and Fernald Area Office provide the planning, budgeting, and oversight of cleanup activities. Fernald Area Office staff interact daily with Fluor Daniel Fernald staff, who either directly or through subcontractors actually conduct the cleanup.

As one of the first former weapons sites to be completely shut down—temporarily in 1989 and permanently in 1991—Fernald, in 1992, became one of the sites to pilot test a new contracting concept called the environmental restoration management contractor. DOE wanted to bring in new contractors, such as Fluor Daniel Fernald, that were experienced in environmental restoration to focus solely on the management and oversight of the cleanup; the actual cleanup was expected to be carried out by subcontractors. In addition, Fernald was one of the first DOE cleanup sites to propose accelerating its schedule for completing work at the site from 25 to 10 years.

The management of the site's activities has been complicated by reductions in the contractor's workforce, DOE's downsizing, and budget pressures common to other DOE sites. In 1993, shortly after Fluor Daniel Fernald assumed full responsibility for the site's activities, DOE began a workforce reduction at the site to better match employees' skills with Fernald's cleanup needs. As a result, about 250 company and subcontractor employees were released, and 62 employees retired or resigned. These separations caused unrest and concern among the remaining employees. For its part, DOE has not fully staffed the Fernald Area Office. From February 1992, when DOE established Fernald as a field office, through March 1994, when DOE proposed staffing for the newly created Ohio Field Office, DOE decreased Fernald's staffing authorization from 190 to 82; at the time, DOE officials at Fernald had hired 72 individuals. After transferring positions and staff to the Ohio Field Office, Fernald was left with 39 individuals and an authorized staff level of 68. By April 1996, DOE had decreased Fernald's authorized staff level to 53 and had 47 individuals on board at the site.
DOE’s limited oversight early in the two key cleanup projects we reviewed contributed to cost increases and schedule slippages that mirror problems we have identified across DOE. The two projects cited in the Cincinnati Enquirer are (1) the vitrification pilot plant project to confirm the feasibility of converting 20 million pounds of low-level radioactive waste into a glass-like form for disposal and (2) the uranyl nitrate hexahydrate (uranium ore dissolved in nitric acid) project to process and dispose of about 200,000 gallons of the substance. From a budget perspective, these two projects represent about 5 percent of the site’s funding for fiscal years 1993 through 1996. The vitrification and uranyl projects are of similar size and complexity as some of the projects that DOE will undertake in the future. For the vitrification project, which is still ongoing, the estimated schedule to complete the testing of the waste has slipped 19 months, from March 1996 to October 1997. The original cost estimate in February 1994 was $14.1 million. This estimate did not include the costs for operating, maintaining, decontaminating, and decommissioning the plant. By December 1994, when DOE included operating costs in the estimate, DOE increased the projects to about $20.6 million, assuming that a key part of the facility—the melter used to superheat waste material—could operate at 100-percent efficiency. In July 1996, the estimate increased to $56 million, reflecting cost overruns in the initial estimates, and a more conservative estimate of 33-percent operating efficiency was made for the melter, as well as operating, maintaining, decontaminating, and decommissioning costs. As of September 1996, the estimate was $66 million. For the uranyl project, the original estimates made in fiscal year 1990 increased from $750,000 to more than $16.8 million and from 7 months to about 5 years for the project’s completion. DOE officials believe that (1) the Department’s deliberate policy of relying on the technical and managerial expertise of its new environmental restoration and management contractor to accomplish cleanup objectives and (2) the technical complexity of the vitrification project led to many of the Department’s subsequent problems with the projects. Although we agree that these factors contributed to the projects’ problems, other actions and decisions by DOE and the contractor helped cause the projects’ cost increases and delays. In fact, the projects suffered from several management and oversight weaknesses. For example, DOE had limited involvement during the early design and procurement stages of the vitrification plant and could have avoided major problems if it had exercised more oversight of the contractor’s early decisions. In addition, DOE and the contractor decided early on to accelerate the pace of this project without having fully tested the feasibility of the technology and underestimated the technical complexity of this first-of-a-kind project. DOE also allowed concurrent design and construction at the vitrification plant, which resulted in increased costs and schedule delays. Because the contractor built interfacing systems for a piece of equipment still in the design phase, about 225 design changes had to be made when the final components of the equipment differed from their preliminary designs. For the uranyl project, many of the required project management documents were not prepared until late or not prepared at all, contributing to the cost growth and schedule delays. 
For example, because a technical information plan was not prepared until late in the project, significant work was not done according to DOE's requirements.

As a result of a December 1995 DOE study of the problems at the vitrification plant and preliminary evaluations of alternatives to the current vitrification strategy, DOE has decided to postpone the additional construction and testing of radioactive material at the plant and to convene a panel of experts to reexamine the Department's strategy for cleaning up the area. DOE expects that by June 1997, the Department and its stakeholders will reach a consensus on the appropriate cleanup strategy for the area. Furthermore, for its most important projects, DOE has increased the frequency with which it meets with the contractor to discuss the projects' status.

Cost overruns and schedule slippages similar to those of these two projects exist Departmentwide. They occurred in most of the 80 major systems acquisitions conducted across DOE from 1980 through 1996, one of which is the Fernald Environmental Management Program. Over the years, we and DOE's Inspector General have reported that cost and schedule overruns on DOE's major acquisitions have occurred for a number of reasons, including technical problems, poor initial cost estimates, and ineffective oversight of contractors' operations. Furthermore, we reported that underlying the problems were, among other things, a lack of sufficient DOE personnel with the appropriate skills to effectively oversee contractors' operations and a flawed system of incentives for both DOE's employees and its contractors.

As noted in a May 1996 report by DOE, the Fernald Area Office has made progress in its oversight of safety and health. However, the Area Office is still not complying with some oversight-related requirements and is in the early stages of planning changes to its program that may better address these requirements. Because the plans have not been fully implemented, it is too early to assess whether they will fully comply with DOE's standards and guidance.

The ongoing decontamination and decommissioning activities at Fernald involve radioactive hazards, such as contaminated facilities and nearly 16 million pounds of stored uranium, as well as chemical hazards, such as acids and process waste. To minimize the risks of potential hazards to workers and the public, DOE requires the contractor to comply with numerous safety and health standards, including, among others, those for radiation protection of workers and the public, nuclear criticality safety, and occupational safety and health. The Fernald Area Office is responsible for overseeing the contractor's compliance with the safety and health requirements. The Area Office's oversight activities include formal assessments of the contractor's processes, surveillance of items or activities, and walk-throughs to observe conditions in the site's facilities. The Area Office's facility representatives are responsible for monitoring the performance of the site's facilities and serve as DOE's primary points of contact with the contractor.

Although many of the safety and health allegations in the Cincinnati Enquirer overstated the situation at Fernald (see app. II), the site did have serious problems.
From 1993 to 1995, the Defense Nuclear Facilities Safety Board and DOE's headquarters offices raised serious concerns regarding the Fernald Area Office's ability to ensure the contractor's compliance with DOE's safety and health requirements. For example, the Board found in 1992 and 1993 that the Area Office had inadequate plans to supervise the contractor's activities, did not have the technical staff to ensure that safety requirements were adhered to, and did not closely monitor the daily activities of the contractor. The Board made several recommendations to correct these problems. DOE's Office of Environmental Management found in 1994 that the program for assessing operations at the site was unsatisfactory for a number of reasons: for example, the Area Office was not conducting required assessments, did not systematically follow up on prior assessments, and did not transmit the results of assessments to the contractor.

Two 1995 reports identified safety and health problems. The first, by DOE, Fluor Daniel Fernald, and consultants, stated that an emphasis on meeting projects' target dates at Fernald contributed to a breakdown in contamination control and an increase in personnel contaminations in July and August 1995. The other, by the Office of Environment, Safety, and Health, stated that the Area Office's oversight program lacked "the structure and resources necessary to validate the adequacy of the contractor's operational safety and health program." Specifically, the Area Office had not developed procedures for implementing its safety and health responsibilities, line managers did not conduct routine walk-throughs of Fernald facilities, and the Area Office did not have a formalized system for tracking and showing trends in the status of safety problems it had identified. The low level of oversight activity in 1993 and 1994, according to the Associate Director for Safety and Assessment in the Fernald Area Office, was partly due to confusion over the level of oversight that DOE should exercise over the new environmental restoration management contractor and to the change in primary responsibility for oversight from the Oak Ridge Field Office to the Fernald Area Office.

As a result of these reviews, the Fernald Area Office has made a number of improvements over the years in its oversight of the contractor's safety and health activities. For example, the Area Office developed a technical management plan for Fernald that outlined a detailed program for ensuring the contractor's compliance with DOE's safety and health requirements. The Office also established a group of facility representatives to monitor daily activities at the site and initiated a qualification program for these staff. In addition, the Office increased the number of safety and health assessments from 1 in fiscal year 1993 to 15 in fiscal year 1996 and the number of surveillances from zero to 14.

The site's record of persons contaminated by radiation is one indicator of improvement in DOE's oversight program. Although Fernald had 69 contamination occurrences from January 1, 1993, through February 12, 1996, several later assessments by DOE found that the radiological control program had improved. One DOE review compared Fernald's personnel contamination events per 100 staff years with similar events at other comparable DOE remediation sites.
The review concluded that while the type and number of occurrences indicated weaknesses in Fernald's program, the rate of occurrence was not excessive when compared with that of other remediation sites. DOE's and the contractor's response to a recently disclosed safety and health problem at the site is yet another indicator of improvement in this area. After a February 1996 surveillance by the contractor identified, among other things, that some inspection records of hazardous and radioactive wastes were missing, DOE and the contractor agreed in April 1996 to ensure that compliance personnel would perform weekly checks of the hazardous waste areas and examine records to ensure that inspections were performed and documented.

Some recommended improvements in safety and health oversight have just been completed, but other aspects of the Fernald Area Office's oversight still do not meet DOE's safety and health standards and guidance. For example, in spite of a June 1993 Defense Board recommendation to immediately establish a group of technically qualified facility representatives, as of May 1996, only one of the six appointed representatives had completed the basic qualification requirements, and not until November 1996 did four more representatives complete them. In addition, despite a 1995 DOE recommendation to track and trend identified problems and corrections, the Fernald Area Office is only now implementing a computerized system to do so. Furthermore, according to DOE, the Area Office did not fully implement its plan for assessments that it must perform in some areas, such as waste management and occupational medical programs, until fiscal year 1997. The Area Office also has not developed an assessment schedule for its facility representatives or a surveillance schedule for its other oversight staff. In addition, the Area Office has not developed guidelines for facility representatives' walk-throughs of facilities. Such schedules and guidelines are intended to ensure comprehensive and systematic reviews of all aspects of facility operations over an established period of time. Furthermore, although a lack of formal reporting is contrary to DOE's standards and procedures, facility representatives generally do not formally document their findings; they usually relay them verbally. The purpose of this reporting is to transmit the findings and follow-up items from surveillances and walk-throughs to the contractor's and Area Office's managers. DOE's Fernald Area Office is either making changes to its oversight program to correct these weaknesses or plans to do so. Because the efforts are not complete, it is too early to assess how well they will correct the weaknesses.

Fluor Daniel Fernald's compliance with the procedures we reviewed in the performance and financial systems was mixed, and some weaknesses make it difficult for both DOE's and the contractor's managers to exercise effective control and oversight of the contractor's costs and performance. These weaknesses include incomplete documentation for changing the contractor's cost and schedule baseline, against which the contractor's performance is measured, and inadequate control over the opening and closing of financial accounts to ensure that only appropriate charges are made to them.
DOE has directed the contractor to make numerous changes to address the weaknesses identified in recent reviews of the contractor's financial and performance management, but it is too early to assess their impact.

In some cases, the procedures for maintaining and updating the performance measurement baseline were not followed, while in other cases the current procedures are limited or unclear. The baseline governs the expenditure of the site's budget, which was about $266 million in fiscal year 1997, and defines what work has been authorized. It is the standard against which DOE assesses the contractor's cost and schedule performance. The baseline is approved by the Fernald Area Office and can be adjusted to reflect changes that are not under the contractor's control, such as a change in the authorized level of funding or changes in costs due to amended labor rates. DOE's and the contractor's procedures define when and how the baseline is adjusted. When the contractor wants to change the baseline, a control account manager prepares a proposal to change it; the required level of approval depends on the magnitude of the change.

On the basis of our random sample of 176 baseline change proposals, we found that the contractor complied with most but not all of the site's written procedures for controlling the baseline. For example, the contractor had maintained the required records describing and justifying a proposed change for all but one of the randomly selected change proposals we reviewed. The documentation was usually adequate to support the need for changing the baseline, except that in some cases the required information on the impact of changes on site activities was not well documented. In addition, we estimated that for about 12 percent of the proposals, the documentation did not include the source of funding for the change, as the procedures require.

In some cases, DOE's and the contractor's written procedures for maintaining and updating the baseline are unclear and do not facilitate efficient review and approval by management of either organization. For example, neither the contractor's nor the Area Office's written procedures require that, if a proposal is disapproved, the reasons for disapproval be formally documented on the proposal form. The procedures also do not require that the contractor clearly mark documents supporting change proposals to indicate differences between the current approved baseline and the proposed change. The lack of such documentation inhibits the subsequent review or oversight of proposed changes. As for the approval of change proposals, DOE's and the contractor's procedures for designating which level within each organization should approve a proposal do not clearly define the criteria for determining the approving officials. Although one of the criteria is the amount of funds involved in the change, the procedures do not clearly define whether that criterion should be the net change in funds over 1 year or over several years. Because Area Office and contractor officials can interpret the criteria differently, change proposals that involve moving similar amounts of funds among activities may be approved at different levels within the organizations. The incompleteness of the formal documentation highlights the degree to which the Fernald Area Office's management relies on informal and verbal communications to support decision-making.
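The 12-percent figure above is a sample-based estimate rather than a full count. As a minimal sketch of how such an attribute-sampling estimate behaves, assuming a simple random sample and a normal approximation (the review's actual sampling design and sampling errors are not reported here), a point estimate and an approximate 95-percent confidence interval could be computed as follows. The count of 21 noncompliant proposals is an assumed figure chosen only because 21 of 176 corresponds to about 12 percent.

    import math

    def proportion_estimate(noncompliant: int, sample_size: int, z: float = 1.96):
        """Point estimate and approximate 95% confidence interval for a proportion."""
        p = noncompliant / sample_size
        half_width = z * math.sqrt(p * (1 - p) / sample_size)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    # Assumed count: 21 of the 176 sampled change proposals lacking a documented
    # funding source corresponds to the reported estimate of about 12 percent.
    estimate, low, high = proportion_estimate(21, 176)
    print(f"estimate {estimate:.1%}, approximate 95% CI [{low:.1%}, {high:.1%}]")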
The current procedures and quality of information do not facilitate DOE's oversight process, and they do not provide a complete official record for subsequent internal or external review.

In controlling financial accounts, some charges are posted to accounts after they have been closed, and the required approvals for opening and closing accounts are not always obtained. These practices make it difficult for DOE's and the contractor's managers to exercise effective control and oversight of the contractor's costs and performance. The contractor processes several hundred thousand financial transactions each year to accumulate the costs in its accounts. Accounts are opened to allow costs for specific work to be charged against the appropriate account and closed when all related charges have been made. Procedures require that the contractor's control account managers, who are responsible for managing accounts and verifying the accuracy of charges, perform the opening and closing functions so that a person knowledgeable about the scope of work and the related costs monitors and controls the charges made against the account (see the illustrative sketch below).

Nearly all charges in the contractor's financial system occurred when the accounts were properly opened in compliance with standard procedures. However, a small percentage of the charges, averaging 1 to 2 percent of the several hundred thousand charges that Fluor Daniel Fernald processes annually, were routinely made to accounts after the control account managers had closed them, making effective control of the accounts difficult. According to contractor officials, the system will accept charges to closed accounts to allow for certain adjustments, such as the allocation of sales tax to an account, which is posted monthly rather than after each invoice. In addition to allowing charges to be made to closed accounts—without reopening them—the contractor's financial system allowed some accounts to be reopened for charges without the required control account manager's approval. On the basis of our random sample of 87 control accounts and their associated 239 charge numbers, we estimate that 46 percent of the contractor's accounts were missing at least one of the documents required to open or close the account. Furthermore, some control account managers we interviewed said they were unaware that their accounts had been reopened until they saw new charges appear in them. Making charges to closed accounts and reopening accounts without the control account managers' awareness and approval make it difficult for the managers to effectively control what is charged to their accounts and thus to ensure the accuracy of the cost data that DOE uses to make payments to the contractor.

DOE recognizes that its management and contracting problems are Departmentwide and is implementing major reform efforts to improve these areas. For example, in contracting, a DOE team established in 1993 to evaluate the Department's contracting practices recommended 48 actions to fundamentally change the Department's way of doing business. In stark contrast to its historical contracting patterns, DOE has published a policy adopting a standard of full and open competition, developed guidance for contract performance criteria and measures, created incentive mechanisms for contractors, and developed training in performance-based contracting for DOE personnel.
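To make the account-control rule described above concrete, the following minimal, hypothetical sketch models the intended control: charges post only to open accounts, and reopening a closed account requires the control account manager's explicit approval. This illustrates the control the report says was not enforced; it is not a description of Fluor Daniel Fernald's actual financial system, whose internal design the report does not detail.

    class ControlAccount:
        """A cost account whose opening, closing, and reopening are controlled
        by its responsible control account manager."""

        def __init__(self, number: str, manager: str):
            self.number = number
            self.manager = manager
            self.is_open = True
            self.charges = []

        def post_charge(self, amount: float) -> None:
            # The intended control: no postings to a closed account.
            if not self.is_open:
                raise ValueError(f"account {self.number} is closed; reopen first")
            self.charges.append(amount)

        def close(self) -> None:
            self.is_open = False

        def reopen(self, approved_by: str) -> None:
            # Reopening requires the responsible manager's explicit approval.
            if approved_by != self.manager:
                raise PermissionError("only the control account manager may reopen")
            self.is_open = True

    # Example: a closed account rejects postings until properly reopened.
    acct = ControlAccount("4-100-200", manager="J. Smith")
    acct.post_charge(1_250.00)
    acct.close()
    try:
        acct.post_charge(75.00)
    except ValueError as err:
        print(err)
    acct.reopen(approved_by="J. Smith")
    acct.post_charge(75.00)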
DOE also has several initiatives under way that could help the Department better manage its affairs. For example, DOE has developed strategic goals to guide the Department and its contractors; defined new requirements for managing major assets throughout their life cycle; and is evaluating revisions to its management, financial, and business information systems to provide managers with more consistent and accurate information on their projects and budgets.

DOE's Fernald site is participating in many of these contracting and management initiatives. However, because the Fernald contract was executed prior to most of DOE's contract reform initiatives, it will take time for these new initiatives to be formalized into DOE's relationship with the contractor at Fernald. The test of DOE's success will occur as DOE implements and monitors the broad changes it is making, awards new contracts for managing its sites, and fine-tunes existing contracts to improve contractors' performance. At Fernald, DOE must decide by November 30, 1997, whether to extend Fluor Daniel Fernald's contract for an additional 3 years or award it competitively.

At Fernald, weaknesses existed in DOE's management and oversight of the cleanup projects we reviewed, in DOE's development of a safety and health oversight program, and in the contractor's implementation of procedures for key financial and performance systems. Although DOE has already taken some actions to respond to the findings of recent reviews, some problems remain unaddressed or need further action. Left uncorrected, these weaknesses could increase the cost and time required to clean up the Fernald site, as well as the associated safety and health risks.

The expiration of DOE's current contract with Fluor Daniel Fernald provides an opportune time for DOE to correct the specific oversight weaknesses we identified. The contract's expiration also will provide a test of the implementation of DOE's contract reform initiatives: DOE can demonstrate the effectiveness of its incentive mechanisms and contract performance criteria and measures, its commitment to a policy of full and open competition, and the effects of its training of DOE personnel in performance-based contracting.

In view of the approaching expiration of the contract with Fluor Daniel Fernald, we recommend that the Secretary of Energy ensure that (1) the contract reform initiatives that DOE has undertaken are fully integrated into the Fernald contract and (2) the Area Office strengthen its oversight at Fernald in order to correct the project management, safety and health program, and performance and financial system weaknesses that we have identified.

We provided a draft of this report to DOE for its review and comment, and DOE provided its comments in a letter and two enclosures. DOE's letter and enclosure I contain the Department's overall comments, its response to our recommendations, and DOE's major concerns regarding our presentation of the allegations, the management and oversight of the two projects we reviewed, safety and health oversight, and compliance with performance and financial system procedures (see app. VI). This section of the report contains our response to those comments. DOE's enclosure II, which is not included in this report, contains more detailed comments that we incorporated into the report as appropriate. Overall, DOE plans to take actions related to our report recommendations.
DOE says it will convene a panel to consider integrating additional contract reform initiatives into the next Fernald contract and will continue to focus attention on and strengthen oversight of the contractor's activities. DOE had four major concerns with our draft report.

First, DOE was concerned that our report did not bring closure to what DOE characterized as the two key issues raised by the allegations—the Cincinnati Enquirer's broad conclusions that the site has jeopardized the safety of site workers and neighbors and that the government is being systematically cheated out of millions of dollars. The scope and objectives of our work, however, were not so broad that we could either validate or dismiss the conclusions drawn from the allegations. Rather, our work points out specific weaknesses in both the safety and health and the financial areas that diminish the assurance that safety is adequately addressed and costs are adequately controlled at Fernald. For example, weak processes exist for ensuring that identified safety problems are adequately corrected, and failure to correct such deficiencies presents safety risks to workers and the public. In controlling financial accounts, some charges are posted to accounts after they have been closed, and the required approvals for opening and closing accounts are not always obtained. These practices make it difficult for DOE's and the contractor's managers to exercise effective control and oversight of the contractor's costs and performance.

Second, with regard to the oversight and management of two key cleanup projects at Fernald—the vitrification pilot plant and the uranyl nitrate hexahydrate project—DOE generally did not dispute the lack of oversight or the cost and schedule increases, but it did disagree with the reasons for them. DOE cited the transition to the new environmental restoration management contract at Fernald and the technical complexities of the project. We agree that DOE's approach to implementing the new contracting concept contributed to DOE's initial limited oversight of the project and have added language to the report to this effect. We also agree that the vitrification project was technically complex. However, we continue to believe, as stated in our report, that other factors, such as DOE's and the contractor's decisions to accelerate the pace of the project and the contractor's decision to allow concurrent design and construction of key parts of the plant, also contributed to the delays and cost increases.

Third, DOE disagreed with our characterization of the safety and health oversight program as weak from 1992 to 1995 and with our representation of the present program as continuing to have weaknesses. DOE maintains that it has shown continuous improvement in its safety and health oversight program since 1992 and that a 1996 DOE review reported that the program was effective. We agree that DOE has made improvements and recognize them in our report. However, prior to 1995, DOE demonstrated little formal oversight, with most of the improvements occurring more recently. In addition, we acknowledge in our report that the 1996 review found the program to be effective. However, the DOE report also identified numerous weaknesses, which we also acknowledge, such as the facility representatives' many unstructured and informally documented activities, which are subsequently not useful for tracking and trending safety problems.
Fourth, DOE stated that appendix III of our report showed that there was no evidence to support the allegation that charges were made to cost accounts with no budget and that the tests we conducted showed that the accounting system was functioning properly. In addition, DOE cited two reviews that it believes indicate that the performance system is performing adequately and that strong controls exist over selected financial activities. We did not perform the type of testing that would allow us to say that no unauthorized work was performed or that all charges in the accounting system were valid. For example, we reviewed only selected control accounts, which did not constitute a statistically valid sample. In addition, while our testing showed that the contractor’s system will not accept charges against fictitious accounts, our work also revealed that charges are routinely made against closed accounts and that accounts are routinely reopened without the knowledge of the responsible account manager. In this connection, partly because the Chief Financial Officer’s 1996 review covered the work authorization process, control of funds, and invoice review, our work did not cover those aspects at Fernald. However, while the Chief Financial Officer’s report characterized some areas as strong, it also stated that the team identified areas where controls should be strengthened, and it made several recommendations for changes at the site, such as strengthening certain controls over expenditures of funds to ensure that overexpenditures that have occurred in the past do not recur.

An additional concern raised by DOE involved the cleanup schedule, which DOE thought should be discussed in the report summary. However, because we did not consider this a major objective, as we explain earlier in this report, we present this information in appendix IV.

We conducted our review from March 1, 1996, through January 31, 1997, in accordance with generally accepted government auditing standards. Appendix V contains our detailed objectives, scope, and methodology.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of the report to the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. Please call me at (202) 512-3841 if you have any questions about this report.

The following discusses the purpose and status of the Department of Energy’s (DOE) vitrification pilot plant (VITPP) and uranyl nitrate hexahydrate (UNH) projects and information relevant to the allegations published by the Cincinnati Enquirer about these projects.

DOE has divided the Fernald site into five segmented, or operable, units. Unit 1 is the waste pit area; unit 2 consists of other waste areas; unit 3 is the former production area; unit 4 consists of four silos and their contents; and unit 5 handles the remediation of the soils, groundwater, surface water and sediment, and flora and fauna. The VITPP project is located in operable unit 4; the UNH project was part of the cleanup of operable unit 3.

DOE’s VITPP project at Fernald is a major step toward remediating 20 million pounds of low-level radioactive waste stored in three above-ground concrete silos since the 1950s.
Although the silos may pose relatively little risk of radioactive leaks now, DOE has recognized that the deteriorating silos cannot stand indefinitely and has taken several steps to mitigate potential risks from them. DOE’s latest effort calls for DOE to treat the wastes now stored in the silos and ship the residuals off-site for long-term storage. VITPP is an interim facility designed to confirm the feasibility of vitrifying the silos’ contents outside of a laboratory setting. If tests at the plant are successful, DOE could use the test results from VITPP to design equipment and procedures for operating a full-scale vitrification plant at the site.

DOE has established internal project milestones for the construction and testing of VITPP. It also has regulatory milestones established under a 1991 amended consent agreement between DOE and the Environmental Protection Agency (EPA) for the overall operable unit, such as implementing work plans for treating and burying the vitrified waste at an off-site location, that depend on the successful operation of the pilot plant.

As of September 9, 1996, DOE had spent about $41.4 million on the project. DOE has completed enough construction at the plant to begin vitrifying material formulated to simulate the radioactive wastes contained in the silos. DOE plans to complete these initial tests of simulated silo material by January 1997. DOE originally intended to follow up on the initial tests of simulated material by (1) completing additional construction at the plant necessary to safely process radioactive wastes stored in the silos and (2) conducting several months of equipment tests using the radioactive material. However, as discussed later, the project has experienced significant delays, equipment problems, and cost overruns. In light of these problems, DOE has decided to postpone the additional construction and testing of radioactive material at the plant and to convene a panel of experts to reexamine its strategy for cleaning up the area. DOE expects that by June 1997, the Department and its stakeholders will reach a consensus on the appropriate cleanup strategy for the area.

Allegation: DOE Has Missed Construction and Operating Milestones for the Project. Testing Will Not Be Completed Until 17 Months Later Than Originally Planned.

The Cincinnati Enquirer’s November 27, 1995, article reasonably reported the project’s status as of October 1995. As indicated in table I.1, at that time, DOE (1) had missed its June and July 1995 internal milestones for completing construction and starting tests for the initial nonradioactive portion of the project, (2) was projecting 7- to 8-month delays in completing these steps, and (3) was estimating a 19-month overall delay in completing the nonradioactive and radioactive phases of testing at the project. The 17-month delay reported by the Cincinnati Enquirer differs from the 19 months estimated by DOE in October 1995 because the newspaper used an August 1995 DOE work plan for the cleanup of the silos to estimate completion of the project.

(Table I.1, not reproduced here, compares the original, revised, and actual or latest estimated dates, as of November 1996, for completing construction, starting tests of simulated material, and completing testing of radioactive material at VITPP.)

Table I.1 also illustrates that DOE is continuing to experience delays with VITPP. Specifically, DOE was not able to meet the milestones established in November 1995 for completing the first phase of construction or for starting initial testing at the facility.
For example, the Department completed construction 4 months later than planned and started testing 3 months later than anticipated. DOE officials agree that their latest estimate for completing testing at VITPP needs to be revised to reflect these most recent delays. However, the officials do not intend to revise the estimate until DOE, its stakeholders, and regulators review the results of initial testing and agree on the future of the project.

Allegation: The Project’s Estimated Total Cost Has Jumped From $14 Million to $56 Million.

DOE’s estimate of VITPP’s total cost has increased significantly since the Department first estimated these costs. During February 1994, DOE approved an original cost estimate of $14.1 million and established this as an initial baseline against which to measure the project’s future costs. Since then, DOE or Fluor Daniel Fernald has approved more than 20 changes to this baseline cost estimate to account for technical problems with the project, weather-related delays, and other factors. In its July 1996 baseline for a 10-year cleanup of the site, DOE increased the estimated budget to build, operate, decontaminate, and decommission VITPP to $56 million. The $56 million estimate is more accurate than the original $14.1 million because the original estimate did not include operating or decontamination and decommissioning costs for the plant. However, the $56 million estimate understates the project’s total costs because it does not include (1) VITPP’s share of such sitewide services as providing drinking water, heat, and other utilities and of general administrative costs or (2) estimates of the total cost needed to complete the project. As of September 9, 1996, DOE’s estimate of costs to complete the project, excluding general services and administrative costs, was $66 million.

Allegation: DOE’s December 1995 Study of VITPP’s Problems Identified Over 100 Safety, Maintenance, and Reliability and Availability Concerns. DOE and Fluor Daniel Fernald Did Not Have a Firm Date for Correcting These Problems.

DOE’s December 1995 study of VITPP problems and a companion analysis of the plant’s potential reliability, availability, and maintenance (the RAM study) reported 70 items of potential concern. The items generally related to

safety issues, such as the need to conduct a more extensive analysis of methods to shield workers from the radiation associated with later testing at the plant, posting signs to alert workers of possible dangers, and precautions needed for safely working near the high-temperature melter;

maintenance concerns, such as the limited space throughout the plant to access equipment and perform anticipated maintenance and the need to develop worker-friendly procedures for cleaning pipelines that may plug or equipment that might have to be replaced; and

suggestions to improve the management process for turning the completed VITPP project over to operating personnel and questions about the reliability of some of the plant’s major systems, such as the system to remove waste gases from the plant.

The Cincinnati Enquirer’s allegation that, when the article was published, DOE and Fluor Daniel Fernald did not have a firm date for addressing the concerns is essentially correct. The contractor’s January 1996 response to the concerns raised by the RAM study indicated that about 40 percent of the items had already been addressed or were being corrected and about 30 percent would be fixed.
For the remaining 30 percent, the contractor disagreed that problems existed. Neither DOE nor the contractor identified specific dates for completing work on any of the concerns or for resolving differences of opinion. Since that time, DOE still has not established completion or resolution dates. DOE officials reviewed Fluor Daniel Fernald’s January 1996 response to the RAM study and twice asked the contractor to respond to additional questions. DOE’s requests generally asked for additional technical detail to explain Fluor Daniel Fernald’s initial information or to clarify partial responses. DOE officials have also worked closely with Fluor Daniel Fernald managers to correct problems that delayed the plant’s opening. Some of the problems that Fluor Daniel Fernald corrected, such as covering areas of the plant exposed to freezing rain or snow to improve the safety of workers, were mentioned in the RAM study. DOE officials believe that all issues raised by the study have been addressed. However, DOE did not establish a mechanism for formally tracking the status of all safety and maintenance issues raised by the studies.

Allegation: Fluor Daniel Fernald Has Not Fixed Life-Threatening Structural Defects That Existed at the Plant.

The Cincinnati Enquirer’s March 3, 1996, article alleged that Fluor Daniel Fernald had not fixed (1) concrete walls that were pockmarked or incorrectly poured, (2) welds on a major tank that were improperly done, (3) steel reinforcement rods that extended outside concrete walls, and (4) other problems. The newspaper supported some of these allegations with photographs of alleged defects; other alleged defects, which involved questions concerning the quality of construction, did not lend themselves to photographs or direct observation. In March 1996, DOE reviewed the allegations and Fluor Daniel Fernald’s efforts to identify and correct construction problems at the plant. Although DOE officials found no support for the allegations, they found that in some cases, representatives of the design contractor had not consistently documented their approval of design changes needed to correct construction problems. DOE officials later satisfied themselves that the alleged structural defects had been corrected or did not pose a hazard and that the documentation problems did not jeopardize the overall integrity of the contractor’s construction activities.

During two tours of the pilot plant in March and April 1996, we observed the results of Fluor Daniel Fernald’s efforts to correct several of the alleged construction problems at the plant. For example, we observed that Fluor Daniel Fernald had coated many of VITPP’s walls with an epoxy-like material from the floor to a height of about 3 feet. DOE’s facility representative conducting one of the tours indicated that the coating would minimize seepage of any radioactive material that might leak from equipment during vitrification. A December 13, 1994, engineering evaluation of the plant’s poured-concrete walls commissioned by Fluor Daniel Fernald concluded that although some walls were pockmarked, they met design specifications. In addition, we observed that extra concrete had been cut away from an improperly poured wall to make a straight vertical surface. The remaining concrete did not appear to be damaged. Also, we observed that the tank discussed by the Cincinnati Enquirer, which had been damaged during delivery and installation, was in place and ready for testing.
According to DOE’s December 1995 study of VITPP, after an independent inspection team questioned the integrity of the welds used to fix the tank, Fluor Daniel Fernald satisfactorily repaired the tank. During our tours, we did not observe steel reinforcement rods jutting outside of concrete walls similar to those in the photographs published by the Cincinnati Enquirer. Although the steel rods may have protruded from the walls during the plant’s construction, they were no longer visible.

Overall, the alleged construction problems at VITPP do not appear to have seriously compromised safety. Between June 1996, when DOE started operating the plant, and September 1996, DOE had not reported any occurrence of health or safety problems from the construction or operation of VITPP. However, on December 26, 1996, a small fire developed at the plant after heated glass from the melter leaked onto the epoxy-covered floor. No one was injured in the fire, and DOE is investigating the causes of the leak and fire.

Allegation: DOE’s December 1995 Study Reported That (1) the Fast-Tracking of the Building of a Full-Scale Plant Was a Major Concern to the Study’s Investigators and (2) DOE and Fluor Daniel Fernald Should Evaluate the Costs and Benefits of Alternatives to Vitrification.

DOE’s December 1995 evaluation of VITPP discussed both concerns. In regard to fast-tracking the remaining work, the study team observed that the strategy was valid but cautioned that managing a fast-track project is difficult. As for evaluating alternatives, the study team noted that numerous approaches to cleaning up the operable unit existed and recommended that DOE and Fluor Daniel Fernald review the costs and benefits of key alternatives.

DOE has responded positively to these concerns. Within a few weeks of completing the December 1995 study, a DOE-sponsored value engineering team met to study alternatives to building a full-scale vitrification plant at the site. The resulting study, issued in January 1996, proposed (1) upgrading VITPP and building another pilot-plant-size vitrification facility to operate in tandem with the upgraded plant, (2) using other solidification and stabilization technologies on the less radioactive wastes now stored in one of the silos, and (3) using other technologies to clean up the more radioactive wastes stored in the remaining two silos. DOE has notified its regulatory agencies that it is evaluating the second option, which the study estimated could save $68 million, and plans to evaluate the remaining options in time for the spring 1997 evaluation of the plant’s future. DOE site officials have also stopped the design, procurement, and construction of the full-scale plant until after the spring 1997 evaluation.

Allegation: Various Problems Contributed to VITPP’s Schedule Delays and Cost Overruns.

DOE and Fluor Daniel Fernald officials acknowledge that many of the problems discussed by the Cincinnati Enquirer contributed to poor performance at VITPP. These problems included fast-tracking, the project’s underestimated complexity, concurrent design and construction of the project, and the contractor’s overly optimistic assessment of its ability to recover from schedule delays. DOE and Fluor Daniel Fernald fast-tracked VITPP in order to meet regulatory milestones under DOE’s amended consent agreement with the EPA for the overall operable unit, despite the technical risks of the project.
In 1993, when Fluor Daniel Fernald issued its first request for proposals for a vitrification melter, DOE had completed only laboratory-scale tests of the feasibility of vitrifying the silos’ wastes. Nevertheless, DOE decided to overlap phases of the plant’s design, construction, and operation in order to meet these milestones for the overall operable unit.

Fluor Daniel Fernald also initially underestimated the complexity of building a larger-than-laboratory-scale, high-temperature vitrification facility. The contractor’s early cost estimates for the project assumed that the plant’s melter, which is a key component of the facility, could operate at 100-percent efficiency. Subsequent baselines have assumed less optimistic 50-percent and 33-percent efficiencies, implying roughly two to three times the melter operating time assumed in the early estimates. In addition, procurement, design, and delivery of the melter took 9 months longer than expected. Because Fluor Daniel Fernald subcontractors needed information about the melter to complete the design and construction of other parts of the plant, the delays in selecting a vendor for the melter and designing the melter delayed completion of the plant’s design and mechanical and electrical work.

Fluor Daniel Fernald continued the design and construction of the plant and plant systems concurrent with a subcontractor’s design and fabrication of the melter. Fluor Daniel Fernald used preliminary information about the melter to design and build interfacing equipment systems and water and electricity hook-ups in the plant. After the vendor delivered melter components that were different from the preliminary designs, Fluor Daniel Fernald had to rework parts of VITPP to connect utilities and equipment systems with the melter. For example, from May 1995, when Fluor Daniel Fernald began receiving melter components, through May 1996, the contractor issued about 225 design change notices to (1) correct problems caused by the concurrent design of the melter and VITPP, (2) improve the plant’s overall safety, or (3) redesign pumps and other equipment that had been installed at the plant but that did not pass initial tests. According to DOE’s December 1995 study of VITPP’s problems, a large number of design changes is indicative of problems within a project.

The contractor was also overly optimistic in assessing its ability to recover from schedule delays. Fluor Daniel Fernald officials provided monthly information for the contractor’s cost performance reports and DOE’s progress-tracking system that highlighted (1) delays in obtaining design information from equipment vendors, (2) frequent design changes needed because of limited data, and (3) delays in starting mechanical and electrical work at the plant. However, the contractor repeatedly assured DOE that it could overcome these delays and meet the regulatory milestones. It was not until August 1995, after the contractor had missed the project’s original milestone for completing construction, that Fluor Daniel Fernald admitted that problems at VITPP could delay the design and construction of the full-scale vitrification plant.

Allegation: DOE Managers at Fernald Exercised Limited Oversight Over the Project and Allowed Problems at the Plant to Fester Too Long.

DOE’s Associate Director and Deputy Associate Director for Environmental Restoration at Fernald acknowledge that if DOE managers had exercised more oversight of Fluor Daniel Fernald’s early decisions on the project, DOE could have avoided some of VITPP’s major problems.
At the project’s beginning, site managers at the associate director level and above and managers at DOE headquarters involved themselves by approving the plant’s original baseline schedule. DOE’s primary project manager was also generally aware of early delays and overruns with the project. However, neither level of site managers exercised sufficient oversight of the project to correct problems before they became significant. For example, DOE senior site managers focused their attention during this early phase of the project on whether Fluor Daniel Fernald was meeting regulatory milestones for the overall operable unit. Although some DOE senior managers were aware of early procurement and design delays, they generally did not question the impact of these problems on the schedule or the appropriateness of Fluor Daniel Fernald’s corrective actions. This was largely because (1) no regulatory milestones were associated with construction of VITPP and (2) Fluor Daniel Fernald insisted that the problems would not affect its ability to meet the regulatory milestones of the overall operable unit.

DOE also did not assign early in the project a sufficient number of staff with the technical capability to challenge Fluor Daniel Fernald’s early assertions that the project would recover from its delays. During 1993, 1994, and the first half of 1995, DOE assigned primarily one staff member to the project, assisted by a facility representative who monitored field activities. They were to (1) prepare regulatory documents for the overall operable unit, (2) monitor the design and construction of the pilot plant, (3) review monthly invoices of project costs, and (4) prepare budget requests and respond to funding changes that affected the entire operable unit. In balancing this workload, DOE staff had neither the time nor the technical expertise to counter Fluor Daniel Fernald’s assertions that it could recover from the project’s initial delays and meet the plant’s cost and schedule goals. DOE did not have a firm basis for revising the plant’s cost and time estimates until August 1995, when Fluor Daniel Fernald admitted schedule delays.

Allegation: DOE Did Not Penalize Fluor Daniel Fernald for Poor Performance at VITPP Until November 1995. At That Time, DOE Penalized the Company $675,000 for Missing VITPP’s Milestones.

DOE has a cost-reimbursable, performance-based fee contract with Fluor Daniel Fernald, which reimburses the contractor for its monthly costs and provides for additional semiannual fees on the basis of the contractor’s performance. Specific to VITPP, the contractor can earn award fees for the project if it meets milestones that have been agreed to by DOE and the contractor and are included in semiannual performance evaluation plans. The contractor can also earn award fees if DOE subjectively determines that the contractor’s overall performance for the entire site, including VITPP, is satisfactory. Depending on its performance on VITPP, the contractor may earn all of the milestone and subjective award fees or some portion thereof. For example, the contractor can earn less than the maximum possible award fee in any 6-month period if (1) it misses one or more VITPP milestones and/or (2) its performance on the project is sufficiently poor for DOE to deduct fees from its overall subjective evaluation. DOE has twice paid Fluor Daniel Fernald award fees for meeting early VITPP milestones included in DOE’s semiannual performance evaluation plans.
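The fee mechanics described above can be made concrete with a small worked sketch. The function and inputs below are illustrative assumptions, not terms of the DOE-Fluor Daniel Fernald contract; the sketch simply shows how forfeited milestone fees and deductions from the subjective evaluation combine into a semiannual award.

```python
# Illustrative sketch only: hypothetical inputs chosen to mirror the second
# half of fiscal year 1995 described below, not actual contract terms.

def semiannual_award(milestones, subjective_pool, subjective_score):
    """Compute a period's award fee.

    milestones: list of (fee_available, met) tuples for agreed-upon milestones.
    subjective_pool: maximum fee available under DOE's subjective evaluation.
    subjective_score: DOE's evaluation of overall site performance, 0.0 to 1.0.
    """
    milestone_fee = sum(fee for fee, met in milestones if met)  # missed = forfeited
    subjective_fee = subjective_pool * subjective_score
    return milestone_fee + subjective_fee

# One missed start-up milestone plus a reduced subjective evaluation:
earned = semiannual_award(
    milestones=[(675_000, False)],  # $675,000 milestone fee forfeited
    subjective_pool=1_620_000,      # maximum subjective fee for the period
    subjective_score=0.75,          # assumed score implying a $405,000 deduction
)
print(f"${earned:,.0f}")  # $1,215,000 -- about the $1.2 million described below
```

In this sketch, the 0.75 score is simply the value that reproduces the $405,000 deduction DOE applied for the period discussed below; the actual subjective evaluation is qualitative rather than a numeric formula.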
In fiscal year 1994, the contractor completed a VITPP safety analysis report on time and earned the full $135,000 in agreed-upon award fee for the milestone. Similarly, in the first half of fiscal year 1995, the contractor met the agreed-upon milestone for completing construction of a prefabricated VITPP auxiliary building and earned the full $270,000 associated with the milestone.

The second half of fiscal year 1995, ending October 31, 1995, was the first period in which the contractor did not earn the full amount of potential award fee. The contractor could have earned $675,000 for meeting VITPP’s start-up milestones. However, DOE determined that because of the missed milestones and general deficiencies in managing the project and controlling schedules, the contractor would not receive any of the fee. Furthermore, Fluor Daniel Fernald could have earned an additional $1.62 million in award fees for satisfactory performance at the entire site. DOE determined that because of project delays at VITPP, the contractor should receive $1.2 million—$405,000 less than the contractor could have earned. During fiscal year 1996, DOE determined that the contractor would not receive $2.16 million in potential award fees for missing VITPP milestones and for experiencing excessive cost and schedule overruns on the project.

When production ended at Fernald in 1989, about 200,000 gallons of UNH (uranium dissolved in nitric acid) remained in 18 stainless steel tanks in various locations at the Fernald complex. The tanks and their contents were a concern because (1) UNH was a mixed hazardous waste; (2) the tanks, valves, and other equipment used to store the solution were approximately 40 years old and were subject to periodic leaking; and (3) DOE’s surveillance of the tanks cost about $100,000 per year. Consequently, in 1991, DOE approved a contractor-proposed project for the removal of the UNH solution. The UNH project consisted of several steps, including (1) precipitating the uranium from the solution by adding certain chemicals, (2) filtering the residual material from the solution, (3) loading the residual material into drums, and (4) shipping the drums off-site. According to the DOE UNH project manager, the nonhazardous solution remaining from the project was discharged from the site in accordance with a discharge permit issued under the Clean Water Act.

DOE, Fluor Daniel Fernald, and the Ohio EPA consider the UNH project a completed success. Filtration of the residual material from the last UNH batch was completed on August 30, 1995. The Ohio EPA had mandated that the UNH solution be removed from the storage tanks by September 25, 1995. The shipment of the drummed UNH residual material to the Nevada Test Site began in April 1996 and was completed in September 1996. However, the project cost about $16.8 million and took about 5 years to complete. When the project was initially proposed in fiscal year 1991, Westinghouse—the Fernald on-site contractor at the time—estimated that by using existing equipment and former operating procedures with minor modifications, it would cost $750,000 and take about 7 months to remove the UNH solution from the tanks and put the residual material in drums. An April 1993 spill of UNH solution led to a determination that a more structured approach and new systems were needed to move forward.

Allegation: Fluor Daniel Fernald Used Defective Leakproof Pumps to Transfer UNH Solution Between Tanks During the Project.
Fluor Daniel Fernald did not use defective leakproof pumps to transfer UNH solution during the project. However, Fluor Daniel Fernald did install an initial style of transfer pump, and later a substitute style, that proved defective and leaked filtrate water during hydrostatic testing. Fluor Daniel Fernald’s failure to inspect and/or review the two styles of pumps beforehand contributed to the installation of the leaking pumps and the associated delay to the UNH project. Specifically, DOE records show that Fluor Daniel Fernald waived its right to witness a factory performance test on the initial style of pumps used on the project. Fluor Daniel Fernald gave the waiver, in part, because the pumps would also be examined on-site. When the pumps arrived in September 1994, Fluor Daniel Fernald installed them but found that they leaked because of cracked casings. The pumps were removed and sent back to the manufacturer for replacement or repair. DOE records further show that Fluor Daniel Fernald then installed substitute pumps without conducting an engineering review of them. According to Fluor Daniel Fernald memoranda, the substitute pumps were installed because they were already available on-site and their installation would keep the UNH project on schedule. However, the substitute pumps also leaked during testing; had vibration problems; were found to be incompatible with system supports, piping, and control instrumentation; and also had to be removed. Ultimately, in January 1995, Fluor Daniel Fernald and DOE decided to reinstall the initial pumps, after repair, and found that they worked properly.

Allegation: UNH Leaked From the System Because of Defective Equipment.

During 1993 through 1995, Fluor Daniel Fernald reported eight UNH project leaks to DOE through the Department’s occurrence-reporting system. Two of those reported leaks, involving filtrate water, can be attributed either directly or indirectly to defective equipment. In one case, in December 1994, about 500 gallons of filtrate water leaked from the system in large part because of a defective weld in system piping. A Fluor Daniel Fernald analysis of the defective weld revealed that the weld had cracked because of improper installation; it lacked adequate penetration as well as adequate thickness. Subsequently, Fluor Daniel Fernald also identified and corrected three other defective welds. In a second case, also in December 1994, about 10 to 15 gallons of filtrate water leaked from the system while one of the transfer pumps was being tested. Defective pipe line valves had previously been detected and removed so that the valves could be repaired. According to a DOE daily report on the UNH project, however, Fluor Daniel Fernald directed its construction contractor to reinstall the defective valves so that scheduled pump testing could continue. When pump testing continued, one of the defective valves had still not been reinstalled, and the line had not been closed off. With the pump running, filtrate water poured out of the line where the defective valve had been removed and onto the plant floor.

Allegation: Fluor Daniel Fernald Eliminated and/or Reduced the Inspection Requirements of Equipment Being Built for the UNH Project.

Three cases were identified in which Fluor Daniel Fernald eliminated and/or reduced the inspection requirements associated with the UNH project. In each case, the elimination and/or reduction of the inspection requirements led to further UNH project problems.
For example, in one case previously discussed, Fluor Daniel Fernald waived its right to witness a factory performance test on the transfer pumps prior to their shipment to Fernald. In a second case, Fluor Daniel Fernald eliminated the requirement to perform a dye penetrant test on in-process welds. The dye penetrant test is designed to ensure that welds are being done properly. According to a Fluor Daniel Fernald quality assurance inspector on the UNH project, Fluor Daniel Fernald eliminated the dye penetrant test so that the UNH project could stay on schedule. DOE’s special project team report on the Fernald allegations indicated that this test might have detected the defective weld that caused the leakage of about 500 gallons of filtrate water in December 1994. In a third case, Fluor Daniel Fernald elected not to test the acceptability of UNH construction that had been completed by one of its subcontractors. According to DOE’s UNH project manager, DOE expected the contractor to perform the testing. Subsequently, numerous problems were identified: a portion of the piping was built without secondary containment; there were cracked and substandard welds; pumps leaked upon installation; and defective valves (valves that either leaked or could not be easily opened and closed) had been installed. According to the DOE UNH project manager, Fluor Daniel Fernald elected to forgo the acceptance testing so that further UNH project testing could begin on schedule. After it was determined that removal of UNH would not begin on January 17, 1995, as mandated by the Ohio EPA, the DOE UNH project manager said that DOE required Fluor Daniel Fernald to conduct the construction acceptance testing before proceeding any further. This official added that DOE also realized it needed to pay closer attention to Fluor Daniel Fernald’s activities.

Allegation: While the UNH Cleanup Was Completed in August 1995, It Initially Was Delayed and Then Riddled With Design, Equipment, and Radiation Contamination Problems.

A February 1995 Fluor Daniel Fernald report on the UNH project confirmed much of this allegation. According to that report, there were discrepancies between key UNH documents regarding the project’s design and description; certain piping systems had been installed in an improper manner; and a UNH project leak had occurred because of a defective weld. Site officials also acknowledged that during 1991-94, there were certain delays and a myriad of problems associated with this project, which DOE initially estimated would be completed in November 1991. For instance, according to Fluor Daniel Fernald’s deputy project manager on the UNH project, initially there was poor process control, inadequate documentation, and poor labeling of the existing tank and system components. This official added, however, that Fluor Daniel Fernald made tremendous strides in correcting these problems during 1995. Our review confirmed that Fluor Daniel Fernald made progress on the UNH project in 1995, particularly after it made certain personnel changes, adding more and better qualified personnel to the project.

Allegation: Fluor Daniel Fernald Repeatedly Made False Performance Claims to DOE Regarding the Project by Stating That It Had Successfully Completed Various Studies and Equipment Testing. In Turn, DOE Failed to Review Fluor Daniel Fernald’s Performance Claims.
We identified no incidents in which Fluor Daniel Fernald made false performance claims to DOE. On the contrary, Fluor Daniel Fernald’s status reports on the UNH project appear to accurately present the progress, or lack of progress, being made on the project. In addition, DOE’s records indicate that the Department was well aware of the many problems associated with the project.

Allegation: Fluor Daniel Fernald Was Not Financially Penalized for Its Poor Performance or the Deceptive Performance Reports.

Although Fluor Daniel Fernald was not financially penalized during the UNH project, it did not receive $540,000 in award fees that it could have earned had its performance been better. In a somewhat related matter, DOE/Fernald officials have submitted 18 UNH-related requests to the site’s Avoidable Cost Committee that would compel Fluor Daniel Fernald to return certain funds to DOE under the Department’s avoidable cost rule. Under this rule, as provided in the contract between DOE and Fluor Daniel Fernald, the contractor is responsible for any direct costs that were avoidable and were incurred by Fluor Daniel Fernald, without any fault of DOE, exclusively as a result of negligence or willful misconduct on the part of contractor or subcontractor personnel in performing work under the contract. Included in the 18 requests were requests related to (1) the removal and reinstallation of the UNH transfer pumps, (2) the leakage of filtrate water because of a defective weld, and (3) the leakage of filtrate water because of a missing pipe line valve (see our earlier assessment of these incidents). As of November 1, 1996, the first two requests had not been closed; DOE was performing an independent evaluation of the requests to determine the incidents’ impact on the UNH project’s cost and schedule. Regarding the third request, involving the leakage of filtrate water because of a missing pipe line valve, DOE closed the case because the incident had no significant impact on the project.

Allegation: The Identities and Medical Conditions of Three Workers Who Were Splashed and Contaminated With UNH Were Not Disclosed.

In April 1995, three workers were splashed as a result of a UNH spill. DOE redacted the names of the individuals involved in the spill from information provided to the press because of Privacy Act considerations. According to DOE’s Director of Public Affairs, representatives of the press were not provided with medical information on the workers because they did not request the information. During our review, we interviewed two of the three workers involved and were told that none of the three was harmed by the spill. According to our DOE audit liaison, the third worker involved in the spill had quit his employment at Fernald and was not available for an interview.

During our review, we identified other project management problems that affected the UNH project. Specifically, contrary to DOE’s requirements, many project management documents key to the success of the UNH project were not prepared until late in the life of the project or were not prepared at all. The unavailability of these documents in the early stages of the project contributed to the project’s cost growth and schedule delay. In addition, UNH lessons learned were not always shared with other Fernald projects. As a result, certain pipe line valves known to be defective on the UNH project were subsequently installed on the Vitrification Pilot Plant.
According to a September 30, 1996, memorandum from Fluor Daniel Fernald to DOE, some of those valves were being replaced.

DOE’s project management order considers the preparation of certain documentation to be key to the success of any project. This documentation explains, among other things, what is going to be done, how it is to be accomplished, and who will be responsible for carrying out the project. According to information obtained from site officials, certain key documents were not prepared until late in the life of the project or were not prepared at all. One such document is the Technical Information Plan, which identifies all DOE and other requirements that Fluor Daniel Fernald had to comply with in removing the UNH. The plan should have been prepared at the project’s outset in fiscal year 1990; however, it was not prepared until November 1994. According to a Fluor Daniel Fernald evaluation report on the UNH project, the technical information plan was prepared late because the UNH project was perceived to be a simple project. The evaluation report added that because of the delay in publishing this plan, significant UNH work was not done according to DOE’s requirements, delays occurred in accomplishing work because of unclear lines of responsibility, and a full understanding of the project’s obligations was lacking.

Other documents prepared late include a quality assurance plan and a critical path schedule; a project management plan was not prepared at all. The quality assurance plan, which was prepared in January 1995, describes the processes that will be used to detect, control, correct, and prevent UNH project problems. The critical path schedule, which was prepared in February 1995, shows the interrelationships among all phases of the project, including transfer pump redesign and construction, weld inspection and repair, operator training, and the removal of UNH. The project management plan, which was not prepared, is supposed to contain, among other things, a master milestone schedule, a project budget, and a listing of key project personnel by name and oversight responsibility.

Site officials offered us various reasons why the preceding documents were prepared late or not at all. According to a Fluor Daniel Fernald official involved in evaluating the UNH project, Fluor Daniel Fernald personnel at the outset of the project did not know what documents were required by DOE. According to the DOE project manager on the UNH project, from March 1993 to July 1994, Fluor Daniel Fernald viewed the UNH project as an extension of Fernald’s production operations. The manager added that Fluor Daniel Fernald believed that if the procedures in place were good enough for production, then they were also good enough for the removal of UNH. The manager further said that DOE did not insist on the preparation of certain key documents because it was believed that the emergency nature of the UNH removal took precedence over other matters, such as the preparation of documents.

DOE’s project management order also emphasizes the importance of sharing lessons learned. This order stresses that when problems occur on a project, those problems should be reported so that similar problems do not occur on other DOE projects. We found one instance in which UNH lessons-learned information about defective pipe line valves was not shared with another Fernald project.
During the testing on the UNH project in December 1994, several problems were encountered with the performance of certain pipe line valves. Specifically, the valves were found to leak and were difficult to open and close, and the handles failed after limited operation. After further evaluation of the valves, Fluor Daniel Fernald abandoned their use on the UNH project in January 1995 and replaced them with another style of valve. Subsequently, the same type of defective valve was installed, and experienced problems, on VITPP. According to a September 30, 1996, memorandum from the Fluor Daniel Fernald Vice President for Waste Management Technology and Silo Projects to DOE, some of these defective valves on the VITPP were being replaced. This official said that the valves in question were determined to have a design deficiency and should not be used in systems transferring radioactive and/or hazardous materials. This official added that no root cause analysis was done on the defective valves that would have alerted site officials against the valves’ further use. This Fluor Daniel Fernald official subsequently told us that such an analysis was not done because the defective valves on the UNH project were not placed into operation.

The following discusses DOE’s processes for ensuring that Fluor Daniel Fernald adheres to safety and health requirements and information relevant to the allegations published by the Cincinnati Enquirer about safety and health conditions at the site.

The operations at DOE’s Fernald site pose a variety of potential hazards to workers and the public located nearby. Although the production of uranium metal has ended, a large amount of nuclear materials and chemicals is stored at the site. Radioactive hazards include contaminated facilities and nearly 16 million pounds of stored uranium, while chemical hazards include acids and process waste. Furthermore, ongoing decontamination and decommissioning activities pose a variety of hazards to workers. Site activities include the decontamination and dismantlement of production facilities, construction activities related to environmental cleanup, and waste management.

DOE requires Fluor Daniel Fernald to comply with numerous safety and health standards aimed at minimizing the risks posed by site operations. Such standards include DOE orders and regulations pertaining to a range of functional areas, such as the protection of workers and the public from radiation, nuclear criticality safety, maintenance, quality assurance, operations, fire protection, and occupational safety and health. The Fernald Area Office’s Office of Safety and Assessment is primarily responsible for performing the area office’s oversight of the contractor to ensure compliance with these requirements. The Area Office’s safety management performance has been subject, in turn, to oversight by the Defense Nuclear Facilities Safety Board (DNFSB) and by DOE’s headquarters offices of Environmental Management (EM) and Environment, Safety, and Health (ES&H).

From 1993 through 1995, officials representing DNFSB, EM, and ES&H raised serious concerns regarding the Fernald Area Office’s capability to ensure the contractor’s compliance with DOE’s safety and health requirements. The actions taken by the Fernald Area Office in response to these concerns have improved its ability to oversee the contractor’s safety and health performance.
The Fernald Area Office’s level of oversight in fiscal year 1996 was significantly higher than the level it exercised in previous years.

In reviewing the site’s operations, DNFSB found that the Fernald Area Office had inadequate plans and preparations to supervise the contractor’s activities, did not have adequate technical staff to ensure that safety requirements were adhered to, and did not closely monitor the daily activities of the contractor. In its Recommendation 93-4, issued in June 1993, DNFSB recommended, among other things, that DOE develop and implement a technical management plan for Fernald. This plan would define the responsibilities and necessary qualifications of the DOE staff at the site and outline a detailed program for ensuring Fernald’s compliance with applicable standards related to public and worker safety. DNFSB also recommended that DOE “immediately establish a group of technically qualified Facility Representatives at Fernald to monitor the ongoing activities of daily operations at the site.” In response, the Fernald Area Office developed a Technical Management Plan for the site, established a Facility Representative Program, and initiated a qualification program for the facility representatives.

However, in July 1994, EM reviewed the Fernald Area Office’s program for assessing operations at the site and found it to be unsatisfactory. Specifically, EM found that the Fernald Area Office was not conducting required assessments, did not systematically follow up on prior assessments, did not transmit the assessment reports to the contractor, and was not considering assessment results in the award fee process. In response, the Fernald Area Office developed a plan for its Conduct of Operations assessment program, developed and implemented a schedule of assessments, started reporting the assessment results to the contractor and following up to ensure that the contractor corrected identified problems, and started considering the assessment results in award fee decisions.

In spite of this progress, in February 1995, site residents from DOE’s ES&H Office reported that the Fernald Area Office’s oversight program lacked “the structure and resources necessary to validate the adequacy of the contractor’s operational safety and health programs.” Specifically, they reported that the Fernald Area Office did not have a formalized system in place to track and show trends in the status of safety and health deficiencies it had identified, that the Fernald Area Office’s line managers did not conduct routine walk-throughs of Fernald facilities, and that the Fernald Area Office had not developed procedures for implementing its safety and health responsibilities. To address these problems, the Fernald Area Office started to develop a computerized tracking and trending system, set up a program requiring the Fernald Area Office’s personnel to conduct formal, documented walk-throughs of Fernald facilities, and issued procedures regarding its safety and health oversight programs. It was not until May 1995, when EM performed a follow-on review, that the area office’s program for assessing operations was found to be satisfactory.

To determine the extent to which the Fernald Area Office’s oversight activity has changed over time, we obtained data on the number of reviews of the contractor’s safety and health performance that the Fernald Area Office formally transmitted to the contractor from fiscal year 1993 through fiscal year 1996. (See table II.1.)
The contractor is expected to take appropriate action on all review results that the Fernald Area Office formally submits to it. These reviews can be formal assessments of the contractor’s operations or less rigorous surveillances. We found that the Fernald Area Office transmitted few assessments and surveillances to the contractor in 1993 and 1994 but had significantly increased the number transmitted by fiscal year 1996. These reviews covered such topics as the conduct of operations, compliance with the Occupational Safety and Health Administration’s construction asbestos regulation, radiological control practices, implementation of DOE’s nuclear safety regulations, and quality assurance.

According to the Fernald Area Office’s Associate Director for Safety and Assessment, the low level of oversight activity in 1993 and 1994 is attributable in part to confusion during that period over the level of oversight that DOE should exercise over an environmental restoration management contractor. Furthermore, since the Oak Ridge Field Office had the primary responsibility for oversight at Fernald prior to 1993, the Fernald Area Office needed time to develop programs and procedures for oversight. Finally, the Fernald Area Office lost a number of its technical staff to the Ohio Field Office when that office was established in 1994.

Although the Fernald Area Office’s oversight programs have improved, they still have weaknesses that limit DOE’s ability to ensure that Fluor Daniel Fernald is fulfilling applicable safety and health requirements. Problems include weak planning of assessment activities, slow progress in ensuring that some key oversight staff are properly qualified, and weak processes for ensuring that identified safety problems are adequately corrected. The Fernald Area Office is initiating or planning a number of improvements to address these weaknesses, but it is too early to determine whether these actions will completely eliminate them.

Although a May 1996 report on environment, safety, and health programs at Fernald by DOE’s ES&H Office found the safety management at Fernald to be effective, it identified several areas where improvements were needed. One of these areas is the Fernald Area Office’s planning of its assessment activities, which has not been integrated or systematic. For example, the Fernald Area Office has not fully implemented its Compliance Assurance Plan—the section of the Technical Management Plan that outlines what assessments it must perform. Some areas, such as radiation protection and the conduct of operations, have been covered well. Others, however, such as waste management and occupational medical program performance, were not covered until the fiscal year 1997 plan, according to DOE.

Furthermore, we found that the Fernald Area Office has not planned the oversight activities of its facility representatives well. DOE’s facility representatives are responsible for monitoring the performance of their facility and its operations and serve as DOE’s primary points of contact with the contractor. Despite their important role, the Fernald Area Office has no rigorous process in place to ensure that its facility representatives cover the various functional areas as they carry out their monitoring responsibilities.
For example, the Fernald Area Office’s program does not have an assessment schedule to govern the work of its representatives, as called for by DOE’s Standard on Facility Representative Programs, the Ohio Field Office’s procedures regarding facility representative programs, and the Fernald Area Office’s own plan for its facility representative program. The purpose of such a schedule is to ensure that the facility representatives conduct a comprehensive and systematic review, through assessments and surveillances, of all aspects of the facility’s operations over an established period of time. According to the head of the Fernald Area Office’s Safety and Assessment Office, the facility representatives have primarily conducted walk-throughs of facilities rather than more formal assessments and surveillances because, as of August 1996, four of the six representatives had not yet fulfilled basic qualification requirements and were not yet ready to conduct these types of reviews. Instead, other Safety and Assessment Office staff have performed assessments and surveillances of the contractor. The Fernald Area Office has developed an assessment schedule that delineates what assessments these other staff must perform, but it has not developed a schedule for surveillances. According to the head of the Safety and Assessment Office, the Fernald Area Office conducts reactive surveillances in response to problems that arise instead of planning them in advance.

Although the Fernald Area Office’s facility representatives focus on conducting walk-throughs of their assigned facilities, these walk-throughs are unstructured because the representatives have not developed guidelines for performing them, as called for by the Ohio Field Office’s procedures on facility representative programs. The purpose of such guidelines is to ensure that information is gathered systematically throughout a facility. According to the head of the Fernald Area Office’s Facility Representative Program, the program has not yet evolved to that level of formality.

We found that the Fernald Area Office has been slow in ensuring that its facility representatives complete basic qualification requirements. In spite of DNFSB’s June 1993 recommendation that DOE immediately establish a group of technically qualified facility representatives at Fernald, as of October 1996, only two of the agency’s six representatives had completed qualification requirements. The qualification process involves the completion of a minimum of 6 months on-site, training regarding the site and specific projects/facilities, required reading, and one written and one oral examination. According to staff of DNFSB, the effectiveness of unqualified facility representatives could be hampered by their lack of familiarity with their facility or its processes. The head of the Safety and Assessment Office explained to us that when he assumed direct responsibility for the facility representatives in January 1996, he found that two of the facility representatives, who had started in February and March 1995, were not very far along in fulfilling their qualification requirements. He then hired three more representatives in January and February 1996 and has concentrated on correcting delays in training since taking responsibility for the program.
After we completed our fieldwork, the Fernald Area Office told us that as of November 1996, five of the six facility representatives had completed their qualification requirements.

Although the Fernald Area Office has increased the number of assessments and surveillances that it produces and transmits to the contractor for action, the office has not yet instituted processes that ensure that the contractor adequately corrects the problems that the Fernald Area Office has identified in these reviews. For example, the Fernald Area Office has lacked a system for tracking the status of assessment and surveillance findings and showing trends in identified deficiencies. Consequently, the office has not had readily available information on what safety and health problems it has identified and the current status of these problems. The May 1996 report on Fernald by the ES&H Office also identified weaknesses, such as the inadequate verification of corrective actions and inadequacies in the oversight of the contractor’s corrective action processes.

Furthermore, the Fernald Area Office’s facility representatives generally do not formally document their findings. The representatives usually relay their findings to the contractor verbally rather than in formal reports. The representatives are instructed to record their daily or weekly observations in their log books, which are informal records of their activities and are not transmitted to the contractor. According to the Fernald Area Office’s Associate Director for Safety and Assessment, although the facility representatives are not required to prepare field observation reports, they have recently been doing so to a greater extent. The Fernald Area Office’s Office of Safety and Assessment intends to document these field observation reports in its new tracking and trending system, once it is implemented. The lack of formal reporting by the Fernald Area Office’s facility representatives is contrary to DOE’s Standard on Facility Representative Programs and the Ohio Field Office’s procedures on facility representative programs, both of which call for periodic formal reporting by facility representatives. The purpose of this reporting is to transmit findings and follow-up items from surveillances and walk-throughs to the contractor and area office management. Such reporting helps DOE realize the maximum benefit from its facility representative programs.

As a result of the above weaknesses, the Fernald Area Office’s ability to ensure that identified problems are adequately corrected has been limited. For example, in the case of maintenance activities, the Fernald Area Office found in April 1995 that the contractor had problems in maintaining compliance with procedures and maintenance controls throughout the site and requested that these problems be corrected prior to the next assessment. During the next assessment, in November 1995, however, the Fernald Area Office found that these problems continued. Although the Fernald Area Office again requested that the contractor correct these problems, the ES&H Office found in May 1996 that the site still had significant and pervasive problems with maintenance, including nonadherence to procedures and deficient procedures. In some cases, continuing problems have adversely affected, or could adversely affect, operations, safety equipment, and workers. For example, two sitewide power outages in January 1996 (one of which resulted from a fire) were attributable to inadequate maintenance of facilities at the site.
The consequences of these events included damage to equipment and delays in work activities.

Our examination of DOE's performance evaluations of Fluor Daniel Fernald for determining award fees has shown that the Fernald Area Office has used this mechanism to hold Fluor Daniel Fernald accountable for improving its performance in protecting workers from radiation. However, the office has not effectively used award fees to hold the contractor accountable in some other key areas. For example, the performance evaluation for the period October 1995 to March 1996 rated Fluor Daniel Fernald's overall safety performance as excellent but did not include the contractor's performance in correcting maintenance problems as a criterion. In addition, although the May 1996 ES&H Office report cited electrical safety as another area needing improvement, the performance evaluation of the contractor's safety performance for the period October 1995 to March 1996 did not include electrical safety as a criterion in rating the contractor.

An emphasis in the award fee process on meeting deadlines, combined with an inadequate emphasis on safety performance, can lead the contractor to develop a “rush mentality” that could compromise safety. This problem has been noted in two reports on Fernald. A September 1995 report by DOE, Fluor Daniel Fernald, and consultants stated that an emphasis on meeting project target dates at Fernald contributed to a breakdown in contamination control and an increase in personnel contaminations in July and August 1995. In its May 1996 report on Fernald, ES&H noted that “Due to the strong emphasis on cost and schedule . . . items not directly identifiable in the critical path, such as maintenance activities, are being assigned a low priority and given minimal funding. Deferral of these items may have a negative synergistic impact on site safety and infrastructure and, therefore, on the ten-year baseline.”

The Fernald Area Office is continuing its efforts to strengthen its oversight programs and is in the process of instituting or planning improvements aimed at addressing the weaknesses cited above. The office initiated several of these efforts in response to the May 1996 ES&H Office report. It is not yet clear, however, whether these actions will fully resolve the problems discussed here. Actions underway or planned include the following:

To plan its assessment activities in a more integrated manner, the Fernald Area Office is revising its Technical Management Plan to include a new master schedule of its assessment activities. This schedule will specify what assessments are required for each functional area. The office plans to assess each functional area at least once per year.

Regarding the planning of the facility representatives' oversight activities, the Fernald Area Office's Associate Director for Safety and Assessment told us that the office plans to develop a more formalized schedule for the representatives' work. This schedule would indicate what areas they should be covering during their walk-throughs as well as through surveillances and assessments.

To accelerate the formal qualification of its facility representatives, the Ohio Field Office set a goal of qualifying all of them by November 30, 1996. The Fernald Area Office has been working toward this goal, and by December 31, 1996, five of the six representatives were qualified.
To improve its oversight of Fluor Daniel Fernald's corrective action processes, the Fernald Area Office audited the contractor's corrective action program in August 1996. The office found that in responding to assessments, Fluor Daniel Fernald had failed to identify the root causes of problems and the actions taken to prevent their recurrence.

To improve its ability to track and show trends in the safety and health problems that it identified, the Fernald Area Office is implementing a new tracking database. According to the Fernald Area Office's Associate Director for Safety and Assessment, this database will allow the Fernald Area Office to document and track the status of findings generated by its staff and to track deficiencies over time in order to identify adverse performance trends. Field observation reports generated by the facility representatives will be included in this database.

Regarding the use of the award fee process to hold the contractor accountable for weak safety performance, the Fernald Area Office included new detailed criteria pertaining to Fluor Daniel Fernald's maintenance performance and corrective action processes in its performance-based fee determination plan for the period October 1, 1996, through March 31, 1997. For example, the plan includes as a criterion the extent to which occurrence reports identify the root causes of problems and effective corrective actions. An occurrence is an abnormal event or condition at a DOE-owned or -operated facility that has the potential to significantly affect safety and health or the environment.

Because the above initiatives are still either in the planning or early implementation stages, it is too early to determine whether they will be successful in eliminating the remaining weaknesses in the Fernald Area Office's safety and health oversight programs. However, in some areas, it appears that the actions taken so far by the Fernald Area Office have been limited and may not be adequate to resolve existing problems. In particular, the Fernald Area Office's actions with regard to the planning and documentation of its facility representatives' work and the use of its award fee process to motivate improvements in the contractor's safety performance may not go far enough to eliminate past weaknesses in these areas.

From February through May 1996, the Cincinnati Enquirer made numerous allegations about health and safety problems that had occurred at the Fernald site since January 1993. Many of these were taken from DOE's Occurrence Reporting and Processing System (ORPS). As a method of monitoring the safety of the workplace, DOE requires its contractors to establish a reporting program for the timely identification, categorization, notification, and reporting of occurrences at DOE facilities. DOE's ORPS was developed for this purpose.

Allegation: More Than 1,000 Serious Safety-Related Problems Have Occurred Since January 1, 1993.

Although Fluor Daniel Fernald reported many safety-related occurrences, we did not find evidence to support the number stated in the allegation. According to the Cincinnati Enquirer reporter responsible for writing the allegations, the number of safety-related problems was based on occurrence reports, workers' reports of injuries through medical offices, and Fluor Daniel Fernald's internal reports, such as electronic mail and radiation technical reports. He said he could not provide the documentation to support the number because that would endanger his sources.
To determine the number of serious safety-related problems at Fernald, we used DOE's ORPS because the system contains the most safety-significant events that have occurred at Fernald and other DOE sites. ORPS contains 317 occurrence reports from January 1, 1993, to February 12, 1996 (the day of the Cincinnati Enquirer article), which are categorized as emergencies, unusual occurrences, or off-normal occurrences. Of these 317, only 1 was categorized as an emergency. Emergency occurrences are the most serious events that could endanger or adversely affect people, property, or the environment. The one emergency occurred in October 1994, when a tractor trailer carrying low-level waste from Fernald to the Nevada Test Site was involved in a traffic accident and overturned. The accident occurred in Missouri, and no contamination was released. Fifty-seven occurrences were categorized as unusual. An unusual occurrence has a significant or potential impact on safety, environment, health, security, or operations, such as releases of radioactive or hazardous materials above established limits, fatalities, or significant injuries. Two hundred fifty-nine occurrences were categorized as off-normal. An off-normal occurrence adversely affects, or could adversely affect, the safety, security, environment, or health of a facility; examples include contamination of personnel or their exposure to contaminants, operational procedural violations, and the identification of actual or potential defective items, material, or services that could impose a substantial safety hazard.

Allegation: Seventy-Eight Contamination Incidents Occurred.

Although Fluor Daniel Fernald was having problems with contamination, the allegation overstated the number of contaminations. According to ORPS, Fernald had a total of 69 contamination occurrences from January 1, 1993, to February 12, 1996, the date of the allegation. They included 51 personnel contaminations, which can be contamination of the skin or clothing. The remaining 18 were other types of radioactive contamination, such as the loss of control of radioactive material or the spread of contamination. The practices for conducting DOE radiological operations are contained in DOE's Radiological Control Manual. Radiation protection standards, limits, and program requirements for protecting individuals from radiation are contained in 10 C.F.R. 835.

During 1995, Fernald was experiencing problems with radiological control, according to several DOE assessments. For the period April 1 through September 30, 1995, Fluor Daniel Fernald received a rating of unsatisfactory from DOE for the performance criterion of reducing the number of radiological occurrences. Also, in April 1995, DOE's ES&H site representatives found that the failure to properly control radioactive material was an ongoing problem at Fernald, and in July 1995 they noted that the incidence of personnel contamination events had increased, including contamination on the soles of employees' shoes and contractor-issued pants. As a result of the increased personnel contamination events in 1995, a team of radiation professionals, including staff from DOE, Fluor Daniel Fernald, and consultants, investigated and reported on the site's contamination control program. The team found that, among other things, the workforce's knowledge of the limitations of personal protective clothing (also called anticontamination clothing) was poor.
In addition, the team reported that during July and August, when personnel contamination events were determined to be related to the wearing of single anticontamination clothing, Fluor Daniel Fernald was reluctant to move quickly to the use of double anticontamination clothing. The team believed that the reluctance was due to Fluor Daniel Fernald's concern that it might jeopardize meeting an award fee milestone because of the work-rest regimen that employees must use when wearing double anticontamination clothing.

According to several assessments in 1996, the program had improved. For the period October 1, 1995, through March 31, 1996, Fluor Daniel Fernald received a rating of satisfactory from DOE for the performance criterion of reducing radiological occurrences. A February 15, 1996, ES&H report compared personnel contamination events per 100 staff years at Fernald with those at other comparable DOE remediation sites and concluded that, while the type and number of occurrences indicated weaknesses in Fernald's Radiological Controls Program, the rate of occurrences was not excessive. The May 1996 ES&H Oversight report found that although clear safety policies and goals had been established at Fernald, an area that required strengthening was a continued policy emphasis on occupational and environmental as low as reasonably achievable (ALARA) goals and objectives. The Fernald Area Office and Fluor Daniel Fernald responded that DOE and Fluor Daniel Fernald would improve management's involvement in and commitment to ALARA. The Fernald Special Project Team's report stated that it found all of the elements of a comprehensive radiation safety program to be in place and functioning. The report also stated that 9 of the 78 alleged incidents did not involve contaminants and that workers were primarily exposed to low-level “nuisance” contamination left over from the early days of the site's operations.

Allegation: Seven Criticality Incidents Occurred Where Drums of Radioactive Waste Were Stored Too Closely Together.

ORPS contains seven occurrence reports on criticality safety violations from September 1993 through June 1995, two of which related to drum storage spacing. None of these were criticality incidents as defined by DOE. A criticality incident is the release of energy as a result of accidentally producing a self-sustaining or divergent neutron chain reaction. According to a June 1995 ES&H assessment, the likelihood of an inadvertent criticality incident at Fernald, while possible, was small because of the physical nature of the enriched nuclear material there. The seven violations of criticality safety procedures included two occurrences of drums being stored too close together, two in which drums were missing, one in which a drum was in an unapproved storage location, one in which drums were stored so that they blocked a radiation detection alarm, and one in which drums were mislabeled and, as a result, stored in an inappropriate place.

Audits and assessments of the criticality safety program at Fernald, conducted during 1994 and 1995, repeatedly found the program to be deficient. Fluor Daniel Fernald received an unsatisfactory rating from DOE for its nuclear criticality program for the period April 1 through September 30, 1994.
For the next period, October 1, 1994, through March 31, 1995, DOE stated that substantial improvements were required across this entire program before it could reach a satisfactory level of performance. In addition, a March 1994 independent audit of Fernald's nuclear criticality safety found that the nuclear criticality safety program was well documented but that its implementation was less than adequate. The Fernald Area Office also found problems with Fluor Daniel Fernald's criticality safety program in October 1994 and concluded that timely and rigorous corrective actions for improving the conduct of operations in the criticality safety program were not being aggressively undertaken. In June 1995, the Fernald Area Office again found major shortcomings in this program; for example, required criticality safe-operating limits were not properly posted at access points for several buildings, and contractor personnel lacked knowledge about criticality areas.

By 1996, several assessments of Fluor Daniel Fernald's nuclear criticality safety program reported improvements in the program. For the period April 1 through September 30, 1995, DOE found that Fluor Daniel Fernald took effective actions to address specific concerns with the criticality program on-site and that, by the end of the reporting period, improvements were observed. For the period October 1, 1995, through March 31, 1996, the DOE performance evaluation committee's report stated that Fluor Daniel Fernald demonstrated excellence in the criticality safety program following external assessments. Furthermore, a February 2, 1996, Fernald Area Office report found that the criticality safety program had moved beyond the inadequate rating and currently met DOE's requirements. In addition, the May 1996 ES&H oversight evaluation report stated that Fluor Daniel Fernald's criticality safety program was strong and well documented but that improvement in training and technical competence was needed. Also, the Fernald Special Project Team report stated that Fluor Daniel Fernald's criticality safety program had been transformed in the preceding 6 months into a satisfactory and functional program and found that the improved storage of enriched uranium effectively mitigated the potential for a criticality accident and minimized the potential to violate control procedures.

Allegation: Using Thousands of Counterfeit or Substandard Fasteners and Bolts Created a Life-Threatening Situation.

Fluor Daniel Fernald identified many suspect and/or counterfeit parts; however, such parts have been a concern throughout the United States since the mid-1980s, when they were found in such places as aircraft, nuclear weapon production facilities, and buildings. Counterfeit bolts do not possess the capabilities of the genuine bolts they imitate and can threaten the reliability of industrial and consumer products, national security, or human lives. In August 1992, DOE issued a quality alert bulletin that highlighted the concerns associated with such parts, provided guidance on their identification, and directed its field offices to take certain actions. In a May 1996 report, DOE stated that there had been no reported instances of accidents or near misses within DOE as a result of suspect/counterfeit parts. By September 1995, Fluor Daniel Fernald had completed all of its inspections of facilities and mobile equipment. Of a total of 37,527 parts inspected, 3,935 were considered suspect/counterfeit, and 2,232 of these needed to be replaced.
The contractor issued 56 work orders to replace the parts. As of November 1996, the contractor had completed 26 work orders and canceled 9 after engineering reevaluations. The 21 remaining work orders cover 321 parts. In November 1995, the ES&H site representatives assessed the Fernald suspect and counterfeit parts policy and found that it had been developed as instructed by DOE's Office of Environmental Management. However, the May 1996 ES&H oversight report found that the suspect/counterfeit parts program had not been adequately implemented because remedial work orders were not performed. Fluor Daniel Fernald responded that the remaining work orders would be scheduled and completed as resources became available. Fluor Daniel Fernald expects to complete replacement activities by September 1, 1997. The Fernald Special Project Team report stated that the team was confident that the current counterfeit bolt inspection program implemented by Fluor Daniel Fernald was effective. The team stated that over the preceding 2 years, crews at Fernald had been inspecting the site and looking for suspect bolts. When counterfeit bolts have been found in load-bearing or structural applications, they have been replaced. Also, no safety events or equipment failures related to counterfeit bolts have occurred at the Fernald site.

Allegation: Workers Who Were Impaired by Drugs or Alcohol and Repeat Offenders Were Allowed to Keep Their Jobs.

Although some employees have tested positive for drugs and alcohol, Fluor Daniel Fernald's records show that repeat offenders are terminated. In September 1994, the Fernald Area Office approved Fluor Daniel Fernald's substance abuse program. The program included random testing for controlled substances and alcohol, testing for reasonable suspicion, and preemployment testing. Fluor Daniel Fernald's substance abuse policy is that if a person tests positive for the use of controlled substances, an appointment is made for the employee to enter the employee assistance program. After the employee completes the program's treatment and upon receipt of a negative substance abuse test, the person is permitted to return to work. Later, the employee is tested on an unannounced basis. If this test is positive and the person is a Fluor Daniel Fernald employee, the person's employment is terminated. If the person is a subcontractor employee, that person's access to the Fernald site is permanently denied.

Fluor Daniel Fernald's reporting system indicates that some workers tested positive for substance abuse in random testing, testing for reasonable cause, and testing after an accident. However, workers testing positive after completing the rehabilitation program and returning to work were terminated. In April 1995, Fluor Daniel Fernald started reporting occurrences of substance abuse in ORPS when it realized that a positive drug test result was considered an off-normal event. From April 1995 to February 1996, Fluor Daniel Fernald reported 32 occurrences of substance abuse. After a second positive drug test, 11 workers were either terminated or permanently denied access to the site. In July 1995, Fluor Daniel Fernald revised its employment procedures to require its new employees and subcontractor applicants to receive a confirmed negative result for drug testing before being issued a badge and reporting for work. In October 1995, Fluor Daniel Fernald reported to the Fernald Area Office on the increase in substance abuse reports at Fernald.
It attributed the increased reporting to the following: (1) positive drug-screening results were now required to be reported in ORPS and (2) the number of positive results from pre-access drug screens increased. In 1995, of the 894 subcontractor employees tested, 39 (4.4 percent) tested positive. From January through October 1996, of the 697 subcontractor employees tested, 22 (3.2 percent) tested positive. The Fernald Special Project Team report provided information on the Fluor Daniel Fernald substance abuse program as we described above and concluded that the identification of these employees was the positive result of an effective substance abuse program. The Fernald Area Office plans to assess Fluor Daniel Fernald's substance abuse program in the spring of 1997.

Allegation: Fluor Daniel Fernald Has Intimidated Workers to Prevent Them From Reporting Safety Concerns.

We did not find evidence to support this allegation. Both DOE and Fluor Daniel Fernald have employee concern programs to identify and resolve safety, health, and environmental concerns raised by employees, and some employees are reporting such concerns. The programs consist of hotline numbers for the employees to call to report concerns and forms that employees can complete and submit anonymously. From January 1995 through September 1996, Fluor Daniel Fernald received 85 hotline calls and 51 written concerns that were recorded in the safety suggestion log. For the same period, the Fernald Area Office received three hotline calls and eight written concerns.

A September 5, 1995, Fernald Area Office assessment found that the employee concerns hotline phone had a caller identification feature that did not protect the caller's anonymity. According to the Fluor Daniel Fernald official responsible for its safety concerns program, this situation was corrected in October 1995 with the installation of a conventional phone without caller identification and a conventional add-on answering machine, eliminating the potential for identification of the caller.

In addition to the employee concerns programs for reporting safety concerns, employees can become involved in safety through the Safety First program, an ongoing initiative created in 1994 to improve the safety culture at Fernald by creating an atmosphere that encourages employees at all levels of the organization to take ownership of safety. A part of the Safety First initiative is the work group concept, in which a group of workers working on a task with a common supervisor meets at the beginning of each day for 5 to 15 minutes to discuss safety issues and work concerns. The May 1996 DOE-ES&H Independent Oversight Evaluation Report concluded that the Safety First initiative and the associated safety work groups promote worker participation and empowerment and are operating effectively.

In addition, Fluor Daniel Fernald has conducted several surveys of employees' attitudes toward safety at Fernald—two in 1994 and one in 1995. The first survey was conducted during a May 1994 safety stand-down when employees stopped routine activities to examine their work areas and identify risky operations and unsafe conditions. The second and third surveys were conducted during August and September 1994 and from April through September 1995, respectively, as follow-ups and to satisfy a Fluor Daniel Fernald performance objective criterion established by the Fernald Area Office.
Fluor Daniel Fernald is continuing to survey workers; however, it does not plan to analyze and report the results until 1997. Two questions in the employee attitude surveys related to workers' attitudes in this area. Table II.2 shows how wage employees, i.e., union workers, responded to the questions.

Allegation: Workers Were Forced to Wear Torn, Ill-Fitting, or Improper Protective Clothing.

Although DOE's assessment found some personal protective clothing in poor condition, we did not find evidence that workers were forced to wear it. According to the DOE Radiological Control Manual, anticontamination clothing is worn when workers handle materials contaminated with removable contamination in excess of certain levels and for work in contaminated, highly contaminated, and airborne-radioactivity areas. The clothing consists of such items as coveralls, gloves, rubber overshoes, and hoods. Both DOE's manual and Fluor Daniel Fernald's procedures require that individuals inspect their anticontamination clothing before use for tears, holes, or split seams that would diminish protection and that they replace defective items with intact clothing. Also, contractor-issued clothing, such as work coveralls and shoes, should be considered the same as personal clothing and should not be used for radiological purposes.

During a walk-through of a pilot plant in April 1996, the Fernald Area Office's support contractor observed that much of the anticontamination clothing was in unsatisfactory condition with tears and missing buttons. As stated above, workers are to inspect the anticontamination clothing for defects and to reject unacceptable clothing. As a follow-up, the support contractor visited several other plants at the site and inspected the anticontamination clothing for general condition and integrity. The support contractor found that all other anticontamination clothing was in satisfactory condition with no observed defects and that a significant amount of the clothing appeared new. The support contractor concluded that the condition of the anticontamination clothing at the pilot plant was an isolated case.

From January 1995 through September 1996, four complaints in the Fluor Daniel Fernald safety suggestion log dealt with clothing. In one case, the person wanted larger-sized clothing of a particular type. The person was informed that this type of clothing did not come in a larger size than was already available. In another case, the laundry erroneously sent bags of contaminated shoe covers back to the user. According to Fluor Daniel Fernald, the problem was addressed by the supervisor to prevent it from happening in the future. In the two other cases, the complaints were about contractor-issued clothing, including complaints that employees could not get correct sizes and that the clothing was a hazard to wear. Contractor-issued clothing is not considered anticontamination clothing by DOE or Fluor Daniel Fernald. Fluor Daniel Fernald responded that it had bought over 300 sets of coveralls for employees to use and that the quantities and types of clothing are continuously under review. Fluor Daniel Fernald considers each of these employee concerns to be closed.

Allegation: Radiation Safety Training Decreased and Full Radiation Training Was Eliminated for Most Subcontractor Employees.

The radiation safety training requirements have not changed, nor has full radiation training been eliminated for subcontractor employees.
However, Fluor Daniel Fernald did eliminate redundancies in the training courses, which resulted in a reduction in the number of hours of training. The May 1996 ES&H oversight report stated that Fluor Daniel Fernald's training programs met applicable requirements. In addition, a DOE official told us that Fluor Daniel Fernald's training was sufficient under DOE orders and the DOE Radiological Control Manual.

Chapter 6 of the DOE Radiological Control Manual establishes the requirements to ensure that personnel have the training to work safely in and around radiological areas. The training requirements apply to all personnel entering DOE sites. The manual establishes standardized core course training and the required hours, including general employee radiological training (1 hour), radiological worker I training (8 hours), and radiological worker II training (16 hours). The required number of hours of course work has not changed since DOE issued the Radiological Control Manual in 1992, nor through its revisions in 1994 and 1996. Fluor Daniel Fernald has adopted the DOE Radiological Control Manual requirements for training its workers. In addition, the Occupational Safety and Health Administration requires that employees working at hazardous waste cleanup sites receive hazardous waste operations and emergency response training. Workers receiving radiological worker I and radiological worker II training also receive the requisite number of hazardous waste operations and emergency response training hours.

According to Fluor Daniel Fernald, when it took over the Fernald site in December 1992, it evaluated the requirements for access to the site and, as a result, streamlined the compliance training. Compliance training that had amounted to nearly 90 hours per employee working in restricted areas was reduced to 40 hours. According to a Fluor Daniel Fernald official, there had previously been separate courses for hazardous waste operations and emergency response and for radiological control. Fluor Daniel Fernald examined these two training programs and saw much commonality in such areas as hazard recognition and personal protective equipment. With the removal of the redundancies, the courses were pared down to their current length.

According to Fluor Daniel Fernald's radiological control requirements, everyone entering the controlled area is to be trained in the aspects of radiation protection to a level commensurate with their potential for exposure to radiological hazards. The training requirements also apply to subcontractor employees. According to Fluor Daniel Fernald, as of October 1996, 63 percent of workers employed by subcontractors had received radiological worker II training, 17 percent had received radiological worker I training, and 20 percent had received the general employee radiological training only. This compares with Fluor Daniel Fernald's wage workers, of whom 82 percent received radiological worker II training, 9 percent received radiological worker I training, and 9 percent received the general employee radiological training only.

Allegation: Fluor Daniel Fernald Failed to Keep Inspection Records of Hazardous and Radioactive Wastes.

A Fluor Daniel Fernald environmental compliance surveillance found problems with inspection records for hazardous waste management units. The Ohio EPA requires that owners or operators inspect areas where containers of waste are stored or were formerly stored.
The owners or operators are to look for leaks and for deterioration caused by corrosion or other factors. They are also required to record inspections in an inspection log and to keep these records for at least 3 years from the date of inspection. Fluor Daniel Fernald has inspection procedures and record-keeping requirements for hazardous waste management units. The procedures are for completing the inspection logs and performing inspections of container storage areas, equipment, above-ground storage tanks, and landfills that contain such wastes. The site has 32 hazardous waste management units that are inspected on a daily, weekly, monthly, or quarterly basis.

In a February 1996 environmental compliance surveillance of its hazardous waste management unit program, Fluor Daniel Fernald's Office of Environmental Compliance found, among other things, missing inspection logs, a lack of corrective actions being performed or noted, and inspectors who did not have the required training conducting the inspections. For the active storage units, 47 of the 627 (7 percent) required inspection logs were missing; for the inactive storage areas, 93 of the 2,031 (5 percent) required inspection logs were missing. After further investigation, Fluor Daniel Fernald found that although many of the inspections had actually been completed, the logs had not been submitted for filing in the operating record. As a result, the Fluor Daniel Fernald Environmental Compliance office required the person responsible for the facility to provide the missing inspection records and to follow up to ensure that corrective actions were taken. Fluor Daniel Fernald attempted to recover the missing inspection logs; some were recovered, but a number will probably never be found. In addition, hazardous waste management unit inspectors were required to complete hazardous waste management unit training.

In a March 15, 1996, letter, Fluor Daniel Fernald informed the Ohio EPA of the results of the surveillance and its actions to correct the deficiencies. In an April 5, 1996, letter, the Ohio EPA stated that the Fernald Environmental Management Project was in violation of the Ohio Administrative Code and DOE's agreement with the state. The Ohio EPA also stated that while it was concerned with the violation, the situation did not appear to result in a threat to site workers, the public, or the environment. DOE and Fluor Daniel Fernald responded in an April 19, 1996, letter that compliance personnel would perform weekly checks of the hazardous waste management unit areas and examine the operating records to ensure that inspections were being performed and that the documentation was placed in the operating record. A Fluor Daniel Fernald environmental compliance official told us that the contractor is continuing to review the inspection records.

Allegation: Drums of Radioactive and Other Toxic Liquids Leaked During Weekends. The Number of Leaks Was Underreported.

According to Fluor Daniel Fernald, leaks from drums discovered on the weekends were mitigated within the 24 hours required by the Ohio EPA. However, the number of leaky drums was underreported. The plant 1 pad is a storage area that was used for storing uranium-bearing material destined for recycling into production. In the mid-1980s, the drum population on the pad increased because material that had formerly been sent to waste pits was drummed and stored at plant 1. The outside storage resulted in significant deterioration of the steel drums because of weathering.
Fluor Daniel Fernald has been overpacking the deteriorated drums into new containers. According to the Ohio EPA, all containers on the plant 1 pad are to be inspected daily for leakage. Type I drums—those having a leak through the container to the pallet and/or ground—are recorded on the container inspection form. For any drums that are actually leaking, DOE is required to contain the release or spill immediately after detection, and in no case more than 24 hours after discovery. Mitigation can include patching the leak if possible, transferring the materials from the leaking drum, and overpacking the leaking drum. Any spill is controlled with dikes of sorbent materials.

Fluor Daniel Fernald admitted that it underreported the number of leaky drums to the Cincinnati Enquirer. For calendar year 1995, of the 84 type I drums that should have been reported to the Assistant Emergency Duty Officer (AEDO), only 33 were reported. From January through March 6, 1996, 24 of 28 type I drums were reported. Fluor Daniel Fernald stated that it took corrective actions, such as conducting training for supervisors and developing a checklist for tracking follow-up actions. According to Fluor Daniel Fernald, of the 84 type I drums that should have been reported in 1995, 10 occurred on the weekends. For nine of these, weekend drivers were scheduled and available to move the drums. For the remaining one, no drivers were scheduled or called in, but Fluor Daniel Fernald stated that the drum was moved within 24 hours. Of the 28 type I drums found from January through March 6, 1996, 1 occurred on the weekend. Fluor Daniel Fernald stated that the leak was mitigated the same day, Saturday, and that the drum was moved to the overpack area on Monday.

On Saturday, March 9, 1996, the Ohio EPA visited the site to investigate allegations regarding leaky drums. The review was directed primarily at container storage on the plant 1 pad. The Ohio EPA stated that visual observation of both mixed waste and radiological waste containers stored indoors and outdoors on the plant 1 pad did not reveal any leaking containers.

Because of alleged deficiencies in Fluor Daniel Fernald's performance reporting and financial management systems, we were asked to review certain practices in these systems, including whether key aspects of the contractor's systems were functioning properly and, if not, how such weaknesses could affect DOE's oversight. Because the allegations were generally broad and lacked specificity, we did not investigate specific allegations. Rather, we grouped the allegations into two major areas of concern: (1) control of the changes in the cost and schedule of projects against which the contractor's performance is measured, called the performance measurement baseline, and (2) key practices in the contractor's financial management system in which all of the costs are accumulated. We also provided numerous opportunities for workers and individuals from the Fernald area to provide us with information about possible financial or performance reporting improprieties. (See app. VI for more information on our methodology.) We did not receive specific evidence from workers and other concerned individuals that provided enough detail to warrant expanding our investigation.
Fluor Daniel Fernald complied with some of the financial and performance reporting procedures that we reviewed but not with others, which makes it difficult for both DOE and contractor managers to exercise effective control and/or oversight of the contractor's costs and performance. In controlling the performance measurement baseline, proposals for changes that did not represent new or additional work were appropriately disapproved. The documentation in the contractor's proposals to change the baseline was usually adequate to support the change. However, the impact of changes on work at the site was not as well documented, and the required funding information was not always present. Furthermore, some procedures are not clearly written and do not require certain information that would make review more efficient. In part, these weaknesses may be due to a heavy reliance by the DOE Fernald Area Office's managers on less formal channels of communication with the contractor, such as verbal presentations and phone calls, rather than on formal documentation of all actions.

The financial system will accept charges against accounts that have been properly closed. In addition, the financial system allows closed accounts to be reopened without the approval of the control account managers. Such actions hamper the effective control of accounts by these managers. Because DOE relies on both the baseline and financial information in these systems, such weaknesses complicate DOE's oversight task.

Managers of DOE's Fernald Area Office rely on the data from the contractor's Project Control System to monitor progress on projects, environmental studies, and other activities. Key components of the Project Control System include control of the performance measurement baseline and financial management. Project Control System data, as well as work activities at the site, are organized around eight activity data sheets. They are basically project planning documents that contain summary technical, cost, and schedule information for controlling DOE's funding. Examples of activities on activity data sheets are a soils remediation project, groundwater remediation, and the K-65 silos. Each activity data sheet is the responsibility of a DOE Fernald Area Office activity data sheet manager or team leader in the Office of Environmental Management. At the contractor level, Fluor Daniel Fernald's line managers, called control account managers, handle the day-to-day financial management and reporting processes. The activity data sheet work is further broken down into control accounts that involve detailed tasks generally scheduled in the next 1 to 3 years. Examples of control accounts are remedial construction of the active flyash pile and silo remediation. Each control account is broken down into one or more charge numbers that represent specific tasks or units of work and constitute the lowest measurement level in the Project Control System. Examples of charge numbers include soil washing, waste water treatment, transportation and burial, and silo content remediation construction. Costs for work at the site are accounted for under the appropriate charge number within a specific control account. These charges are then accumulated into higher-level summaries, such as a summary of charges incurred at the activity data sheet level.

The current baseline change control procedures, as implemented, do not provide DOE with appropriate information to effectively oversee execution of the baseline.
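Before turning to the specific weaknesses, the three-level structure just described, in which activity data sheets are broken down into control accounts and control accounts into charge numbers, with costs rolling up from the bottom, can be pictured with a minimal Python sketch. All class and field names here are illustrative stand-ins; the report does not describe the actual schema of the contractor's Project Control System.

from dataclasses import dataclass, field

@dataclass
class ChargeNumber:
    # Lowest measurement level, e.g., "soil washing".
    name: str
    charges: list = field(default_factory=list)  # individual cost postings

    def total(self) -> float:
        return sum(self.charges)

@dataclass
class ControlAccount:
    # Detailed tasks generally scheduled in the next 1 to 3 years,
    # e.g., "silo remediation".
    name: str
    charge_numbers: list = field(default_factory=list)

    def total(self) -> float:
        # Costs accumulate upward from the charge numbers.
        return sum(cn.total() for cn in self.charge_numbers)

@dataclass
class ActivityDataSheet:
    # Project planning document level, e.g., "groundwater remediation".
    name: str
    control_accounts: list = field(default_factory=list)

    def total(self) -> float:
        # Higher-level summary of all charges under this activity.
        return sum(ca.total() for ca in self.control_accounts)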
First, the documentation that we reviewed of changes to the baseline usually met the contractor's own requirements for clarity and completeness, except that the impact of changes was sometimes not well documented and some funding information was missing. Second, procedures related to changes in the baseline are not clearly written and do not require some documentation that would make review more efficient. This may make it difficult for DOE to oversee the cost and schedule performance of projects affected by such changes. Although DOE's Fernald Area Office obtains additional oral explanation from the contractor to fill the gaps in the data, the formal documentation of such items as the impact of baseline changes is sometimes insufficient to support any later review.

The performance measurement baseline governs the expenditure of the site's budget, which was about $266 million in fiscal year 1997, and defines what work has been authorized. It is the standard against which DOE assesses the contractor's cost and schedule performance. The baseline, which is approved by the Fernald Area Office, can be adjusted to reflect changes that are not under the contractor's control, such as a change in the authorized level of funding, the addition or deletion of the scope of work in a project or activity, or changes in costs due to amended labor rates. However, the baseline should not be adjusted when cost or schedule changes occur as a result of the contractor's actions, such as the contractor's failure to meet the approved schedule because of poor performance. DOE's and the contractor's procedures define when and how the baseline should be adjusted.

Change proposals fall into one of five categories—approved, canceled, disapproved, in process, or tabled. From October 1, 1993, to May 31, 1996, Fluor Daniel Fernald processed 985 proposals to change the baseline, of which 699 were approved. Table III.1 shows the number of change proposals in each category by fiscal year.

Fluor Daniel Fernald was in compliance with most of the written site procedures and policies for controlling the baseline but did not always comply with some information requirements. The contractor maintains records of all proposals to change the baseline and their dispositions. We found those records to be accurate and reliable. Fluor Daniel Fernald had the required documentation for all but one of the randomly selected baseline change proposals we reviewed, and the documentation was usually adequate to support the need for changing the baseline. Of the 114 change proposals we reviewed, we found 4 instances in which the documentation indicated that the change did not represent new or additional work. All four of those proposals were appropriately disapproved. In those instances, the baseline change approval process was functioning properly. However, on the basis of our sample, we estimate that about 12 percent of the baseline change proposals were missing some of the required funding information.

The change proposal form is the formal record of the proposed change, although the manager requesting the change normally appears before the approving board to defend the proposal and answer questions. Site procedures require that each proposal to change the baseline contain clear and concise statements of the scope of the change, the justification or purpose of the change, and the impact of the change on activities at the site. The procedures also require that the sources of funds for additional work be identified on the change form.
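These documentation requirements amount to a simple completeness check on each change proposal form. A minimal sketch of such a check follows; it assumes a proposal is represented as a dictionary of named fields, which is a hypothetical structure for illustration, not the site's actual form.

REQUIRED_FIELDS = ("scope", "justification", "impact", "funding_sources")

def missing_fields(proposal):
    """Return the required narrative fields that are absent or empty.

    `proposal` is assumed to be a dict mapping field names to text;
    the representation is illustrative, not the site's actual form.
    """
    return [f for f in REQUIRED_FIELDS
            if not str(proposal.get(f, "")).strip()]

# Example: a proposal lacking funding information would be flagged,
# consistent with the roughly 12 percent of sampled proposals that
# were missing some required funding information.
example = {"scope": "Add soil washing task", "justification": "New work",
           "impact": "None on other activities", "funding_sources": ""}
print(missing_fields(example))  # ['funding_sources']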
We estimate that a few of the baseline change proposals did not contain sufficient narrative for a reviewer to understand the scope (about 3 percent), justification (about 8 percent), and/or impact (about 16 percent) of the change without additional explanation. In general, the documentation was better on change proposals that were approved than on those that had been disapproved. As previously stated, we estimate that about 12 percent of the proposals did not include all of the required funding information. However, we noted that documentation of the impact of changes and of funding sources was improved in the proposals for fiscal year 1996.

Some written procedures are unclear, such as those governing the approval level required for certain changes to the baseline, and do not require some documentation that would make review more efficient. For example, neither the contractor's nor the Fernald Area Office's written procedures require that the reasons for disapproval of proposals to change the baseline be formally documented on the proposal form or that changes to supporting documents be clearly marked.

When the baseline needs to be adjusted, a baseline change proposal is prepared by the responsible control account manager. The party responsible for approving a change proposal depends on the cost or schedule impact of the change. Currently, baseline changes within an activity data sheet with a net impact of less than $25,000 can be prepared and approved by the control account manager in charge of the activity. However, the control account managers cannot make changes that affect more than one activity data sheet without the contractor's and/or DOE's approval. Baseline changes with a net cost impact of less than $250,000 or less than 30 days schedule impact can be approved and implemented without DOE's concurrence. (See table III.2.) Baseline changes over those thresholds can be approved only by DOE, either at Fernald or at headquarters. Baseline changes below the threshold for DOE's approval are not formally reviewed by DOE personnel but are made available to them and can be questioned. However, Fernald Area Office officials were not able to identify any instances in which they had instructed the contractor not to implement a change on the basis of these “informational” copies. New or changed work scope is generally approved once the baseline change proposal has been approved at the highest level necessary. As a result of a recommendation made by the Special Project Team, the Fernald Area Office is in the process of revising the threshold levels, as shown in table III.2. (Notes to table III.2: ADS = activity data sheet; FDF = Fluor Daniel Fernald; FEMP = Fernald Environmental Management Project. For fiscal year 1993 through August 1994, this threshold was $1 million.)

The site's written procedures for determining the approval level are not clear, and Fernald Area Office and contractor officials agree. In general, the approval level is determined by the net change in costs for all fiscal years covered by the proposal, although there are exceptions. For example, if a change proposes moving $50,000 from a management reserve account, which is not part of the baseline, to support added work scope in the baseline, the approval level for the transfer is not determined by the net change. On the other hand, if a change proposes moving the same amount from one charge number within a control account to another charge number in the same control account, the net change is used to determine the approval level.
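The general routing rules quoted above can be sketched as a small decision function. This is a simplification for illustration only: it mirrors the text's thresholds and its literal "or" between the cost and schedule tests, it ignores the exceptions discussed here and below, and the function and parameter names are our own invention, not the site's procedures.

def approval_level(net_cost_impact, schedule_impact_days, within_one_ads):
    """Return the lowest approval level for a baseline change under the
    thresholds quoted in the text (the pre-revision values).
    """
    # Changes within a single activity data sheet under $25,000 can be
    # prepared and approved by the responsible control account manager.
    if within_one_ads and abs(net_cost_impact) < 25_000:
        return "control account manager"
    # Under $250,000 net cost impact or under 30 days schedule impact,
    # the contractor may approve without DOE's concurrence (the text's
    # "or" is mirrored literally here).
    if abs(net_cost_impact) < 250_000 or schedule_impact_days < 30:
        return "contractor, without DOE concurrence"
    return "DOE (Fernald Area Office or headquarters)"

# Example: a $100,000 change spanning two activity data sheets could be
# approved by the contractor; a $300,000 change with a 60-day schedule
# impact would require DOE's approval.
print(approval_level(100_000, 0, within_one_ads=False))
print(approval_level(300_000, 60, within_one_ads=True))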
As another exception, if a change proposal lists costs for more than 1 fiscal year, approval is usually determined by adding the impact across all fiscal years. However, in some cases, the cost information for future fiscal years is presented only for informational purposes, and approval is determined by the cost change for only the current fiscal year. Because the criterion used to determine which level of approval is needed is not fully documented in the site's written procedures, change proposals moving similar amounts of resources may be approved at different levels of review.

The current procedures do not require that supporting documentation attached to the change proposals have the changes clearly marked to facilitate review. For example, when the scope of work for a project is being changed, forms detailing what work would be authorized if the proposal were approved are revised. However, the work scope forms had no indication of what was being changed. The identification of the change could be made only by comparing the revised form with the previous version. For some proposed changes, that task would not be onerous. However, for others affecting large segments of the site's work, the task could involve reviewing a large volume of documents (e.g., one rebaselining proposal had over 1,000 pages of supporting documentation). On occasion, one DOE manager has asked the contractor to mark the changes for rebaselining proposals.

Current procedures also do not require that the reasons for disapproval be documented on the change proposal, even though proposals that are disapproved at one level can be appealed to the next higher level board. Without such information, the official record is incomplete and less useful for internal and external reviewers who are not present at board meetings. The Fernald Area Office agrees that documenting the reasons for disapproval would aid the review of appealed proposals.

DOE's Fernald Area Office officials agreed that clear and complete information on the change proposals would facilitate review. The incompleteness of the formal documentation highlights the degree to which the Fernald Area Office's managers rely on informal and verbal communications to support decision-making. However, the information provided through these informal channels is not part of the official record and, therefore, is not readily available for subsequent internal or external review. Improved procedures and better documentation would facilitate DOE's oversight process and result in less reliance on informal communication for decision-making. Such changes would also provide a more complete official record of the changes that are made to the baseline.

We did not find evidence in the accounts we reviewed to substantiate the allegation that charges were made against accounts that had no budget. Allegations were made that the contractor was performing unauthorized work on the basis of internal performance reports that showed actual charges against accounts that appeared to have no budget or in which actual charges exceeded the amounts budgeted. Although we identified accounts in such reports that may appear to have no budget, the figures in the reports do not represent the amount of funds available in a given account. Rather, they reflect the agreed-upon performance goal for a given activity in a particular fiscal year.
Therefore, the figures provide information on how the contractor performed against the goals rather than evidence of unauthorized charges in accounts that have no funds. All of the accounts that we reviewed that appeared to have no budget (48 of 503) in fiscal years 1994, 1995, and 1996 through May 31, 1996, did, in fact, have budgets.

The contractor complied with most of the financial procedures and controls that we reviewed but did not comply with some others. In compliance with standard procedures, nearly all charges in the contractor's financial system occurred when accounts were properly opened for such charges. However, the contractor's financial system has accepted some charges against accounts that the control account manager had closed and has allowed some accounts to be reopened without the required control account manager's approval. Thus, control account managers, who are responsible for managing accounts and verifying the accuracy of charges, may not always be knowledgeable about the costs they are responsible for controlling. This can make it difficult for the managers to exercise effective control over costs and thus to ensure the accuracy of the data that DOE uses to assess the contractor's performance.

Accounts at Fernald relate to discrete segments of work, such as the treatment of waste water in a soil remediation project. When work is scheduled to begin on such a segment, a control account manager requests that an account be opened, thus allowing costs for the work to be charged against the account. When the work on the segment is completed and the control account manager determines that all related charges have been made, the control account manager closes the account. This procedure is meant to ensure that a person knowledgeable about the scope of work and the related costs monitors and controls the charges that are made against the account. Control account managers discharge their duties by day-to-day oversight of work performed; by reviewing standard reports on labor, materials, and subcontract charges incurred to perform the work covered by their accounts; and by verifying charges against their accounts.

Nearly all charges in the contractor's financial system occurred when accounts were properly opened in compliance with standard procedures. However, a small percentage of charges were routinely made to accounts after the control account managers had closed them, making effective control of the accounts difficult. This percentage ranged from 1 to 2 percent of the several hundred thousand charges that Fluor Daniel Fernald processes annually to accumulate costs in its authorized accounts. The contractor recorded about 504,000 charges in fiscal year 1994, more than 650,000 in fiscal year 1995, and more than 512,000 in fiscal year 1996 through July, all of which we reviewed. According to our analysis, the percentage of charges that occurred when accounts were not properly open to accept them rose from 0.9 percent in fiscal year 1994 to 2.4 percent in fiscal year 1996. Although the percentage of such charges is low, the charges have occurred on a regular basis. The dollar value of these charges ranged from a charge of $905,902 to a credit of $8 million. Furthermore, accounts can have multiple openings and closings as well as numerous charges after they have been closed.
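The open/close control described above lends itself to a simple automated check: a charge is suspect if its posting date falls outside every interval during which the account was open. The following sketch illustrates the idea; the data structures and function names are hypothetical stand-ins for the contractor's records, not its actual system.

from datetime import date

def flag_out_of_period_charges(charges, open_intervals):
    """Return charges posted while the account was not open.

    `charges` is an iterable of (post_date, amount) pairs, and
    `open_intervals` is a list of (opened, closed) date pairs, with
    closed=None meaning the account is still open. Both structures
    are illustrative, not the contractor's actual schema.
    """
    def account_open(d):
        return any(opened <= d and (closed is None or d <= closed)
                   for opened, closed in open_intervals)

    return [(d, amount) for d, amount in charges if not account_open(d)]

# Example: an account opened in January 1995 and closed at the end of
# March 1995 would flag a charge posted that July.
intervals = [(date(1995, 1, 9), date(1995, 3, 31))]
charges = [(date(1995, 2, 14), 1200.00), (date(1995, 7, 3), 905.50)]
print(flag_out_of_period_charges(charges, intervals))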
Two accounts that we judgmentally selected, for example, had multiple openings and closings (three in one case and five in the other) and showed numerous charges after the accounts were closed (363 charges in one case and 178 in the other). Therefore, once an account has been entered into the system, it requires constant monitoring to ensure that only appropriate charges are added to it after the control account manager has closed out the account.

The system will accept charges to closed accounts because, according to contractor officials, accounts are not considered permanently closed, in order to allow for adjustments to be made. According to Fluor Daniel Fernald accounting personnel, charges might be made to a closed account when sales tax is allocated to accounts, which is done monthly rather than after each invoice is posted, and when employee benefits are periodically allocated. In addition, invoices may be entered into the system when they are received but not charged against the accounts until the invoice due date, when they are paid. The types of transactions posted to closed accounts have not changed over the period.

Charges are categorized in one of three ways—labor, materials, or subcontract costs. The highest error rates in each year occurred in transactions for the purchase of materials. Although most of the 18 control account managers we interviewed told us that they focus on monitoring labor charges, the error rate for labor transactions rose slightly during the period. However, control account managers were generally satisfied with the timeliness of corrections to their accounts when they identified erroneous charges. Although the financial system accepted charges against closed accounts, our tests showed that it appropriately did not accept charges against accounts that were not in the system. That is, although the system would accept charges against an authorized account that had been closed, it would not do so against an unauthorized or fictitious account that had not been properly entered into the system.

In addition to allowing charges to be made to closed accounts—without reopening them—the contractor's financial system at times allowed accounts to be reopened for charges without the required control account manager's approval. Fluor Daniel Fernald's procedures require that the responsible control account manager sign an open/close form for accounts to be opened or closed. Because control account managers are responsible for maintaining control over the performance of their accounts, they need to be aware of any charges to their accounts that affect the cost, scope, or schedule of work. On the basis of our review of a sample of documents to open and close accounts, we estimate that 46 percent were missing at least one of the required documents. In addition, an account was occasionally reopened solely on the basis of an electronic mail message from the Accounting Division requesting that the account be reopened. According to Fluor Daniel Fernald officials, this was done to facilitate the process of making corrections to charges already in the system, such as a labor charge posted to an incorrect account. Three of the 18 control account managers we interviewed told us that, contrary to procedures, their accounts were reopened without their approval after they had determined that all charges had been received and had formally requested that the accounts be closed.
Several control account managers told us that they were not aware that their accounts had been reopened until they saw new charges appear in their reports. The reopening of accounts without the control account managers' awareness and approval may make it difficult for the managers to effectively control what is charged to their accounts.

Recent reviews at Fernald made numerous recommendations and also identified some recurring weaknesses. DOE's managers have updated their procedures and directed Fluor Daniel Fernald to make changes to address the weaknesses identified by the reviews. However, the impact of some actions will take time to assess, and other actions are not yet complete. DOE's Special Project Team and the DOE Chief Financial Officer's team, reporting in March 1996, made more than 40 recommendations for improving financial and performance management. The Fernald Area Office's managers have been tracking progress in implementing the recommendations, which included developing an integrated oversight plan for the site, strengthening the Fernald Area Office's oversight of baseline changes, and making more effective use of the Ohio Field Office's financial oversight resources. Some of the recommendations have not yet been implemented.

Furthermore, we, the Special Project Team, and the Office of the Chief Financial Officer found that some previously identified problems have continued to occur. For example, a functional assessment of the contractor's Project Control System performed in October 1994 by DOE's Office of Field Management found that the system generally met DOE's requirements but made a number of recommendations for improvements to the system. However, several of these recommendations have not been effectively implemented. We found that the Fernald Area Office did not require the contractor to prepare a formal corrective action plan and has not performed a follow-up review to ensure that the recommendations from the 1994 assessment were acted upon. Contractor officials stated that most of the recommendations have been addressed through their continuous improvement program. However, because there was no formal corrective action plan, it is difficult to determine exactly what was done or how effective the actions were in resolving the problems cited. For example, the Office of Field Management recommended that the Fernald Area Office conduct comprehensive assessments of the contractor's accounting system and compliance with applicable procedures. While the Fernald Area Office has ascertained that the contractor has written procedures governing key components of the Project Control System, such as opening and closing control accounts and charge numbers, it has not assessed the logic or implementation of those procedures. The Chief Financial Officer's review reiterated this recommendation in March 1996. However, the Fernald Area Office has not performed the assessments and does not plan to do so until fiscal year 1998 at the earliest. Thus, the review will occur considerably after the date on which DOE will have to decide whether to offer Fluor Daniel Fernald's contract for competition or renew it. Furthermore, one recommendation was to follow the baseline change control procedure that calls for the prompt updating of the baseline when fixed-price subcontracts are negotiated.
The Ohio Field Office’s Office of the Chief Financial Officer has been conducting an audit of how well the contractor has followed that written procedure in general and has issued a report on one instance in which it was not followed. In that case, the contractor entered into a subcontract to dismantle Plant 7 at a cost of about $5 million less than the estimated amount included in the baseline. Subsequently, the contractor did not process a proposal to change the baseline. As a result, the contractor’s award fee for the period was based on the higher amount. The contractor later agreed to pay back $135,000 of the fee received in that period. DOE prepared a plan in early 1996, on the basis of future budget projections, for cleaning up the Fernald site in 10 years (ending in fiscal year 2005) and at a cost of about $2.387 billion. Subsequently, because of reduced budget projections, DOE prepared and approved a replan that concluded that the Fernald cleanup will take 13 years and cost about $2.374 billion (or about $13 million less). A number of assumptions account for the $13 million difference, such as a substantial cost reduction if more Fernald waste is disposed of on-site. The 3-year slippage will require renegotiation of certain EPA-mandated cleanup deadlines. As recently as early 1995, DOE estimated that it would take 25 years to clean up the Fernald site. Later in 1995, however, DOE headquarters proposed the possibility of accelerating the Fernald cleanup. Specifically, DOE headquarters advised Fernald Area Office managers to assume a budget of $256 million for fiscal year 1996 and $276 million for years thereafter, using a funding growth equal to inflation. In response to that guidance, Area Office managers prepared a plan in early 1996 that estimated that the site could be cleaned up in 10 years at a cost of about $2.387 billion. Subsequently, DOE headquarters staff reviewed and approved the plan. In June 1996, DOE advised Fluor Daniel Fernald that funding for Fernald cleanup may be less than anticipated. Specifically, DOE indicated that actual funding levels for fiscal years 1997 and 1998 may be $266 million and $264 million, respectively. On the basis of that information, DOE requested that Fluor Daniel Fernald prepare an analysis that would identify any potential impacts to the 10-year plan. In response, Fluor Daniel Fernald initially estimated in July 1996 that it would require an additional year and approximately $120 million more to clean up the Fernald site. In August 1996, Fluor Daniel Fernald provided DOE with more specific recommendations on a 10-year replan strategy based on the lower funding levels provided. Specifically, the contractor recommended a path that called for the completion of work on four of the five operable units by the end of fiscal year 2005. Fluor Daniel Fernald estimated that the completion of work on operable unit 4 would take an additional 2 to 5 years. In October 1996, DOE approved Fluor Daniel Fernald’s recommendations with one modification. The approved replan extends work completion on operable unit 4 by 3 years to a total of 13 years, or to mid-fiscal year 2008. Work on operable unit 4 was extended because of technical uncertainties associated with on-site waste vitrification. In November 1996, Fluor Daniel Fernald provided us with a preliminary analysis of the cost to clean up Fernald under the approved 10-year replan. 
The analysis showed that the total cost to clean up Fernald by fiscal year 2008 will be about $2.374 billion (or about $13 million less than under the original 10-year plan). A number of assumptions, some representing cost increases and others representing cost decreases, account for the $13 million difference. (See the discussion below.) Fluor Daniel Fernald officials also advised us that more definitive cost information, particularly for fiscal years 1999 and beyond, will be available in early 1997. DOE officials said that they are still committed to completing Fernald's cleanup by 2005, which could be accomplished by using advanced technologies or other means to improve the current schedule.

The original 10-year plan and the 10-year replan differ in several assumptions. For instance, the original 10-year plan assumed compliance with all EPA-mandated deadlines to bring the site into compliance with the Resource Conservation and Recovery Act and other regulatory requirements. However, the 10-year replan reflects a 3-year slippage in the cleanup of operable unit 4. According to DOE officials, this slippage will result in the need to renegotiate certain EPA deadlines. In addition, the original 10-year plan assumed the design and construction of a single full-scale vitrification plant in parallel with pilot plant operations. (See app. I.) The approved 10-year replan assumes that rather than a single full-scale plant, several smaller-capacity vitrification units will be built after pilot plant operations are concluded. Fluor Daniel Fernald officials estimated that this approach will add about $38 million to the cost of Fernald's cleanup. Furthermore, the original 10-year plan assumed that all of the soil and debris associated with the former production area, also known as operable unit 3, would be shipped to DOE's Nevada Test Site. The approved 10-year replan assumes, instead, that most of this soil and debris will meet the waste acceptance criteria for the planned on-site soil disposal facility and will be placed in that facility. Fluor Daniel Fernald officials estimated a reduction of about $48 million in the Nevada Test Site's disposal costs if that occurs. Finally, the original 10-year plan omitted the costs associated with groundwater collection and treatment beyond 2005. A June 1996 DOE complexwide cleanup report estimated that Fernald groundwater collection and treatment beyond 2005 would continue for another 13 years and cost about $128 million. The approved 10-year replan assumes that because of aggressive extraction and reinjection, groundwater collection and treatment can be completed by 2005.

To obtain information on the major allegations reported by the Cincinnati Enquirer and the status of the investigations of these allegations, we began our work by grouping the allegations under general categories and interviewing the newspaper's staff to develop a perspective on the significance of these categories. We also interviewed DOE officials and Fluor Daniel Fernald officials responsible for investigating the allegations to determine the extent to which some potential problems had already been studied and the status of their investigations. Furthermore, we discussed the potential problem areas with state regulatory officials and with representatives of citizen advisory groups and Fernald trade unions to assess the general state of affairs at the site.
Using this information, we proposed and obtained approval from our congressional requesters to focus the review on the allegations concerning (1) the vitrification pilot plant and uranyl nitrate hexahydrate projects, (2) safety and health incidents and DOE's oversight of the contractor's safety and health activities, and (3) the integrity of the major financial and performance management information systems used by DOE managers. We then obtained detailed information on these allegations and on DOE's and the contractor's programs in these areas to assess how DOE's management and oversight ensure that the contractor effectively implements cleanup activities and fulfills DOE's safety and health requirements at the site. As agreed with our congressional requesters, in focusing our work, we included only information contained in newspaper articles printed on or before May 31, 1996. In addition, we excluded several areas of allegations from further examination, primarily because those areas had already been investigated by an independent organization, such as DOE's Office of Inspector General, or because there was a general consensus among those we interviewed that the area was not a major problem. These areas included allegations concerning (1) DOE's workforce reduction activities and the reimbursement of the contractor's travel costs, (2) the contractor's plan to build a full-scale vitrification plant and the contractor's studies of the use of radium contained in waste that DOE planned to vitrify, (3) modifications to the contractor's computer programs used to report performance statistics, and (4) support and overhead costs at the site.

Throughout the review, we invited individuals who might know about mismanagement at Fernald to confidentially provide us with supporting information. For example, we rented a post office box and met with representatives of employee groups to identify individuals who might have information for us. The Cincinnati Enquirer also published information about our review and ways to contact us by phone or mail. As a result of these efforts, we met in Cincinnati with individuals who had been quoted by the newspaper and met with several contractor employees at Fernald. These individuals generally presented anecdotal information that helped explain the background for many of the allegations or information about grievances and other employee relations problems that directly involved them. We used this information to the extent possible to ask follow-on questions and obtain documents about the allegations from DOE and Fluor Daniel Fernald.

The following provides additional detail on the scope and methodology of our work concerning DOE's VITPP and UNH projects, the Department's safety and health program and alleged incidents at the site, and the Department's oversight of financial and performance management systems at Fernald. We performed this work from March 1, 1996, to January 31, 1997, in accordance with generally accepted government auditing standards. To obtain detailed information on DOE's management and oversight of the VITPP project, we reviewed DOE's December 1995 investigation of operable unit 4 activities, which focused on the pilot plant project, and interviewed DOE officials who had either participated in the investigation or were responsible for managing past and current activities at VITPP.
We tested the validity of this information by reviewing DOE's and Fluor Daniel Fernald's summaries of progress reports and briefings provided to DOE and the contractor's management during the design and construction of the pilot plant and by reviewing correspondence from DOE site managers, the contractor, state and federal regulators, and DOE headquarters managers during this time. We also reviewed (1) the findings of DOE's March 1996 special project team report on VITPP and other site activities discussed by the Cincinnati Enquirer, (2) the DOE-sponsored January 1996 value engineering study that discussed alternatives to DOE's current plans for the pilot and full-scale vitrification plants, and (3) the Department's correspondence to state and federal regulators that identified schedule delays at the pilot plant and DOE's response to these delays. We discussed the relationship between the pilot plant's current problems and those reported by the newspaper with DOE's program manager for VITPP and with senior DOE site managers.

To obtain detailed information concerning the UNH project, we reviewed the project-related findings of DOE's March 1996 report on the allegations and project files maintained by DOE and Fluor Daniel Fernald. We also interviewed key managers and construction workers involved in the project. These included (1) DOE's and Fluor Daniel Fernald's principal project managers; (2) the contractor's deputy project manager, construction contracts manager, and quality assurance inspector who had worked on the project; and (3) construction pipe fitters having experience with UNH.

To determine how DOE's management and oversight processes at Fernald ensure that Fluor Daniel Fernald is fulfilling DOE's safety and health requirements, we obtained and reviewed (1) DOE's safety and health procedures and guidelines applicable to the site, (2) the assessments of Fluor Daniel Fernald's safety and health activities done by DOE's Fernald Area Office, and (3) the assessments of the Fernald Area Office's safety- and health-related programs done by the Defense Nuclear Facilities Safety Board and by DOE headquarters' Office of Environment, Safety and Health (ES&H) and Office of Environmental Management (EM). We also interviewed officials of the Defense Nuclear Facilities Safety Board, DOE's Ohio Field and Fernald Area Offices, and DOE headquarters' ES&H and EM about the management and oversight processes. To determine the number of significant safety and health problems at the Fernald site, we reviewed reports from DOE's Occurrence Reporting and Processing System that Fluor Daniel Fernald prepared from January 1, 1993, to February 12, 1996. To obtain additional information about safety and health problems at the site, we obtained and reviewed (1) assessments, procedures, orders, surveys, and other documents prepared by DOE's ES&H, DOE's Fernald Area Office, Fluor Daniel Fernald, and outside consultants and (2) the safety-related findings of DOE's March 1996 investigation of the allegations. We also interviewed the Fernald Area Office's safety and health officials at Fernald about their safety and health activities.
To assess Fluor Daniel Fernald's performance and financial systems at Fernald, we focused on three major areas: (1) the control of the performance measurement baseline against which Fluor Daniel Fernald's performance is measured, (2) internal controls applicable to financial management practices, and (3) how these aspects of Fluor Daniel Fernald's internal controls could affect the effectiveness of the Fernald Area Office's oversight of the contractor's activities and performance. To conduct this work and to gather information on DOE's and the contractor's response to previous studies, we interviewed numerous senior DOE and Fluor Daniel Fernald officials. These officials included the Manager, Acting Chief Financial Officer, and Team Leader of the Chief Financial Officer's Financial Review Group within DOE's Ohio Field Office and the Director, Deputy Director, Associate Director for Environmental Management, Associate Director for Safety and Assessment, and several Activity Data Sheet Managers of DOE's Fernald Area Office. At Fluor Daniel Fernald, we interviewed the president; the director and staff of the project integration and controls division; the director of the environmental management division; senior officials in the accounting division; the change control manager; and several control account managers.

We identified a universe of 985 baseline change proposals from fiscal year 1994 through May 31, 1996. From that universe, we selected a stratified random sample of 176 baseline change proposals for a detailed review of compliance with Fluor Daniel Fernald's and the Fernald Area Office's written procedures for the preparation and processing of baseline changes. Our sample was stratified by fiscal year and type. (See table VI.1.) The sample included all of the disapproved change proposals in each year and all of the change proposals still in process as of May 31, 1996. Of the 176 proposals in the sample, 115 had completed forms; 59 of the proposals, in the canceled and in-process categories, had no completed forms at the time our sample was drawn. Lastly, we identified two proposals as missing from our data set. Because of the time that elapsed before we discovered they were missing, we could not obtain data comparable to the data from the rest of the sample, so we dropped them from the analysis.

Since we used a sample (called a probability sample) of baseline change proposals to develop our estimates, each estimate has a measurable precision, or sampling error, which may be expressed as a plus/minus figure. A sampling error indicates how closely we can reproduce from a sample the results that we would obtain if we were to take a complete count of the universe using the same measurement methods. By adding the sampling error to and subtracting it from the estimate, we can develop upper and lower bounds for each estimate. This range is called a confidence interval. Sampling errors and confidence intervals are stated at a certain confidence level—in this case, 95 percent. For example, a confidence interval at the 95-percent confidence level means that in 95 out of 100 instances, the sampling procedure that we used would produce a confidence interval containing the universe value that we are estimating. (See table VI.2.)
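To illustrate how a sampling error and confidence interval of this kind are computed, the sketch below estimates a proportion from a sample and derives a 95-percent confidence interval using the normal approximation. It is a minimal illustration with hypothetical counts, and it assumes a simple random sample; the estimates in this report came from a stratified (and, for charge numbers, clustered) design, whose sampling errors must be computed with stratum-weighted estimators and are typically larger than this simple formula suggests.

    import math

    def proportion_ci(hits: int, n: int, z: float = 1.96):
        """Point estimate and 95% confidence interval for a proportion,
        using the normal approximation p +/- z * sqrt(p * (1 - p) / n)."""
        p = hits / n
        sampling_error = z * math.sqrt(p * (1 - p) / n)  # the plus/minus figure
        return p, max(0.0, p - sampling_error), min(1.0, p + sampling_error)

    # Hypothetical counts: 80 of 176 sampled proposals found noncompliant.
    estimate, lower, upper = proportion_ci(80, 176)
    print(f"estimate {estimate:.1%}, 95% CI [{lower:.1%}, {upper:.1%}]")
    # -> estimate 45.5%, 95% CI [38.1%, 52.8%]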
In addition, we reviewed the entire database of 985 change proposals for indications that several small proposals may have been processed instead of one larger proposal that would have required DOE's approval. We examined our sample of baseline change proposals to assess whether the narrative description of the change, the justification for the change, and the impact of the change were clear and understandable without additional verbal explanation. To do this, we examined the formal documentation for these changes, including any supporting documents. We also checked whether the source of additional funding was identified on the documents as required by Fluor Daniel Fernald's Change Control Procedure (SSOP-5030). Finally, we compared the data shown on the sample change proposals with the data recorded in Fluor Daniel Fernald's change proposal database for accuracy and completeness.

To determine whether actual costs were being charged to accounts without associated budget allocations, we examined the contractor's cost performance report data from fiscal year 1994 through May 31, 1996. We identified all accounts with charges of at least $10,000 for which the budget-at-completion field was zero and discussed the reasons for these occurrences with Fluor Daniel Fernald's project controls management personnel.

To test Fluor Daniel Fernald's procedures for opening and closing control accounts and charge numbers, we reviewed the available documentation of account openings and closings. We selected a random sample of 87 control accounts and reviewed all of the 239 associated charge numbers. Since we used a probability sample of control accounts to develop our estimates, each estimate has a measurable precision, or sampling error, which may be expressed as a plus/minus figure. Our estimate that 46 percent of the charge numbers were missing at least one of the required open or close documents has an associated sampling error of plus or minus 12 percentage points. In addition, we compared the available documentation with the contractor's computerized charge master file (a record of every time that each account was opened or closed) to determine whether the documentation required under the contractor's procedures for opening and closing accounts was complete. On two occasions, we observed the contractor's personnel locating the required documentation for specific accounts. On another occasion, at our request, we observed contractor officials attempt to enter transactions against fictitious accounts to verify that the system would not accept charges to accounts not already in the system. Finally, we interviewed 18 of the contractor's control account managers about their experiences with opening, closing, reopening, and correcting accounts. We selected these control account managers on the basis of the number of open accounts they were responsible for as of May 1996, as reported in the contractor's charge master file. We did this to ensure that we interviewed control account managers from each activity data sheet (or major work area) at the site.

To test the contractor's internal control procedures for accumulating actual costs in its accounting and performance reporting systems, we examined a database of Fluor Daniel Fernald's accounting transactions from fiscal year 1994 through July 31, 1996. The database originally contained 737,055 records for fiscal year 1994, 882,965 records for fiscal year 1995, and 650,189 records for fiscal year 1996. We dropped 233,201 of the fiscal year 1994 records, 228,723 of the fiscal year 1995 records, and 138,168 of the fiscal year 1996 records because they represented general ledger accounting transactions rather than actual costs. This left 503,854 records for fiscal year 1994, 654,242 for fiscal year 1995, and 512,021 for fiscal year 1996. We compared each of those records against the charge master data detailing when each control account and charge number was properly opened to accept charges and identified all instances in which the transaction date fell outside the valid period for charges to be processed against the account; a simplified sketch of this test appears below. We interviewed Fluor Daniel Fernald personnel in the project controls and integration and accounting divisions to ascertain how and why charges were made to accounts that were closed.
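The sketch below shows the kind of date-window test we applied. The record layouts and account numbers are hypothetical, and the logic is simplified, assuming the charge master file can be reduced to a list of open/close intervals per account, with an open-ended interval for accounts not yet closed. A charge is flagged when its posting date falls outside every open interval for its account, and a charge to an account absent from the file is always flagged.

    from datetime import date

    # Hypothetical charge master: each account maps to its open/close
    # intervals; None as the close date marks an account still open.
    charge_master = {
        "4711-01": [(date(1994, 1, 10), date(1994, 6, 30)),
                    (date(1994, 9, 1), None)],  # closed, then reopened
    }

    def charge_is_valid(account: str, posted: date) -> bool:
        """True only if the posting date falls inside one of the
        account's open intervals; unknown accounts are never valid."""
        for opened, closed in charge_master.get(account, []):
            if opened <= posted and (closed is None or posted <= closed):
                return True
        return False

    transactions = [
        ("4711-01", date(1994, 7, 15)),  # posted while the account was closed
        ("4711-01", date(1994, 3, 2)),   # posted while the account was open
        ("9999-99", date(1994, 3, 2)),   # account not in the charge master
    ]

    flagged = [t for t in transactions if not charge_is_valid(*t)]
    print(f"{len(flagged)} of {len(transactions)} charges fell outside a valid open period")
    # -> 2 of 3 charges fell outside a valid open period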
To assess management support for following internal control procedures, we interviewed 18 control account managers. We asked them about their experience; their training; overall management support for following procedures; their tools and techniques for reviewing charges to their accounts and resolving mischarges; areas for improvement in project management; whether the problems the company identified in fiscal year 1994 with mischarges to accounts continue; and whether they have been asked to do work in advance of formal authorization.

Robert E. L. Allen, Jr., Assistant Director
Robert J. Baney, Senior Evaluator
Jacqueline Bell, Senior Evaluator
Judith L. Guilliams-Tapia, Senior Evaluator
Casandra D. Joseph, Senior Evaluator
Robert P. Lilly, Senior Evaluator
Anne M. McCaffrey, Senior Evaluator
Delores E. Parrett, Senior Evaluator
Ilene M. Pollack, Senior Evaluator
Pursuant to a congressional request, GAO provided information on: (1) the extent to which the Department of Energy (DOE) is providing effective management and oversight of two key cleanup projects at its Fernald site--the vitrification pilot plant project and the uranyl nitrate hexahydrate project--that were reported on in the Cincinnati Enquirer; (2) DOE's oversight of safety and health activities at the site; (3) the contractor's compliance with certain performance and financial system procedures; and (4) DOE's overall contracting and management initiatives and how they may resolve any problems identified at Fernald. GAO noted that: (1) DOE has not exercised adequate management and oversight of the vitrification and uranyl projects or of the contractor's safety and health activities; (2) for example, DOE provided limited oversight during the early stages of the two projects and did not prepare many of the required project management documents for the uranyl project; (3) these and other DOE oversight weaknesses contributed to a total of $65 million in estimated cost overruns and almost 6 years of schedule slippages for the two projects; (4) in the safety and health area, from 1993 to 1995, serious concerns were raised about DOE's ability to ensure the contractor's compliance with safety and health requirements; (5) for example, DOE did not have adequate plans to supervise the contractor's activities and was not conducting the required safety and health assessments; (6) in the performance and financial management area, some of the contractor's practices for maintaining key systems make it difficult for DOE and the contractor to exercise effective control and oversight of the contractor's costs and activities; (7) DOE recognizes that contracting and management problems exist throughout the Department and is implementing major reforms to change the way it does business at Fernald and other sites; (8) at Fernald, DOE has made some improvements in the areas that GAO reviewed; (9) for example, in project management, DOE has increased the frequency with which it meets with the contractor to discuss the status of its most important projects; (10) in the safety and health area, DOE has increased the number of assessments and is making other changes that are not far enough along to evaluate; (11) DOE has directed the contractor to make changes to address weaknesses identified in recent reviews of the contractor's financial and performance management; (12) these actions address some of the weaknesses GAO identified; (13) however, it is too soon to assess the overall effectiveness of these improvements and reforms; and (14) their implementation at Fernald will be a real test of DOE's reforms.
The Army has around 97,000 "medium tactical wheeled vehicles" (about 57,000 5-ton trucks and 41,000 2-1/2-ton trucks) in its fleet. The M939 accounts for more than half of its 5-ton trucks. The truck is used to carry personnel or pull equipment under all weather and road conditions, including rain, snow, ice, unpaved roads, sand, and mud (see fig. 1).

The active Army uses formal and informal programs to train 5-ton truck drivers. The formal program is aimed at military personnel whose official primary occupation will be "88M Motor Transport Operator"—or truck driver. The program lasts 6 weeks and is taught in schools at Fort Leonard Wood, Missouri, and Fort Bliss, Texas. Fort Leonard Wood trains about 90 percent of all 88M students. Fort Bliss for the most part trains the "overflow" of students that Fort Leonard Wood cannot accommodate. The formal instruction program calls for about 1 week in the classroom and 5 weeks of hands-on training. Students who complete the program do not immediately receive a license to drive a 5-ton truck; they are licensed at their next duty station after undergoing additional training and testing there. The Army Transportation Center and School at Fort Eustis, Virginia, is responsible for the content of the instruction program used by the formal training schools. It aligns under the Army Training and Doctrine Command at Fort Monroe, Virginia.

According to Army officials, informal programs are taught at installations or units that need occasional truck drivers but are not authorized any, or enough, 88M drivers to handle their needs. Occasional drivers do not drive trucks as their primary occupation; they do so on a part-time or as-needed basis. Informal programs are usually 40 to 120 hours long and combine classroom and driving time. Graduates are not automatically licensed and must usually meet additional driving and testing requirements set by their units. Occasional drivers receive the same license as 88M drivers and, accordingly, may be required to perform the same driving maneuvers.

The Army Reserve trains both Reserve and National Guard 88M drivers using a two-part program that contains the same instructional material as the formal program. The first part (81 hours) is conducted at the soldier's home station during weekend drills. The second part (120 hours) is usually conducted at a Reserve training center during a 2-week active duty session. Like active Army truck drivers, program graduates must undergo additional training and testing by their units before being licensed.

Graduates of the Army's truck driver training programs are not skilled enough to safely handle 5-ton trucks in some situations for which they should have received training. This is because of instructor shortages and limited training conditions. Graduates are either partially trained or untrained in some skills found in the instruction program. In addition, the schools do not teach driving skills that are essential to performing the 5-ton truck's primary mission.

One of the Army's two formal truck driver training schools, the school at Fort Leonard Wood, Missouri, operates with sizable instructor shortages. Because of this, Fort Leonard Wood operates at a higher student-instructor ratio than called for in the instruction program. In fiscal year 2000, the Fort Leonard Wood facility trained nearly 90 percent of the Army's 88M drivers in spite of these shortages. Instructors at the informal and Reserve programs also said that their programs suffer from instructor shortages.
During the first 9 months of 2000, Fort Leonard Wood operated with an average of 53 percent of its authorized instructors on-hand to teach the program. The main reasons were that (1) fewer personnel were assigned to teach than were authorized and (2) even fewer were available (on-hand) than were assigned because of other commitments (such as bus driving, funeral and parade duty, and leave). Authorized refers to the number of instructors the Army determines are needed to teach a program; assigned refers to the number of instructors the Army allocates to teach a program; and on-hand refers to the number of instructors who are present and teaching a program. Figure 2 shows the number of instructors authorized, assigned, and on-hand at Fort Leonard Wood in the first 9 months of 2000, when on average about 45 of 84 authorized instructors were available. Assuming that (1) the Army continues assigning instructors at about 85 percent of authorized levels and (2) the number of instructors on-hand remains constant at about 63 percent of those assigned (about 53 percent of those authorized), the Army would have to increase its present authorized level of instructors from 84 to 158 (84 ÷ 0.53 ≈ 158), an increase of 88 percent, in order to have a full complement of 84 on-hand.

The formal instruction program calls for a 6-to-1 student-instructor ratio, and Fort Leonard Wood is structured to operate at this ratio when staffed at 100 percent of its authorized level. In the first 9 months of 2000, our review showed that Fort Leonard Wood operated overall at a higher ratio of about 9 to 1. Nonetheless, training officials stated that the school has been conducting the behind-the-wheel (hands-on) training portion of the program at the 6-to-1 ratio the instruction program calls for: one instructor overseeing three trucks with two students per truck. However, Army regulations stipulate a 1-to-1 truck-instructor ratio when a student driver is behind the wheel. In December 1998, Fort Leonard Wood requested a waiver to allow the 6-to-1 ratio when students were driving trucks. While the request has yet to be officially approved, school officials claim that if required to maintain the 1-to-1 ratio, each student might drive as little as 30 miles during the entire course, instead of the present target of about 100 miles per student on average.

Instructor shortages affect the quantity and quality of training. Students do not get sufficient hands-on driving experience and are not trained in all the skills required by the instruction program. Program officials at Fort Leonard Wood said that at times, instructors could fully teach only about three-quarters of the instruction program's required tasks. For example, in the second half of fiscal year 1999, two training modules—driving off-road and basic vehicle control—were often carried out only in part or demonstrated but not practiced. These two modules account for almost 93 percent of the 85.5 hours students are supposed to spend driving trucks. Because of instructor shortages during these two quarters, the average number of miles driven by each student at Fort Leonard Wood dropped from nearly 100 to less than 50. In addition, hands-on training is presently limited mostly to driving in controlled settings. Students drive in convoys on unpaved but graded and regularly maintained training routes at no more than 25 mph, receiving almost no training in how to drive on public highways or in suburban settings.
One group of trainers stated that with more instructors, they could give students some realistic training rather than the "follow-the-leader" driving students now receive. Students are also not being taught all the tasks that 5-ton-truck drivers are expected to perform. Training officials at the two formal programs stated that they thought drivers should be trained in hauling loads and pulling equipment—the primary mission of 5-ton trucks. While the instruction program calls for 20 percent of all vehicles to operate with a load in the cargo area, this is not being done, according to training officials, because of logistical problems that make this skill difficult to train. Pulling equipment is not taught because it is not specified in the instruction program. Therefore, students must learn these essential skills after graduation and rotation to their next duty stations.

Neither the Marine Corps, which co-trains its 5-ton truck drivers with the Army at Fort Leonard Wood, nor the smaller Fort Bliss school, which mostly trains the overflow from Fort Leonard Wood, experiences instructor shortages as severe as Fort Leonard Wood's. Thus, neither encounters problems teaching the instruction program in its entirety. According to Marine Corps training officials, its detachment is authorized 76 instructors and, in the first 9 months of 2000, averaged 70 instructors assigned and 65 on-hand (93 percent). During that same period, Fort Bliss training officials stated, their school was authorized 17 instructors but actually had 18 assigned and on-hand (106 percent). During the first 9 months of 2000, the Marine Corps program averaged a higher percentage of its assigned instructors on-hand than the Fort Leonard Wood Army program, 93 percent versus 63 percent (see fig. 3). This, according to Marine Corps training officials, was mostly because their instructors did not have other commitments or assignments as Army instructors did. Also, the average class size for the Marine Corps was much smaller than that for the Army (44 versus 70 students), and the Marine Corps had more instructors available to teach (65 on average versus the Army's 45). Because of the smaller class size and larger number of on-hand instructors, the Marine Corps can staff each truck at the 1-to-1 instructor-to-truck ratio regulations call for. This, according to Marine Corps officials, allows students to gain driving skills in uncontrolled settings such as driving off-post, on public highways, and in various urban settings. The Fort Bliss school, on the other hand, actually had a surplus of instructors: it had 106 percent of its assigned instructors on-hand (see fig. 3). According to program officials, their instructors also did not have the other commitments and assignments that Fort Leonard Wood Army instructors had. During fiscal year 2000, Fort Bliss also graduated fewer students, used less of its overall available classroom capacity, averaged smaller class sizes, and conducted about one-third the classes that Fort Leonard Wood conducted (see fig. 4).

Student Opinions Show Varied Satisfaction With Training Received

We surveyed 139 students at the two formal school programs, 72 students at 10 informal programs, and 98 students at 1 Army Reserve training program. We asked them to rate their satisfaction with the training they were receiving in various driving techniques and conditions. As table 1 shows, students at Fort Bliss felt better about the training they received in many driving skills than their counterparts at Fort Leonard Wood.
Students in the Reserve program were the most satisfied overall with the training they received, while students in the informal programs were generally the least satisfied.

According to the instruction program, the majority of driving training time (about 65 hours) should be dedicated to driving on and off roads through woods, streams, brush, sand, mud, snow, ice, rocky terrain, ditches, gullies, and ravines. However, we found that neither of the two formal schools provides all these conditions on its training routes. Students at Fort Bliss are well trained to drive in sand because the school's training routes have sand. But students there get little exposure to snow or ice because these conditions seldom occur at the school. And the school's training routes we observed were for the most part flat and unchallenging. One route we drove offered few or no opportunities to drive through woods and brush, over rocky terrain, or through gullies and ravines. The problem, according to school officials, is that the land the training routes are on is too flat and lacking in undergrowth. Training officials also told us that money constraints, and the fact that Fort Bliss' mission is to handle the overflow of students from Fort Leonard Wood, impede the development of more challenging driving routes. Training routes at Fort Leonard Wood also offered limited obstacles or challenges. We drove what school officials said was the most difficult training route and found that it did go through some woods and rocky terrain and over some hills and inclines. However, it contained no sand, and engineering units maintained the surface the trucks drove on by routinely smoothing out bumps, ruts, and other obstacles.

When adverse weather, dangerous road conditions, or other problems arise, the formal schools hesitate to allow students to drive because of safety concerns. However, the Army has determined that simulators can be used to teach some driving skills that cannot be taught in high-risk driving conditions because of the dangers involved. Because of safety concerns, the Fort Leonard Wood command has issued an oral directive prohibiting students from driving off the installation. As a result, students do not learn to drive trucks in traffic at highway speeds or in urban settings. Furthermore, the training command frequently cancels hands-on driver training in the presence of ice, snow, or fog because it believes the risk of student drivers having a serious accident outweighs the benefits of the driving experience. Not training under adverse weather and road conditions limits the ability of drivers to handle a truck safely in these situations when they rotate to their new duty stations and begin to drive.

In May 2000, the Analysis Center at the Army Training and Doctrine Command completed a study that concluded, among other things, that students graduating from the formal schools were only about 15-percent proficient in skills needed to drive in fog, ice, or snow and 27-percent proficient in skills needed to drive on sand. The study concluded that simulators could overcome these and other shortcomings in driver training. It reviewed 31 critical driving tasks taught at the formal schools and concluded that simulators could help students obtain higher proficiency levels in as many as 22 of them. The study also concluded that simulators might help reduce the potential for accidents both during training and—most importantly—during the first year after training by increasing driving proficiency in fog, snow, or ice.
Formal training program personnel agreed, stating that they cannot teach students to drive under some of the more common hazardous conditions because it is too dangerous. Other Army officials also said that simulators, especially more advanced ones, can recreate such situations and give students a sense of driving under these conditions without putting lives at risk. Training personnel at both formal schools and Army Transportation School officials, as well as the simulator study itself, strongly cautioned, however, that simulators should not replace actual behind-the-wheel driving time. The private sector uses simulators in its truck driving schools and considers them very useful. Officials at two commercial driving schools stated that their simulators help students learn to drive under various high-risk driving and weather conditions, including braking with a load on steep inclines or on wet and icy surfaces.

Some safety rules relating to M939 trucks are not being communicated effectively. Moreover, many informal training programs seem to be unaware of available assistance from the Army Transportation School. Better communication is key to improving the flow of this type of information. The M939 series trucks are not supposed to be driven over 40 mph, even under ideal conditions. However, we found that some licensed drivers, students, instructors, and supervisors alike were either unaware of the speed limit, had forgotten about it, or did not know that this restriction is still in effect for M939s without anti-lock brake systems. Two-thirds of the licensed drivers we interviewed, as well as about one-third of student drivers in formal training programs and over two-thirds of student drivers in informal training programs, did not know or could not recall the 40-mph limit. And none in a group we interviewed from a recently graduated formal program class was able to tell us the correct maximum speed limit. Although nearly all of the 65 formal and Reserve program instructors we interviewed could state the correct speed limit, only about two-thirds of informal program instructors and driver supervisors could do so. By contrast, all of the nearly 100 students we interviewed at the Army Reserve training program knew of the speed limit, and for a simple reason: all the M939 trucks used for training had a dashboard sticker to remind the driver of the speed limit. (See fig. 5.)

There also appears to be a communication problem between informal program instructors and the Army Transportation School. Although the instructors believe their training programs are good ones, they also stated that they do not have enough time to focus on improving and upgrading these programs and would like more input from "knowledgeable personnel," such as those at the Fort Eustis Transportation School who developed the formal training program. Some said they could have avoided difficulties they encountered in developing a high-quality informal program if such expertise had been available. Many suggested that standardized, Army-wide training packages tailored to each type of vehicle would be an efficient and economical way of training occasional drivers. However, none of the instructors we interviewed knew that the Transportation School has a program available designed specifically for informal training of M939 drivers. In November 1999, the Transportation School distributed a CD-ROM driver training program that includes lessons on driving and performing operator maintenance on the M939 to Army standards.
Transportation School officials stated that the program was sent to around 1,800 different Army locations (based on the number and location of M939 trucks) and is also available through the Army's web site.

While facing similar instructor shortages and limited driving conditions, the informal and Reserve training programs we reviewed must also try to train drivers in a shorter time than the formal programs. The Reserve also has problems with its equipment. The 10 informal programs we reviewed ranged from 40 to 120 hours (compared to 6 weeks for the formal program). As a result, instructors focus mostly on teaching the basics (driving on surfaced roads, backing up on flat surfaces, and performing some required maintenance and service). Instructors teach more difficult skills only if time and circumstances allow. Several instructors questioned how their 40- to 80-hour programs could possibly teach as much as is taught in the 6-week formal course.

The Reserve has problems not only with instructor shortages but also with training equipment. Reserve officials said their 5-ton truck driver training programs are generally understaffed because of a lack of available senior noncommissioned officers to teach. Also, because programs are usually not authorized a fleet of trucks exclusively for training, units must borrow trucks from the installation where training is taking place or from other nearby Army installations. The training unit is responsible for picking up and returning the trucks or for paying to have the trucks delivered and returned. It also pays an established usage fee to the units that lend the trucks. This is costly, especially if a borrowed vehicle needs repair work before it can pass the safety inspection required for it to be used in training. Reserve training officials told us that this happens frequently and adversely affects training.

Army regulations require that truck drivers undergo a so-called "check ride" and "sustainment training" once a year (once every 2 years for the Army Reserve and National Guard). Performing these procedures, which are aimed at identifying and correcting poor driving habits, maintaining high driving proficiency levels, and ensuring safe driving, is the responsibility of the driver's assigned unit. Both procedures must also be documented in personnel driving records. However, we found that they are either not being performed or not being recorded as required. We reviewed over 450 driving records and found that over 80 percent did not contain an entry indicating that a check ride had been performed every year and for each type of vehicle the driver was licensed to drive. Eighty-five percent of the records also did not have an entry documenting that sustainment training had been given annually as required. Seventy percent of the drivers we interviewed (both 88M drivers and occasional drivers) stated that they either did not know what a check ride was or had not been given one annually. Three-quarters of the drivers we interviewed also said they had not attended an annual sustainment training course. Supervisors are responsible for administering check rides to assess a driver's capabilities and overall driving habits. According to Army officials, unit commanders and supervisors must also develop and implement annual sustainment training programs based, in part, on the results of check rides.
A number of supervisors told us that they do not always conduct formal check rides because of personnel shortages and high operating tempo; rather, they try to assess drivers' skills and give correctional guidance—a sort of "informal" check ride—whenever they ride with a driver. None of them knew about the Transportation School's informal driver training program, which includes guidelines for sustainment training.

The Army Safety Center maintains a ground accident database that has been used in the past to identify accident anomalies that in turn led to safety improvements involving the operation of M939 series 5-ton trucks. The database, however, is not complete because not all data fields in accident investigation reports are always filled in. The database is also not being analyzed on a regular basis to identify trends or recurring problems. One of the purposes of the ground database is to provide demographic information that can be used for statistical comparisons. The Army Safety Center did so in 1998 when it compared accident rates of different Army trucks and found that the M939 series trucks had a much higher serious accident rate than other similar trucks. In other, earlier studies, the Center reviewed M939 accident data and found a series of recurring accident conditions. On the basis of these studies, the Army Tank-automotive and Armaments Command in December 1992 issued the first of several Army-wide messages warning of these problems and imposing the 40-mph speed limit on the M939. Also on the basis of these studies, the Command conducted additional studies on the M939, which in turn led to an estimated $122.4 million in recommended design modifications.

We analyzed nearly 400 M939 accident reports dating from 1988 through 1999 contained in the Safety Center's database and found that 4 of the 36 data fields of information we requested for our analysis were often not filled in. Safety Center personnel acknowledged that the missing data could weaken any conclusions reached using these fields. Two fields, "Was the Driver Licensed at the Time of the Accident" and "What Was the Driver's Total Accumulated Army Motor Vehicle Mileage," contained no information 45 and 50 percent of the time, respectively, and because of this could not be included in the analyses we performed. Two other fields, "What Was the Mistake Made" and "Why Was the Mistake Made," were also often left blank. Our analysis also revealed patterns that, if studied further, might be useful in improving training programs. For example, many of the reported accidents occurred on wet or slippery surfaces or when the truck was hauling cargo or pulling equipment. Furthermore, three-quarters of the accidents involved occasional drivers (those trained at informal schools). Some patterns we identified are illustrated in figure 6.

Instructor shortages are affecting the quality and quantity of truck driver training, especially at Fort Leonard Wood. The end result is that student drivers are not fully trained in all aspects of the instruction program when they graduate. This places an additional burden on the drivers' assigned units, which must further train these drivers, and on supervisors, who must be more vigilant in identifying drivers' shortcomings. If the formal schools had enough instructors on-hand, they would presumably be able to teach the entire instruction program.
The student imbalance between the schools at Fort Leonard Wood, which is understaffed, and Fort Bliss, which has smaller class sizes and a lower student-instructor ratio, results in an ineffective use of resources. This imbalance places an unnecessarily heavy burden on Fort Leonard Wood. If the annual student load were more equally distributed between the two schools, student graduates from Fort Leonard Wood might receive more complete training.

The formal schools are not adhering to the instruction program, which calls for some training with trucks carrying cargo. Further, no training is provided in how to pull equipment. With a high percentage of M939 accidents taking place under these two conditions, the formal schools should provide some training in these areas. Similarly, students are not being trained to drive under different weather and surface conditions. While it is understandable why formal schools hesitate to take the risk of having students drive under hazardous or high-risk conditions, it is also necessary that students receive such training. An Army study concluded that simulators can provide an effective means of safely training drivers in high-risk weather and different road-surface situations.

Because annual check rides and sustainment training are not always being performed, unsafe driving habits may go undetected. Further, if corrective oversight or training is not recorded, unit commanders and supervisors cannot know which drivers need attention. Although performing and recording check rides and sustainment training may be time-consuming, these procedures can save lives. Some important safety information, such as M939 speed limit restrictions, is not always being passed on to, or remembered by, drivers, supervisors, and trainers. Using inexpensive devices, such as dashboard stickers, is a simple way to remind these personnel of the speed restrictions. The Safety Center's accident database could be used to identify trends that may show the need for greater training emphasis in certain driving maneuvers. A periodic analysis of the database could assist school officials, instructors, and supervisors in adjusting instruction programs or mentoring drivers. However, such analysis would prove more useful if all fields of information contained in the database were complete.

We recommend that the Secretary of the Army direct the Commander of the Training and Doctrine Command to (1) review and modify, as needed, instructor levels for the formal training programs to ensure that the programs are adequately staffed to teach the anticipated class size; (2) balance the student load between the two schools by bringing the Fort Bliss school up to fuller capacity and/or increasing the number of classes taught there annually, thereby reducing the student load, and the problems associated with it, at Fort Leonard Wood; (3) enforce the instruction program used by the two formal schools to ensure that students receive hands-on training in driving trucks loaded with cargo, and also modify the program to include driving when pulling equipment—two essential skills in performing the primary mission of the 5-ton tactical fleet; and (4) consider using simulators at the two formal schools to safely teach known training shortfalls, such as driving under hazardous conditions, with the understanding that simulators not be used to replace hands-on driving conducted under less risky conditions.
We also recommend that the Secretary of the Army issue instructions to all applicable major Army commands to (1) require adherence to Army regulations on check rides and sustainment training of licensed truck drivers and (2) require that warning stickers indicating speed restrictions be prominently displayed in the cabs of all M939 trucks not equipped with anti-lock brake systems. We further recommend that the Secretary of the Army direct the Commander of the Army Safety Center to (1) ensure that all information fields in accident reports are properly filled in and (2) periodically review accident data for the presence of trends or anomalies for the purposes of informing trainers and supervisors of any information that may help them perform their duties or help improve safety.

In oral comments on a draft of this report, Department of Defense officials concurred with all our recommendations. We are providing copies of this report to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. Westphal, Ph.D., Acting Secretary of the Army; and interested congressional committees. Copies will also be made available to other interested parties upon request. If you or your staff have questions concerning the report, please call me at (202) 512-5559. Our scope and methodology are explained in appendix I. GAO contacts and staff acknowledgments for this report are listed in appendix II.

Our objectives were to (1) evaluate the capacity of the Army's 5-ton truck driver training programs to fully train drivers, (2) determine whether oversight procedures and processes for these drivers are being followed, and (3) determine whether and how the Army uses accident data to improve training, supervision, and safety. To evaluate the capacity of the Army's 5-ton truck driver training programs to fully train drivers, we reviewed applicable training programs in terms of compliance and completeness at both of the Army's formal schools (Fort Leonard Wood and Fort Bliss) and at 10 different informal training facilities located at 4 installations. We also reviewed the training provided at one of eight Army Reserve training centers; Reserve training centers all use the same Program of Instruction. We reviewed these programs for compliance with existing regulations and standard operating procedures established by the various training components. To assess the completeness of training, we made observations and collected documentation relating to the actual training being conducted and compared that documentation to the training specified in each school's or program's instruction program and to the primary mission of the 5-ton truck fleet. We also discussed these issues with officials responsible for designing the training programs, training command personnel, driving instructors, and student drivers to gain their perspectives. Lastly, we compared the formal Marine Corps 5-ton training program and two commercial-sector training programs to the Army's formal program to identify any training techniques and/or devices that might benefit 5-ton training curriculums.

To determine whether oversight procedures and processes for these drivers are being followed, we documented the duties of supervisors of medium tactical vehicles as found in Department of Defense and Army guidance, instructions, procedures, and regulations.
Through observations and discussions with nearly 80 driver supervisors and nearly 200 truck drivers stationed at 12 different Army and National Guard units, we then assessed the degree to which they accomplished these responsibilities or followed required documentary procedures. In addition, at the units visited we collected over 450 historical driving records for truck operators and reviewed them for the required annual supervisory annotations relating to check rides and sustainment training specified in Army regulations.

To ensure we collected information representative of the universe of existing 5-ton truck informal training programs and the administering of driver supervision responsibilities, we selected—for review and observation purposes—four installations aligned under the U.S. Army Forces Command. This major command, according to the Army Materiel Command's Logistic Support Activity, controls 94 percent of the active Army's M939 series 5-ton trucks in the continental United States. Because Army automated record-keeping systems cannot provide 5-ton truck densities or locations below the major command level, we engaged the services of Army Internal Review personnel to assist us. Within the four installations, we requested that Internal Review personnel set up meetings with subordinate commands conducting the majority of 5-ton truck driver training and with commands maintaining the largest concentrations of 5-ton trucks and/or drivers.

In discussing accident data with Army Safety Center personnel, we learned of Army notifications currently in effect and relevant to the safe handling of 5-ton trucks that resulted from past analyses performed on the Center's ground accident database. We reviewed these notifications, including existing Army regulations and procedures pertaining to how this information is to be disseminated Army-wide. We then queried 5-ton truck driver-trainers, student drivers, supervisors, and licensed drivers to gain an understanding of how knowledgeable they were about the restrictions imposed by these notifications.

To determine whether and how the Army uses accident data to improve training, supervision, and safety, we interviewed Safety Center personnel and obtained and reviewed past studies and analyses conducted by the Center. In addition to identifying data that could be useful in improving training or supervision, we analyzed 12 years of demographic accident information pertaining to M939 series 5-ton tactical cargo trucks. Our analysis of this information, compiled for us by Army Safety Center personnel, included Class A, B, and C accidents occurring from January 1988 through December 1999 and for which some degree of fault was attributable to an M939 driver. This truck series accounts for about one-half of the Army's 5-ton fleet and is the series specifically mentioned in the request letter. We focused on identifying the presence of any demographic anomalies or commonality factors that, when compiled statistically, might prove beneficial to trainers, supervisors, or the safer operation of M939 series trucks. We also discussed the results of our accident analysis with Army Safety Center officials, trainers, and supervisors to obtain their input and/or concurrence.

We performed our work from May 1999 through July 2000 in accordance with generally accepted government auditing standards.

In addition to those named above, Aisha A. Mahmood, Stefano Petrucci, William R. Simerl, Lorelei St. James, and Gerald L. Winterlin made key contributions to this report.
Instructor shortages are affecting the quality and quantity of Army truck driver training. Fort Leonard Wood, which trains about 90 percent of truck drivers, is especially affected by the instructor shortage. The result is that student drivers are not fully trained in all aspects of the instruction program when they graduate. If the formal schools had enough instructors, they would presumably be able to teach the entire instruction program. The student imbalance between the schools at Fort Leonard Wood and Fort Bliss creates an ineffective use of resources. If the annual student load were more equally distributed between the two schools, student graduates from Fort Leonard Wood might receive more complete training. The formal schools are not adhering to the instruction program, which calls for some training with trucks carrying cargo. Furthermore, no training is provided on how to pull equipment. Similarly, students are not being trained to drive under different weather and surface conditions. Because annual check rides and sustainment training are not always being performed, unsafe driving habits may go undetected. Although performing and recording check rides and sustainment training may be time-consuming, these procedures can save lives. The Army Safety Center's accident database could be used to identify trends that may show the need for greater training emphasis in certain driving maneuvers. A periodic analysis of the database could assist school officials, instructors, and supervisors in adjusting instruction programs or mentoring drivers. However, such analysis would be more useful if the information in the database were complete.
Since 1994, when the Dietary Supplement Health and Education Act (DSHEA) was enacted, sales of dietary supplements have soared. In 2000, total U.S. sales for herbal and specialty supplements reached $5.8 billion. Surveys have found that many older Americans use these supplements to maintain overall health, increase energy, improve memory, and prevent and treat serious illness, as well as to slow the aging process, among other purposes. Products frequently used by seniors to address aging concerns include herbal supplements such as evening primrose, ginkgo biloba, ginseng, kava kava, saw palmetto, St. John's wort, and valerian, and specialty supplements such as chondroitin, coenzyme Q10, dehydroepiandrosterone (DHEA), glucosamine, melatonin, omega-3 fatty acids (fish oil), shark cartilage, and soy proteins. (See the appendix for details regarding these substances.)

FDA, FTC, and state government agencies all have oversight responsibility for products marketed as anti-aging therapies. In general, the law permits FDA to remove from the market products under its regulatory authority that are deemed dangerous or illegally marketed. FDA's regulation of dietary supplements is governed by the Federal Food, Drug, and Cosmetic Act as amended by DSHEA in 1994. DSHEA does not require manufacturers of dietary supplements to demonstrate either safety or efficacy to FDA prior to marketing them. However, if FDA subsequently determines that a dietary supplement is unsafe, the agency can ask a court to halt its sale. The Secretary of Health and Human Services may also declare that a dietary supplement presents an imminent hazard, after which the Secretary must initiate an administrative hearing to determine the matter, which may then be reviewed in court. DSHEA does not require dietary supplement manufacturers to register with FDA or to identify to FDA the products they manufacture, and manufacturers are not required to forward to FDA the adverse event reports they receive. However, FDA does regulate nutritional and health claims made in conjunction with dietary supplements.

FTC has responsibility for ensuring that advertising for anti-aging health products and dietary supplements is truthful and can be substantiated. FTC can ask companies to remove misleading or unsubstantiated claims from their advertising, and in appropriate cases it can seek monetary redress for conduct injurious to consumers. FTC published an advertising guide for the dietary supplements industry in November 1998, which reminded the industry that advertising must be truthful and that objective product claims must be substantiated. State agencies can take action against firms that fraudulently market anti-aging and other health products.

Health risks associated with dietary supplements come in a number of forms. First, some dietary supplements have been associated with adverse effects, some of which can be serious. Second, individuals with certain underlying medical conditions should avoid some dietary supplements. Third, some frequently used dietary supplements can have dangerous interactions with prescription or over-the-counter drugs taken concurrently. Fourth, dietary supplements may contain harmful contaminants. Finally, dietary supplements may contain more active ingredient than indicated on the product label.
Research suggests that among healthy adults, most dietary supplements, when taken alone, have been associated with only rare and minor adverse effects. Other supplements are associated with more serious adverse effects. For example, research suggests that DHEA may increase the risk of breast, prostate, and endometrial cancer, and shark cartilage has been associated with thyroid hormone toxicity. Adverse event reports can also signal possible risks from dietary supplements. FDA publishes lists of dietary supplements for which evidence of harm exists. In 1998, the agency published a guide to dietary supplements, which included a list of supplements associated with illnesses and injuries. FDA has also issued warnings and alerts for dietary supplements and posted them to its Web site. For example, the most recent alert reiterated the agency's concern, first noted in 1993, that the herbal product comfrey represents a serious safety risk to consumers from liver toxicity.

Consumption of some substances has been shown to be inadvisable, or contraindicated, for persons with certain preexisting medical conditions. For example, ginseng is not recommended for persons with hypoglycemia. Kava kava may worsen symptoms of Parkinson's disease. Saw palmetto is contraindicated for patients with breast cancer, and valerian should not be used by those with liver or kidney disease without first consulting a physician. A recent study also suggested that echinacea (promoted to help fight colds and flu), ephedra (promoted as an energy booster and diet aid), garlic, ginkgo biloba, ginseng, kava kava, St. John's wort, and valerian may pose particular risks to people during surgery, with complications including bleeding, cardiovascular instability, and hypoglycemia.

According to a recent survey, about half of seniors who use a dietary supplement do not inform their doctor. Another survey found that seniors often used dietary supplements with a prescription medication. Since seniors take more prescription medicines on average than do younger adults, their risk of drug-supplement interactions may be higher. For example, evening primrose, ginkgo biloba, ginseng, glucosamine, and St. John's wort magnify the effect of blood-thinning drugs such as warfarin (Coumadin). We also identified reports suggesting that ginkgo biloba may reduce the effects of seizure medications and that glucosamine may have a harmful effect on insulin resistance.

Contaminated products can also pose significant health risks to consumers. For example, supplements have been found to be contaminated with pesticides or heavy metals, some of which are probable carcinogens and may be toxic to the liver and kidney or impair oxygen transport in the blood. One commercial laboratory found contamination in samples from echinacea, ginseng, and St. John's wort products. As much as 20 times the level of pesticides allowable by the U.S. Pharmacopeia was found in two samples of ginseng. Overall, 11 percent of the herbal products and 3 percent of the specialty supplements tested were contaminated in some way.

Amounts of active ingredients that exceed what is indicated on a product label may increase the risk of overdose for some patients. Some scientific studies have found that there may be significantly more active ingredient in some herbal and specialty supplement products than is indicated on the label.
Studies of DHEA, ephedra, feverfew (promoted as a migraine prophylaxis), ginseng, SAM-e (promoted as an antidepressant and in the treatment of symptoms associated with osteoarthritis), and St. John's wort have found that a number of products contain substantially more active ingredient than indicated on the label. One study of DHEA found that one brand contained 150 percent of the amount of active ingredient indicated on the label. In a study of ephedra, one product was shown to have as much as 154 percent of the active ingredient indicated on the label. Studies of ginseng have found some products containing more than twice as much active ingredient as indicated on the product label.

Recognizing that there are some safety risks, trade associations that represent manufacturers, suppliers, and distributors of dietary supplements have created and adopted voluntary programs to reduce the risks of potentially harmful products by standardizing manufacturing practices.

Some unproven anti-aging products can cost hundreds or thousands of dollars apiece. For example, Rife machines, which emit light or electrical frequencies and are claimed to kill viruses and parasites, are frequently advertised on the Internet and can cost up to $5,000. Some herbal product packages for cancer cures can cost nearly $1,000. FTC provided us with a partial estimate of economic harm based on 20 cases involving companies that fraudulently marketed unproven health care products commonly used by seniors and for which national sales data were available. FTC estimated the average annual sales for those products at nearly $1.8 million per company.

Consumers may also be purchasing products that contain much less active ingredient than indicated on the label. Results of commercial laboratory tests and scientific studies that analyzed product contents for active ingredient levels have shown that some dietary supplement products contain far less active ingredient than labeled. For some products, analyses have found no active ingredient. Academic studies have shown similar results. In an analysis of DHEA products, nearly one-fifth contained only trace amounts or no active ingredient. In analyses of garlic products, most were found to release less than 20 percent of their active ingredient. One study of ginseng found that 35 percent of the products tested contained no detectable levels of an active ingredient, and another found no detectable levels in 12 percent of the tested products. Studies of SAM-e and St. John's wort products also found that tested samples often contained less active ingredient than indicated on the label.

Federal efforts to protect seniors from health fraud include providing educational materials on avoiding health fraud, funding research to evaluate popular anti-aging therapies, and carrying out enforcement activities against companies that have violated regulations. At the state level, agencies are working to protect consumers of health products by enforcing state consumer protection and public health laws, although anti-aging and alternative products have received limited attention.

Both FDA and FTC sponsor educational activities that focus on health fraud and seniors. For example, public affairs specialists in several FDA district offices had exhibits at senior health fairs and health conferences where they distributed educational materials on how to avoid health fraud, as well as cautionary guidance on purchasing medicines and medical products online.
To help seniors discriminate between legitimate and fraudulent claims, FTC publishes a range of consumer education materials on certain frequently promoted products and services, including hearing aids and varicose vein treatments. The agency also publishes guidelines on how to spot false claims and how to differentiate television shows from "infomercials."

Federal support of research on alternative therapies is provided by NIH's National Center for Complementary and Alternative Medicine (NCCAM), which has developed research programs to fund clinical trials evaluating the safety and efficacy of some popular products and therapies for conditions such as arthritis, cardiovascular diseases, and neurological disorders. Studies are ongoing or planned to examine the effects of glucosamine/chondroitin, melatonin, St. John's wort, ginkgo biloba, and others. In addition, the agency funds specialized, multidisciplinary research centers on alternative medicine in such areas as cardiovascular disease, neurological disorders, aging, and arthritis.

FDA enforcement actions taken against products that it judged to be unapproved drugs or medical devices include court cases filed to halt the distribution of laetrile products that claimed to cure cancer and to halt the sale of "Cholestin," a red yeast rice product containing lovastatin that was marketed with cholesterol-lowering claims. FDA also took action to halt the marketing of the "Stimulator," a device that the manufacturer claimed would relieve pain from sciatica, swollen joints, carpal tunnel syndrome, and other chronic conditions. According to FDA officials, an estimated 800,000 of these devices were sold between 1994 and 1997, with many purchased by senior citizens.

FDA has notified some dietary supplement manufacturers that their promotional materials illegally claimed that their products cure disease. For example, some manufacturers of colloidal silver products have claimed efficacy in treating HIV and other diseases and conditions. Even though FDA banned colloidal silver products as over-the-counter drugs in September 1999, after concluding that it was not aware of any substantial scientific evidence supporting the advertised disease claims, colloidal silver products may still be marketed as dietary supplements as long as they are not promoted with claims that they treat or cure disease. FDA notified several dozen Internet-based companies that their therapeutic claims may be illegal. Despite these oversight activities, colloidal silver products claiming "natural antibiotic" properties to address numerous health conditions remain available.

FDA has not initiated any administrative rulemaking activities to remove from the market certain substances that its analysis suggests pose health risks, but it has sought voluntary restrictions and attempted to warn consumers. For example, aristolochic acid, a known potent carcinogen and nephrotoxin, is believed to be present in certain traditional herbal remedies as well as a number of dietary supplement products. Following reports of aristolochic-acid-associated renal failure cases in Europe, FDA has recently taken several steps.
In May 2000, FDA issued a "letter to industry" urging leading dietary supplement trade associations to alert member companies that aristolochic acid had been reported to cause "severe nephropathy in consumers consuming dietary supplements containing aristolochic acid." In this letter, FDA concluded that any dietary supplement that contained aristolochic acids was adulterated under the law and that it was unlawful to market such a product.

FDA has also announced that herbal comfrey products containing pyrrolizidine alkaloids may cause liver damage. The agency's letter to eight leading dietary supplement trade associations urged them to advise their members to stop distributing comfrey products containing pyrrolizidine alkaloids. However, even though FDA has told firms that market dietary supplements that products containing comfrey are adulterated and unlawful, some firms continue to market them, and the agency is left to identify and take action to remove them on a case-by-case basis as it becomes aware of them.

FDA can also monitor dietary supplements by conducting inspections of manufacturing facilities, during which its inspectors look at sanitation, buildings and facilities, equipment, production, and process controls. However, the agency inspects less than 5 percent of facilities annually. Publication of good manufacturing practice (GMP) regulations would improve FDA's enforcement capabilities, since DSHEA provides that dietary supplements not manufactured under conditions that meet GMPs would be considered adulterated and unlawful. A proposed GMP rule has been developed and is under review by the Office of Management and Budget.

In 1997, FTC launched an effort to find companies making questionable claims for health products on the Internet, as well as in other media. This initiative, "Operation Cure.All," primarily involved conducting Web-based searches on specified dates to identify Web sites making unsubstantiated claims that use of their products would prevent, treat, or cure serious diseases and conditions. The searches were conducted with the participation of FDA, CDC, some state attorneys general, and other organizations. Evaluations of "Operation Cure.All" have found that some companies have made changes in their Web advertising as a result of receiving e-mail alerts from FTC about potentially unsupported advertising claims. In 1997, an estimated 13 percent of notified companies withdrew their claims or Web site, while 10 percent made some changes. In 1998, an estimated 28 percent of companies withdrew their claims or Web site, while 10 percent made some changes. By comparison, the percentage of companies that made no changes in both years exceeded 60 percent. FTC has brought over 30 dietary supplement cases, including those from "Operation Cure.All," against companies making unsupported claims since the agency released guidelines on its approach to substantiation of advertised claims in 1998.

The states we contacted varied in their efforts to protect consumers from fraudulent or harmful health products, but in general focused little attention on anti-aging and alternative medicine products. State agencies reported that they receive relatively few complaints regarding these products. However, many officials said that consumers are being harmed in ways that are unlikely to be reported to state agencies and that misleading advertising and questionable health products are serious problems.
States have identified a number of questionable health care products, services, and advertising claims that may affect older consumers. States can protect consumers from fraudulent or harmful health products through two approaches: enforcing state consumer protection laws against false or misleading advertising, and exercising their public health authority to ensure food, drug, and medical device safety. With some exceptions, the states we contacted take action only if there is a pattern of complaints or an acute health problem associated with a particular substance or device. Seven of the 14 states we contacted were involved to some degree in monitoring or enforcement activity, and three have ongoing efforts to review advertising, labels, or products to enforce their health and consumer protection laws.

The risk of harm to seniors from anti-aging and alternative health products has not been specifically identified as a top public health priority or a leading enforcement target for federal and state regulators. However, evidence demonstrates that many senior citizens use anti-aging products and that consumers who suffer from aging-related health conditions may be at risk of physical and economic harm from some anti-aging and alternative health products, including dietary supplements, that make misleading advertising and labeling claims. The medical literature has identified products that are safe under most conditions but can be harmful for consumers with certain health conditions. Other products, such as St. John's wort, are promising for some conditions but are also associated with adverse interactions with some prescription medications. Senior citizens may have a higher risk of physical harm from the use of anti-aging and alternative medicine products because they have a high prevalence of chronic health conditions and consume a disproportionate share of prescription medications compared with younger adults.

This concludes my prepared statement, Mr. Chairman. I will be happy to respond to any questions that you or Members of the Committee may have. For more information regarding this testimony, please call me at (202) 512-7119. Key contributors include Martin T. Gahart, Carolyn Feis Korman, Anne Montgomery, Mark Patterson, Roseanne Price, and Suzanne Rubins.

We focused our review on those herbal and specialty supplements that a recent survey by Prevention Magazine found were most frequently used by senior citizens for conditions associated with aging. For each supplement, we have listed in table 1 the health claims frequently associated with the products, although we have not attempted to validate the merits of any of the claims. We also list adverse effects that have been associated with the supplements, conditions for which the supplements might be contraindicated, and prescription medications with which the supplements might have dangerous interactions.
Dietary supplements marketed as anti-aging therapies may pose a risk of physical harm to senior citizens. Evidence from the medical literature shows that a variety of frequently used dietary supplements can have serious health consequences for seniors. Particularly risky are products that may be used by seniors who have underlying diseases or health conditions that make use of the product medically inadvisable, or supplements that interact with medications being taken concurrently. Studies have also found that these products sometimes contain harmful contaminants or much more of an active ingredient than is indicated on the label. Although GAO was unable to find any recent, reliable estimates of the overall economic harm to seniors from these products, it did uncover several examples that illustrate the risk of economic harm. The Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) have identified several products that make advertising or labeling claims with insufficient substantiation, some costing consumers hundreds or thousands of dollars apiece. The potential for harm to senior citizens from health products making questionable claims has been a concern for public health and law enforcement officials. FDA and FTC sponsor programs and provide educational materials for senior citizens to help them avoid health fraud. At the state level, agencies are working to protect consumers of health products by enforcing state consumer protection and public health laws, although anti-aging and alternative products are receiving limited attention. This testimony summarizes a September 2001 report (GAO-01-1129).
SNPs, including D-SNPs, have been reauthorized several times since their establishment was first authorized in 2003. For example, the Medicare Improvements for Patients and Providers Act of 2008 (MIPPA) and the Patient Protection and Affordable Care Act (PPACA) both contained provisions reauthorizing and modifying SNPs. See table 1 for a summary of legislation establishing and modifying SNPs. In 2012, 322 D-SNPs are operating in 38 states and the District of Columbia.

CMS pays D-SNPs the same way that it pays other MA plans; that is, a monthly amount determined by the plan bid—the plan's estimated cost of providing Medicare Part A and Part B benefits—in relation to a benchmark, which is the maximum amount the Medicare program will pay MA plans in a given locality. CMS then adjusts the monthly payments to MA plans on the basis of beneficiaries' risk scores. If an MA plan's bid exceeds the benchmark, the plan must charge each of its beneficiaries an additional premium to make up the difference. If a plan's bid is less than the benchmark, a proportion of the difference is returned to the plan as additional Medicare payments called rebates, which must be used to reduce premiums, reduce cost sharing, or provide mandatory supplemental benefits, such as vision and dental care. Beginning in 2012, CMS began to phase in PPACA-mandated modifications to the rebate amount and introduced rebate amounts that vary based on CMS's assessments of plan quality. For 2012, rebates ranged from 66.67 percent of the difference between a plan's bid and benchmark for plans with the lowest quality ratings to 73.33 percent of the difference for plans with the highest quality ratings. (See the illustrative sketch below.)

D-SNPs must meet the same requirements as other MA plans, such as submitting an application to CMS. And like other MA plans, D-SNPs that meet minimum enrollment requirements are also required to submit data, such as the Healthcare Effectiveness Data and Information Set (HEDIS) quality measures. In addition, they must conduct quality improvement activities, which include the reporting of certain structure and process measures, such as describing how they manage medication reconciliation associated with patient transitions between care settings.

CMS requires D-SNPs to develop a model of care that describes their approach to caring for their target population. The model of care must describe how the plan will address 11 clinical and nonclinical elements established in CMS guidance: (1) describing the specific target population, (2) tracking measurable goals, (3) describing the staff structure and care management goals, (4) providing an interdisciplinary care team, (5) establishing a provider network that has specialized expertise and describing the use of clinical practice guidance and protocols, (6) training plan employees and the provider network on the model of care, (7) performing health risk assessments, (8) creating individualized care plans, (9) establishing a communications network, (10) providing care management for the most vulnerable subpopulations, and (11) measuring plan performance and health outcomes. These models of care are reviewed and approved by NCQA—a private health care quality organization—on the basis of scoring criteria developed with CMS that emphasized the inclusion of in-depth descriptions or case studies. In their MA applications, D-SNPs must also "attest" that they meet a total of 251 subelements related to the 11 elements in their model of care.
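To make the bid, benchmark, and rebate arithmetic described above concrete, the following is a minimal illustrative sketch in Python. The plan scenario and dollar figures are hypothetical; only the 2012 rebate percentages (66.67 percent of the bid-benchmark difference for the lowest-rated plans, 73.33 percent for the highest-rated) come from the discussion above.

```python
# Illustrative sketch of the MA payment mechanics described above.
# The bid and benchmark amounts below are hypothetical; the 2012
# rebate percentages (66.67%-73.33%, varying with CMS quality
# ratings) are taken from the report text.

def monthly_rebate(bid: float, benchmark: float, rebate_pct: float) -> float:
    """Return the plan's monthly rebate per member.

    If the bid meets or exceeds the benchmark, there is no rebate;
    instead, beneficiaries pay the difference as an added premium.
    """
    if bid >= benchmark:
        return 0.0
    return (benchmark - bid) * rebate_pct

# A hypothetical plan bidding $800 against a $900 local benchmark:
low_rated = monthly_rebate(800.0, 900.0, 0.6667)    # lowest quality rating
high_rated = monthly_rebate(800.0, 900.0, 0.7333)   # highest quality rating
print(f"lowest-rated plan rebate:  ${low_rated:.2f} per member per month")
print(f"highest-rated plan rebate: ${high_rated:.2f} per member per month")
```

In this hypothetical case, the same $100 bid-benchmark difference yields a rebate of about $66.67 for the lowest-rated plan and about $73.33 for the highest-rated plan, which the plan must then apply to reduced premiums, reduced cost sharing, or supplemental benefits.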
New and expanding D-SNPs are required to contract with state Medicaid agencies in 2012, and beginning in 2013, all D-SNPs will be required to have contracts with state Medicaid agencies. According to CMS, the contracts are an opportunity to improve the integration of Medicare and Medicaid benefits, and the agency has implemented this requirement with the goal of "increased integration and coordination" for dual-eligible beneficiaries.

D-SNPs do not all cover the same categories of dual-eligible beneficiaries, but their chosen category(ies) must correspond to those under the Medicaid program in the state in which the D-SNP is being offered. Dual-eligible beneficiaries fall into two main categories. One group, termed full-benefit dual-eligible beneficiaries, may receive the entire range of Medicaid benefits, including long-term care. The other group, partial-benefit dual-eligible beneficiaries, does not receive Medicaid-covered health care services, but Medicaid covers Medicare premiums or cost-sharing, or both, for these beneficiaries. Some D-SNPs limit enrollment to full-benefit dual-eligible beneficiaries, while others are open to all dual-eligible beneficiaries. Additionally, some D-SNPs are open only to disabled beneficiaries under age 65, whereas others are open only to those aged 65 and over.

Recently, the federal government, states, researchers, and advocates have focused increased attention on care coordination for dual-eligible beneficiaries. PPACA required HHS to establish the Federal Coordinated Health Care Office (generally known as the Medicare-Medicaid Coordination Office) within CMS to more effectively integrate Medicare and Medicaid benefits and to improve federal-state coordination for dual-eligible beneficiaries to ensure that they receive full access to the items and services to which they are entitled. Experts believe that, in addition to benefiting dual-eligible beneficiaries, more-effective benefit integration and care coordination can generate significant savings by, for example, lowering emergency room use.

The Medicare-Medicaid Coordination Office, working with the Center for Medicare & Medicaid Innovation (called the Innovation Center), is beginning a financial alignment initiative that is expected to enroll up to 2 million beneficiaries in 26 states and is intended to align Medicare and Medicaid services and funding so as to reduce costs while improving beneficiaries' care. Federal and state governments expect to realize savings from aligning the payments and integrating care. Under existing coordination efforts, integrating benefits requires an investment of resources from states to work with D-SNPs or other stakeholders, but according to experts most of the financial savings accrue to Medicare, because most savings result from services that are largely paid for by Medicare, such as reductions in the number or length of hospital stays. Under the financial alignment initiative, savings will be shared by Medicare and Medicaid without reference to whether the savings are achieved in Medicare or Medicaid services, although the allocation of these savings between Medicare and Medicaid has not yet been finalized. Two models will be tested: a managed FFS model, under which payments are adjusted retrospectively, and a capitated model, under which one payment is made to an MA plan under a three-way contract among Medicare, the state Medicaid agency, and the plan.
In 2011, the Medicare-Medicaid Coordination Office, in partnership with the Innovation Center, entered into contracts with 15 states for up to $1 million each to design state demonstrations. Furthermore, in July 2011 CMS issued a letter calling for additional state Medicaid agencies to submit letters of intent to participate in the demonstrations to better align Medicare and Medicaid funding. The initiative is being conducted under the demonstration authority of the Innovation Center, under which the Secretary of HHS may conduct evaluations that analyze both quality of care and changes in spending. For purposes of testing models under this authority, budget neutrality—which would require that no more be spent under the demonstration than is currently being spent on care for dual-eligible beneficiaries—does not apply. The Secretary can expand the demonstrations nationwide if they are determined to reduce spending without reducing the quality of care or to improve the quality of care without increasing spending. The state demonstrations under the financial alignment initiative do not necessarily include D-SNPs and in some cases may replace D-SNPs. As of June 2012, all 26 states had submitted their proposals for the demonstrations. Decisions about implementation of these designs had not been announced as of July 2012; nevertheless, implementation of these models is expected to begin by January 2013 and continue into 2014.

The demographic and mental health characteristics of dual-eligible beneficiaries enrolled in D-SNPs in 2011 differed from those of dual-eligible beneficiaries in other MA plans and, to a lesser extent, from those of dual-eligible beneficiaries in FFS. Despite these differences, dual-eligible beneficiaries in D-SNPs and dual-eligible beneficiaries in FFS and other MA plans had very similar health status in 2010, the most recent year for which data were available, as measured by Medicare risk scores.

Dual-eligible beneficiaries in D-SNPs (9 percent of all dual-eligible beneficiaries in 2011, as shown in fig. 1) were most similar to dual-eligible beneficiaries in FFS but differed substantially from dual-eligible beneficiaries in other MA plans on certain demographic and mental health measures. A larger proportion of dual-eligible beneficiaries in D-SNPs, as well as dual-eligible beneficiaries in FFS, were under age 65 and disabled in 2011 compared with those in other MA plans (see fig. 2). Additionally, similar proportions of dual-eligible beneficiaries in D-SNPs and dual-eligible beneficiaries in FFS (15 and 16 percent, respectively) were diagnosed with a chronic or disabling mental health condition such as major depressive disorder or schizophrenia, compared with just 10 percent of dual-eligible beneficiaries in other MA plans. Among the characteristics in our analysis, the largest difference between D-SNPs and other MA plans was the proportion of full-benefit beneficiaries in each plan type: 80 percent of dual-eligible beneficiaries in D-SNPs and 75 percent of dual-eligible beneficiaries in FFS were eligible for full Medicaid benefits, compared with just 34 percent of dual-eligible beneficiaries in other MA plans. While dual-eligible beneficiaries in D-SNPs were generally similar to those in FFS, there were several demographic measures on which dual-eligible beneficiaries in D-SNPs differed from both those in FFS and other MA plans.
A smaller proportion of dual-eligible beneficiaries in D-SNPs lived in institutions (e.g., nursing facilities, intermediate care facilities, or inpatient psychiatric hospitals) in July 2011 compared with dual-eligible beneficiaries in FFS and, to a lesser extent, other MA plans. D-SNPs also enrolled a smaller proportion of dual-eligible beneficiaries who were 85 or older compared with the other plan types, as well as a larger proportion of beneficiaries who were racial or ethnic minorities.

Dual-eligible beneficiaries in D-SNPs had very similar health status, as measured by their 2010 risk scores (the most recent year for which data were available), when compared with dual-eligible beneficiaries in FFS and other MA plans. As shown in figure 3, the average risk score—which predicts Medicare costs—of dual-eligible beneficiaries in D-SNPs (1.29) was similar to the average scores for dual-eligible beneficiaries in FFS (1.35) and other MA plans (1.34). In each plan type, some dual-eligible beneficiaries were expected to cost Medicare at least twice as much as the average Medicare FFS beneficiary, and less than 10 percent were expected to cost at least three times the average.

Dual-eligible beneficiaries in D-SNPs do not necessarily get more benefits than those in other MA plans, although D-SNP representatives told us their care coordination services are more comprehensive than those of other MA plans. D-SNPs and other MA plans varied in how frequently they offered supplemental benefits—benefits not covered by FFS—and other MA plans offered more of these supplemental benefits than D-SNPs. While the models of care we reviewed described in varying detail how the D-SNPs plan to provide other services, such as health risk assessments, to beneficiaries, most D-SNPs did not provide—and are not required to provide—estimates of the number of dual-eligible beneficiaries that would receive the services.

D-SNPs provide fewer supplemental benefits, on average, than other MA plans. Of the 10 supplemental benefits offered by more than half of D-SNPs, 7 were offered more frequently by other MA plans and 3 were offered more frequently by D-SNPs. (See fig. 4.) These 3 supplemental benefits were offered much more frequently by D-SNPs compared to other MA plans: D-SNPs offered dental benefits one-and-a-half times more often, over-the-counter drugs nearly twice as often, and transportation benefits almost three times more often. However, a smaller proportion of D-SNPs compared to other MA plans offered hearing benefits, as well as benefits for certain inpatient settings and outpatient services. For some of the services D-SNPs offered less frequently, dual-eligible beneficiaries may receive some coverage through Medicaid. In addition, according to CMS, some of the benefits offered more frequently by other MA plans (e.g., international outpatient emergency coverage) are not necessarily as useful for D-SNPs.

For the three most-common D-SNP supplemental benefits—vision, prevention, and dental—we analyzed the individual services covered under these benefits and found that D-SNPs' vision and dental benefits were generally more comprehensive than those offered by other MA plans. For example, in their vision benefit, a larger proportion of D-SNPs compared to other MA plans covered contact lenses and eyeglasses. In addition, a larger proportion of D-SNPs compared to other MA plans included in their dental benefit coverage of oral surgery, extractions, and restorative services.
However, D-SNPs were less likely than other MA plans to include membership in health clubs as part of their preventive health care benefits. Despite offering these supplemental benefits somewhat less often than other MA plans, D-SNPs allocated a larger percentage of their rebates to supplemental benefits than other MA plans did. (See table 2.) They were able to do so largely because they allocated a smaller percentage of rebates to reducing cost-sharing: most dual-eligible beneficiaries will have their cost-sharing covered by Medicaid, so D-SNPs have less need than other MA plans to cover it. We also found that D-SNPs tended to receive smaller rebates than other MA plans ($70 per member per month on average compared to $108).

Although the 15 models of care we reviewed described the types of services D-SNPs intended to provide, D-SNPs generally did not state in their models of care how many of their enrolled beneficiaries were expected to receive these services. The criteria on which D-SNPs are evaluated in the approval process emphasize the inclusion of in-depth descriptions and case studies rather than details about how many beneficiaries would likely receive these services—for example, the number of beneficiaries that will use additional services targeted to the most vulnerable. CMS does not require D-SNPs to report that information in the models of care, although such information could be useful for future evaluations of whether D-SNPs met their intended goals, as well as for comparisons among D-SNPs. Three D-SNPs we interviewed told us that a lack of specificity in the model-of-care scoring criteria caused confusion; having more specific scoring criteria may also eliminate some uncertainty in the approval process.

Knowing the extent of the special services D-SNPs expect to provide would assist future evaluations of whether they met their goals, but most models of care did not include this information. For example, all 15 D-SNPs stated in their models of care that they planned to conduct health risk assessments for beneficiaries within 90 days of enrollment and an annual reassessment, as they are required to do by CMS. However, only 4 provided information on how many members had actually completed a health risk assessment or reassessment in prior years, with cited completion rates for 2010 ranging from 52 to 98 percent. In addition, none of the D-SNPs we reviewed indicated in their models of care how many beneficiaries were expected to receive add-on services, such as social support services, that were intended for the most-vulnerable beneficiaries.

The models of care we reviewed did include, as required in the model-of-care scoring criteria, information about how the D-SNP identifies the most-vulnerable beneficiaries in the plan and the add-on services and benefits that would be delivered to these beneficiaries. D-SNPs' models of care described a variety of methods used to identify these beneficiaries: health risk assessments (10 D-SNPs), provider referrals (9), and hospital admissions or discharges (7). However, the models of care generally did not indicate how many or what proportion of beneficiaries were expected to be among the most vulnerable, although one D-SNP's model of care stated that complex-care patients constituted over one-third of its membership. Furthermore, it was sometimes unclear whether the services described as targeted to these beneficiaries were in addition to those available to all dual-eligible beneficiaries in the D-SNP.
D-SNPs also described the services that they plan to offer to the most-vulnerable beneficiaries, the most frequent being complex/intensive case management (6 D-SNPs). Other services D-SNPs planned to offer the most-vulnerable beneficiaries included 24-hour hotlines, social support services, and supplemental benefits beyond what was planned to be offered to all dual-eligible beneficiaries in the D-SNP.

CMS guidance also requires D-SNPs to describe how they intend to evaluate their performance and measure outcomes in achieving the goals identified in their models of care, but CMS does not stipulate the use of standard outcome or performance measures in the model of care, such as measures of patient health status and cognitive functioning. As a result, it would be difficult for CMS to use any data it might collect on these measures to compare D-SNPs' effectiveness or evaluate how well they have done in meeting their goals. Furthermore, without standard measures, it would not be possible for CMS to fully evaluate the relative performance of D-SNP models of care. While it is not required, an evaluation of D-SNPs could both help to improve the D-SNP program and inform other initiatives to better coordinate care for dual-eligible beneficiaries.

The models of care we reviewed had little uniformity in the measures plans selected. Four D-SNPs discussed their approach to performance and health outcome measurement largely in general terms, such as describing which datasets they would use or the categories of outcomes that would be measured. The other 11 D-SNPs provided specific measurements, which included items such as readmissions, emergency room utilization, and receipt of follow-up calls after inpatient stays. Were CMS to move to a standard set of performance and outcome measures, it could be less burdensome and no more costly than what some D-SNPs currently collect. Using standard measures could also streamline the models-of-care review process.

Of the 15 D-SNPs we interviewed, 9 were in organizations that offered both D-SNPs and other MA plans, and representatives from 7 of those D-SNPs told us that their care coordination services are different from those in their organization's other MA plan offerings. For example, a representative from one D-SNP told us that while care coordination and case management were available in both types of plans offered by that organization, dual-eligible beneficiaries in the D-SNP are continuously enrolled in case management, whereas dual-eligible beneficiaries in other MA plans who need these services receive them for only a limited time. A representative from another D-SNP said that the plan provides care coordination services similar to those of other MA plans offered by its organization but that dual-eligible beneficiaries in the D-SNP who need these services are identified faster than are dual-eligible beneficiaries in the other MA plans. A representative of a third D-SNP said it has a community resource unit, not available in other MA plans offered by its organization, that works with local agencies such as long-term care providers and adult protective services.

Multiple representatives of the 15 D-SNPs we interviewed described their care coordination services as being "high touch"—meaning that the plans, particularly the case managers, have frequent interaction with dual-eligible beneficiaries in the D-SNP. For example, representatives from one D-SNP told us that its plan includes in-person meetings with case managers.
Representatives from another D-SNP described several specific examples of care coordination successes, such as a case manager following up on a beneficiary's Medicaid reenrollment application to ensure that the beneficiary did not lose eligibility, and another situation in which a case manager worked through the complex social and housing needs of a beneficiary who had both physical and mental health issues. Representatives from a third D-SNP noted that they have providers who conduct home visits to help prevent hospitalization.

CMS stated that contracts between D-SNPs and state Medicaid agencies are an opportunity to increase benefit integration and care coordination. However, only about one-third of the 2012 contracts we reviewed contained any provisions expressly providing for D-SNPs to deliver Medicaid benefits, thereby achieving benefit integration. Only about one-fifth of the contracts expressly provided for active care coordination between D-SNPs and Medicaid agencies, which indicates that most care coordination was done exclusively by D-SNPs, without any involvement of state Medicaid agencies. Further, D-SNP representatives and state Medicaid officials expressed concerns about the resources needed to contract with D-SNPs and uncertainty about the future of D-SNPs.

The 2012 D-SNP contracts with state Medicaid agencies we reviewed varied considerably in their provisions for integration of benefits and state payments to D-SNPs for covering specific services. According to CMS, "[t]his variability is to be expected, as States and MA organizations can develop agreements for [plans] to assume responsibility for providing or arranging for a wide range of Medicaid services based on each State's ability and interest in integrating its Medicaid program with Medicare via a SNP."

Thirty-three percent of the 124 D-SNP contracts with state Medicaid agencies for 2012 that we reviewed expressly provided for the delivery of at least some portion of Medicaid benefits, thereby integrating Medicare and Medicaid benefits. The contracts varied in the extent of the Medicaid benefits for which a plan was responsible. About 10 percent provided a limited number of Medicaid services, such as dental or vision benefits. In contracts where there was some integration of Medicare and Medicaid benefits, states contracted for different services, making comparisons among the contracts difficult. Of the 23 percent that integrated most or all Medicaid benefits, 64 percent of D-SNPs provided all Medicaid benefits, including long-term care support services in community settings and institutional care; 25 percent provided most Medicaid benefits, including long-term support services in community settings but not institutional care; and 11 percent provided most Medicaid benefits but did not provide any long-term support services or institutional care. (See fig. 5.)

Sixty-seven percent of contracts between D-SNPs and state Medicaid agencies did not expressly provide for D-SNPs to cover Medicaid benefits. To carry out MIPPA's requirement that each D-SNP contract provide or arrange for Medicaid benefits to be provided, CMS guidance has required that contracts list the Medicaid benefits that dual-eligible beneficiaries could receive directly from the state Medicaid agency or the state's Medicaid managed care contractor(s). For D-SNPs contracting with state Medicaid agencies to provide all or some Medicaid benefits, the capitated payment reflected variation in coverage and conditions.
One state that contracts for all Medicaid benefits except a limited number of services, including long-term care services, paid the D-SNP at a rate of $423 per member per month. Another state, which contracted for a limited number of benefits, including Medicare-excluded drugs, expanded dental coverage, and case-management services, paid the D-SNP $132 per member per month; this state's Medicaid agency retained responsibility for inpatient hospital services and long-term care coverage. Some contracts, rather than stating a single capitation rate, gave payment rates for different categories, including risk or acuity level, beneficiary age, and service location, as well as whether the beneficiary was designated as nursing home eligible and whether services for these beneficiaries were provided in a community or facility setting. Within one state, payment rates ranged from just under $170 per month for dual-eligible beneficiaries who were neither nursing home eligible nor had a chronic mental health condition and were living in the community to over $8,600 per month for dual-eligible beneficiaries residing in a nursing facility and requiring the highest level of care.

Some of the 2012 contracts providing for payments from state Medicaid agencies to D-SNPs did not address the direct provision of benefits, often providing instead for payments to the D-SNP for assuming the state's responsibility for paying dual-eligible beneficiaries' Medicare copayments, coinsurance, and deductibles. These payments ranged from $10 to $60 per member per month.

While all contracts between D-SNPs and state Medicaid agencies for 2012 provided for some level of care coordination for beneficiaries, approximately 19 percent expressly provided for active coordination of beneficiary services between the D-SNP and the state Medicaid agency. Most active coordination occurs when dual-eligible beneficiaries transition between care settings or between Medicare and Medicaid. Thirteen percent of all contracts contained provisions requiring D-SNPs and the state Medicaid agency to coordinate the transition of beneficiaries between care settings (such as hospital to nursing home) within a given time frame. For example, one state's D-SNP contracts directed the plans to notify the Medicaid service coordinators or agency caseworker, as applicable, no later than 5 business days after a dual-eligible beneficiary had been admitted to a nursing facility. The other 6 percent of contracts included provisions for different coordination activities, such as requiring the plan to work with Medicaid staff to coordinate delivery of wrap-around Medicaid benefits. The remaining 81 percent of all contracts did not specifically address D-SNPs' coordination with state Medicaid staff, such as case managers. Rather, these contracts indicated that the D-SNP would coordinate Medicaid and Medicare services but did not specify the role of the state Medicaid agency in coordinating those services. Because D-SNPs are required by Medicare to provide care coordination services to dual-eligible beneficiaries, these services are often provided without reimbursement or payment from the state Medicaid agency.

D-SNP representatives and state Medicaid officials we spoke with reported that contract development and submission to CMS are resource-intensive. State officials reported that because they had limited resources, they needed to balance the benefits of the contract with the time and resources needed to develop and oversee it.
As one state Medicaid official said, the state's "bandwidth"—its resources—was a challenge, and she was concerned about contracting with the large number of D-SNPs in her state. This official added that the state did not want to be in the position of making contractual commitments that could not be honored because of limited funds or other resources. In contrast, the plan representatives we interviewed expressed interest in continuing to operate D-SNPs and were therefore eager to contract with states despite any challenges that might exist. Beginning in 2013, D-SNPs will not be permitted to operate without state contracts.

Representatives from 12 of the 15 plans and officials from 3 of the 5 state Medicaid agencies we spoke with pointed out that establishing a contract between Medicaid and a Medicare plan highlights conflicts between federal and state requirements. A representative from one D-SNP told us that it was challenging for plans and state Medicaid agencies to agree on the characterization of dual-eligible beneficiaries because Medicare and some states have different definitions. Officials from one state Medicaid agency and D-SNP representatives reported difficulty reconciling the difference between the Medicare contracting cycle, which is based on the calendar year, and the fiscal year contracting cycle for their states. They reported that, if a contract would not cover the entire calendar year, CMS would not approve it. In one case, a state Medicaid official reported that CMS's deadline of July 1, 2012, for 2013 contracts would occur before the state signed contracts for 2013. Sometimes non-Medicaid state structures conflict with CMS's contracting requirements for D-SNPs. A representative of one D-SNP told us that Medicaid benefits for individuals with developmental disabilities were managed through a contract with the state's family services agency, not the state Medicaid agency. Therefore, to provide services to this population, the D-SNP had to become a subcontractor to the family services agency. The official said that the D-SNP and the state need to work with CMS to develop a subcontracting relationship that is acceptable. State Medicaid officials and D-SNP representatives reported that they did not always have the resources or the administrative ability to resolve these types of issues before entering into a contract.

Beginning in 2013, D-SNPs must secure a contract with the state Medicaid agency in each state in their service area. To do this, D-SNPs may need to establish new relationships with state officials who, according to the D-SNP representatives we interviewed, sometimes have very limited knowledge of Medicare and its requirements. However, some states have experience with Medicaid managed care, and in some cases D-SNP representatives had previously worked with the state on Medicaid contracts, thereby somewhat easing the transition to working with D-SNPs.

Plan representatives and state Medicaid officials told us that uncertainty about the future made them cautious in contracting. Authority for SNPs to restrict enrollment to special needs populations (such as dual-eligible beneficiaries) currently expires at the end of 2013; SNPs may not continue as a unique type of MA plan if Congress does not extend this authority. Were this to occur, states would lose any advantages they might have gained from investing their resources to work with D-SNPs to integrate benefits and coordinate care.
Furthermore, uncertainty regarding the future of D-SNPs creates uncertainty for the states about how to continue to serve dual-eligible beneficiaries currently enrolled in D-SNPs. Uncertainty about the implementation of state demonstrations under the CMS initiative to align Medicare and Medicaid services—the financial alignment initiative—has made some states hesitant to enter into contracts with D-SNPs. As of June 2012, all proposals had been made available for public comment but CMS had not finalized agreements with the states. Medicaid officials from two states told us that if their proposed financial alignment demonstrations were implemented, D-SNPs in their states would cease to exist. Some states were moving forward with D-SNP contracts while concurrently preparing to shift D-SNPs to a different type of managed care plan if their demonstration proposal is implemented. However, officials from one state told us that they did not have sufficient clarity about the direction of the state Medicaid program in relation to its proposed demonstration to enter into contracts. Even in those states where demonstrations would not eliminate D-SNPs, contracting challenges, as well as potential financial incentives associated with the demonstrations, create disincentives for states to work with D-SNPs outside of the financial alignment initiative and therefore leave the future of D-SNPs in question in these states as well. D-SNPs have the potential to help beneficiaries who are eligible for both Medicare and Medicaid navigate these two different systems and receive the health services that meet their individual needs. However, CMS has not required D-SNPs to report information that is critical to better holding plans accountable and determining whether they have realized their potential. Although the models of care D-SNPs must submit to CMS generally state what these plans intend to do, they do not all report the number of services they intend to provide. For example, plans are not required to report the number of enrollees they expect to designate as most vulnerable, or how many and which additional services they will provide to these enrollees. Although D-SNPs are required to collect performance and outcome measures, they are not required to use standard measures such as existing measures of hospital readmission or patient health status and cognitive functioning. Further, they are not required to report these measures to CMS, and, lacking standard measures, it would in any case be difficult to compare D-SNPs’ effectiveness. Standardizing these measures should have a minimal effect on D-SNPs’ administrative efforts, because additional measures could replace some or all of the measures currently used as well as much of the narrative in models of care. Standardizing measures could also reduce CMS’s administrative efforts by streamlining review of D-SNPs. Additional standardized information would allow CMS to meet its accountability goals for the effective and efficient use of resources. Further, CMS has neither evaluated the sufficiency and appropriateness of the care that D-SNPs provide nor assessed their effectiveness in integrating benefits and coordinating care for dual-eligible beneficiaries. Nonetheless, CMS is embarking on a new demonstration in up to 26 states with as many as 2 million beneficiaries to financially realign Medicare and Medicaid services so as to serve dual-eligible beneficiaries more effectively.
If CMS systematically evaluates D-SNP performance, it can use information from the evaluation to inform the implementation and reporting requirements of this major new initiative. To increase D-SNPs’ accountability and ensure that CMS has the information it needs to determine whether D-SNPs are providing the services needed by dual-eligible beneficiaries, especially those who are most vulnerable, the Administrator of CMS should take the following four actions: (1) require D-SNPs to state explicitly in their models of care the extent of services they expect to provide, to increase accountability and to facilitate evaluation; (2) require D-SNPs to collect and report to CMS standard performance and outcome measures, to be outlined in their models of care, that are relevant to the population they serve, including measures of beneficiary health risk, beneficiary vulnerability, and plan performance; (3) systematically analyze these data and make the results routinely available to the public; and (4) conduct an evaluation of the extent to which D-SNPs have provided sufficient and appropriate care to the population they serve, and report the results in a timely manner. We obtained comments on a draft of this report from HHS and the SNP Alliance, which represents 32 companies that offer more than 200 SNPs. CMS provided written comments, which are reprinted in appendix I, and technical comments that we incorporated where appropriate. Representatives from the SNP Alliance provided us with oral comments. HHS concurred with our recommendation that CMS should require plans to explicitly state in their models of care the extent of services they expect to provide, and agreed that information about the extent to which D-SNPs provide certain services would increase accountability and facilitate evaluation. HHS also stated that CMS recently began to collect information on the completion of health risk assessments but has not made it public because the information is relatively new. HHS did question the usefulness of quantifying the number of members expected to receive services described in the models of care, stating that the model of care is a framework for indicating how the SNP proposes to coordinate the care of SNP enrollees. However, as we noted in the draft report, we believe such information could be useful in later evaluating whether D-SNPs met their intended goals. HHS also concurred with our recommendation that CMS should require D-SNPs to collect and report standard measures relevant to the populations they serve, and stated that CMS is working to create new measures that will be relevant to dual-eligible beneficiaries in D-SNPs. HHS also stated that CMS currently collects a broad range of standard quality measures, including HEDIS measures, as well as structure and process measures. HHS included in its response a recent Health Plan Management System memorandum that CMS sent to MA organizations, including D-SNPs, which outlined updated reporting requirements for 2013. HHS also noted that in addition to the data it currently collects, CMS requires D-SNPs to conduct both a Quality Improvement Project and a Chronic Care Improvement Project, and asked GAO to note this in the final report. We did not include these projects because, as we noted in the draft report, quality and quality measures were not in the scope of our work. HHS also concurred with our other two recommendations.
In their oral comments, SNP Alliance representatives raised several points about our findings. First, SNP Alliance representatives stated that the benefits D-SNPs provide most frequently are more meaningful to dual-eligible beneficiaries than some of the supplemental benefits provided more frequently by other MA plans. We note in the report that some of the supplemental benefits offered at lower rates by D-SNPs may be covered by Medicaid, thereby reducing the need for them to be covered by D-SNPs. Second, SNP Alliance representatives were concerned with our definition of FIDE SNPs. They explained that CMS’s definition, which we used, restricts FIDE SNPs to those that integrate all Medicare and Medicaid benefits without any limits, such as limits on the number of nursing home days covered. They contended that some D-SNPs may be considered fully integrated even though they do not include all benefits, such as nursing home care, and may have some limits. However, in reporting on CMS activities we have no basis for using different definitions than those formally applied by the agency. Third, SNP Alliance representatives stated that the ability of their D-SNP members to fully integrate benefits through contracting is limited by the capacity and interest of state Medicaid agencies. We note in the report that state Medicaid agencies we interviewed acknowledged limitations in their capacity for contracting. Fourth, SNP Alliance representatives had some concern with our emphasis on estimating how many beneficiaries are expected to receive the services described in the model of care, stating that all dual-eligible beneficiaries would have access to the services described based on need. However, as we stated in the draft report, information is not generally available on the number of beneficiaries who use these benefits. Finally, SNP Alliance representatives were supportive of the state demonstrations under the financial alignment initiative. They noted that D-SNPs are being used as a platform for half of the state demonstrations, with the remainder being based on a Medicaid model. They considered the adoption of the D-SNP model by many of the state demonstrations as evidence of D-SNPs’ success. SNP Alliance representatives generally agreed with our recommendations. They said that they support better aligning reporting requirements with the models of care, and stated that D-SNPs need a set of core measures that are most relevant to the dual-eligible population they serve. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and to interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Phyllis Thorburn, Assistant Director; Ramsey Asaly; George Bogart; Melanie Anne Egorin; Linda Galib; Giselle Hicks; Corissa Kiyan; Elizabeth T. Morrison; and Kristal Vardaman made key contributions to this report.
About 9 million of Medicare's over 48 million beneficiaries are also eligible for Medicaid because they meet income and other criteria. These dual-eligible beneficiaries have greater health care challenges than other Medicare beneficiaries, increasing their need for care coordination across the two programs. In addition to meeting all the requirements of other MA plans, D-SNPs are required by CMS to provide specialized services targeted to the needs of dual-eligible beneficiaries as well as to integrate benefits or coordinate care with Medicaid services. GAO was asked to examine D-SNPs' specialized services to dual-eligible beneficiaries. GAO (1) analyzed the characteristics of dual-eligible beneficiaries in D-SNPs and other MA plans, (2) reviewed differences in specialized services between D-SNPs and other MA plans, and (3) reviewed how D-SNPs work with state Medicaid agencies to enhance benefit integration and care coordination. GAO analyzed CMS enrollment, plan benefit package, projected revenue, and beneficiary health status data; reviewed 15 D-SNP models of care and 2012 contracts with states; and interviewed representatives from 15 D-SNPs and Medicaid agency officials in 5 states. About 9 percent of the dual-eligible population is enrolled in 322 Medicare dual-eligible special needs plans (D-SNP), a type of Medicare Advantage (MA) plan. All dual-eligible beneficiaries are low income, but those in D-SNPs tended to have somewhat different demographic characteristics relative to dual-eligible beneficiaries in other MA plans. On the basis of the most current data available (2010-2011), compared to those in other MA plans, dual-eligible beneficiaries in D-SNPs were more frequently under age 65 and disabled, more likely to be eligible for full Medicaid benefits, and more frequently diagnosed with a chronic or disabling mental health condition. In spite of these differences, the health status of D-SNP enrollees as measured by their expected cost to Medicare was similar to the health status of dual-eligible enrollees in other MA plans in 2010. D-SNPs provide fewer supplemental benefits—benefits not covered by Medicare fee-for-service (FFS)—on average than other MA plans. Of the 10 supplemental benefits offered by more than half of D-SNPs, 7 were offered more frequently by other MA plans and 3 were offered more frequently by D-SNPs. Yet D-SNPs spent proportionately more of their rebate—additional Medicare payments received by many plans—to fund supplemental benefits compared to other MA plans, and less to reduce Medicare cost-sharing, which is generally covered by Medicaid. The 15 models of care GAO reviewed, of the 107 submitted for 2012, described in varying detail how the D-SNP planned to provide specialized services, such as health risk assessments, and meet other requirements, such as measuring performance. However, the Centers for Medicare & Medicaid Services (CMS), which administers Medicare and oversees Medicaid, did not require D-SNPs to use standardized measures in the models of care, which would make it possible to compare the performance of D-SNPs. While D-SNPs are not required to report performance information to CMS, such information would be useful for future evaluations of whether D-SNPs met their intended results, as well as for comparing D-SNPs. CMS stated that contracts between D-SNPs and state Medicaid agencies are an opportunity to increase benefit integration and care coordination.
GAO's review of the contracts indicated that only about one-third of the 2012 contracts contained any provisions for benefit integration, and only about one-fifth provided for active care coordination between D-SNPs and Medicaid agencies, which indicates that most care coordination was done exclusively by D-SNPs, without any involvement of state Medicaid agencies. However, some D-SNP contracts with state Medicaid agencies specified that the agencies would pay the D-SNPs to provide all or some Medicaid benefits. Representatives from the D-SNPs and Medicaid officials from the states GAO interviewed expressed concerns about the contracting process, such as limited state resources for developing and overseeing contracts, as well as uncertainty about whether Congress will extend D-SNPs as a type of MA plan after 2013 and about the implementation of other initiatives to coordinate Medicare and Medicaid benefits for dual-eligible beneficiaries that could replace D-SNPs. To increase D-SNPs' accountability, GAO recommends improving D-SNP reporting of services provided to dual-eligible beneficiaries and making this information available to the public. In its comments on a draft of GAO's report, CMS generally agreed with GAO's recommendations.
To help identify priorities for highway and traffic safety programs, states maintain six core types of traffic safety data systems: vehicle, driver, roadway, crash, citation and adjudication, and injury surveillance (see table 1). Organizations responsible for implementing and maintaining these systems vary among states, but generally include highway safety offices, law enforcement agencies, motor vehicle offices, courts, emergency medical service (EMS) providers, and others. While state funds are generally the primary source of funding to implement and maintain these systems, states also use federal funds. Among SAFETEA-LU programs, the Section 408 grant program provides the most funding authorized exclusively for traffic safety data systems. Administered by NHTSA, this grant program authorized $34.5 million annually from fiscal year 2006 through 2009. For fiscal year 2009, all 50 states and D.C. received funding through the Section 408 grant program, with amounts ranging from $346,262 to $2.3 million. As stated in SAFETEA-LU, the goals of this program are to encourage states to adopt and implement effective programs to (1) improve the timeliness, consistency, completeness, accuracy, accessibility, and integration of traffic safety data; (2) evaluate the effectiveness of efforts to make such improvements; (3) link these state traffic safety data systems with other data systems within the state; and (4) improve the compatibility of the state data system with national and other state data systems to enhance the ability to observe and analyze national trends in crash occurrences, rates, outcomes, and circumstances. To receive funding through the Section 408 grant program, states must meet certain requirements, including establishing a traffic records coordinating committee (TRCC), demonstrating measurable progress toward meeting goals and objectives identified in a multi-year highway safety data and traffic records systems strategic plan, and certifying that an assessment of the state traffic records system has been performed within the last 5 years (see table 2). Among these requirements for the Section 408 grant program, a state TRCC serves to guide and make decisions about traffic safety data systems within the state. The Section 408 grant program requires states to include technical experts on the TRCC, including representatives from highway safety, highway infrastructure, law enforcement and adjudication, public health, injury control, motor carrier agencies, and other stakeholders. In addition to a technical-level TRCC, some states have also established an executive-level TRCC, which can include manager- or director-level—rather than technical—representatives from state organizations. To determine state eligibility for the Section 408 grant program and progress toward meeting goals and objectives set forth in a strategic plan, NHTSA has developed six performance measures of data system quality: timeliness, consistency, completeness, accuracy, accessibility, and integration (see table 3). While performance measure definitions and relative significance may vary for each system within a state depending on the state’s baseline, goals, and objectives, NHTSA officials are working to provide examples of these performance measures to make it easier for states to measure progress. NHTSA expects to finalize these improvements in April 2010. Traffic records assessments are evaluations of states’ traffic safety data systems that include discussions of how systems met NHTSA’s performance measures.
A NHTSA technical team or private sector contractors conduct assessments for states using a “peer” review approach. Technical teams recommended by NHTSA conduct most assessments. The teams are generally composed of five assessors whom states approve to conduct the assessment. These assessors have demonstrated expertise in major highway safety program areas, such as law enforcement, engineering, driver and vehicle services, injury surveillance systems, and general traffic records development, management, and data use. The peer review team generally takes about 5 days to complete an assessment, including interviewing state officials, preparing the assessment report, and conducting a final briefing with state officials (see fig. 1). Assessors and NHTSA officials described the principal document guiding the traffic records assessment process as the Traffic Records Program Assessment Advisory, which was updated in 2006 and, for the purposes of this report, is referred to as the 2006 Advisory. The format of traffic records assessments was updated to reflect changes made to the original advisory. The principal change made to the assessment format is that the sections describing traffic safety data systems are now combined with previously separate sections describing information quality. Besides the Section 408 grant program, SAFETEA-LU authorized other NHTSA grant programs, such as the Section 402 State and Community Highway Safety Grants and the Section 406 Safety Belt Performance Grants, which states can use for any traffic safety purpose, including traffic safety data improvement projects. Also, the Federal Highway Administration (FHWA), the Federal Motor Carrier Safety Administration (FMCSA), and other federal agencies—such as the Centers for Disease Control and Prevention (CDC) and the Department of Homeland Security—have provided support to state traffic safety data projects. For example, the Highway Safety Improvement Program has provided funding to help states achieve a significant reduction in traffic fatalities and serious injuries on public roads through the implementation of infrastructure-related highway safety improvements, which can include traffic safety data projects. One newer program, FHWA’s Crash Data Improvement Program (CDIP), is designed to assist states in developing or improving methods of assessing the quality of their crash data. As part of CDIP, a technical team performs an assessment of a state’s crash data system and then produces a report with recommendations on the establishment of performance measures. FHWA officials reported that after the completion of the assessment, states are eligible to receive up to $50,000 in funding from FHWA to implement recommendations of the report. At the time of this report, the program was in its beginning stages, and three states had participated. Data system quality also varies by performance measure. For example, across all traffic safety data systems, states met the consistency performance measure 72 percent of the time, but met the data integration measure only 13 percent of the time (see fig. 3). The comparatively high level of consistency in state data systems may result from states using uniform reporting forms, such as uniform crash, citation, and EMS reports that are consistent with nationally accepted and published guidelines and standards.
According to state officials, integrating data systems can be difficult because of older, outdated system designs and the challenge of obtaining cooperation from different data managers. Assessors said that integration is difficult to measure and report on. Further, state and other officials described integration as one of the last performance measures that states tend to focus on in creating high-quality traffic safety data systems, while timeliness, accuracy, and completeness are addressed first. In addition to data system quality varying by system type and by performance measure, our analysis revealed differences in the extent to which individual state systems met various performance measures. Vehicle and Driver Systems: Vehicle and driver systems met at least 60 percent of the performance measures; specifically, 38 vehicle systems and 31 driver systems met four or more of the six performance measures. Vehicle systems performed best in the area of timeliness—completely meeting that performance measure in 45 states—while driver systems met the accessibility performance measure in 35 states. State officials cited multiple reasons why state vehicle and driver systems may be high-performing compared to other data systems, such as (1) these data systems need to be reliable and customer-oriented since the public has contact with the systems through vehicle registrations and driver license applications; and (2) these data systems generate revenue for states through fees and other charges for vehicle and driver licenses. In one state we visited, revenue collected through the Bureau of Motor Vehicles for motor vehicle licenses and fees amounted to over $90 million in 2009. Despite the general ability of driver and vehicle systems to meet most performance measures, only seven state driver systems and five state vehicle systems met the integration performance measure. State officials said that integrating driver and vehicle systems with other traffic safety data systems is difficult due to the age of some systems. For example, in one state we visited, the vehicle database is 30 years old and has no ability to electronically communicate or integrate with other data systems. In addition, 31 state driver systems met the performance measure for completeness of data. Based on assessments we reviewed, one reason why not all states had complete driver data may be that some states do not collect previous driver histories from other states for non-commercial drivers. To meet the performance measure for completeness, driver histories must be included for all licensed drivers, in particular any adverse actions received by drivers in other states, either while licensed elsewhere or while driving in other states. In addition, having complete records for drivers promotes safety for law enforcement officers conducting roadside traffic stops. For example, an officer can determine whether the driver he or she has pulled over has an outstanding arrest warrant or a suspended license, and, with access to vehicle data, can find out if the driver is in a stolen vehicle. With this information the officer can better prepare for the interaction, whereas the officer may be more at risk without it. Roadway Systems: Roadway systems met almost half of the performance measures; they performed best in consistency—38 states met that performance measure—but fewer than half of the states met the performance measure of completeness. According to one assessor and a state official, roadway data play an important role in state planning.
This may lead some states to collect such data consistently. However, in several states we visited, state officials collected and inventoried roadway characteristics for state-maintained roadways but collected less data for locally maintained and other roadways, which may contribute to roadway data incompleteness. Nationally, locally maintained roads account for about 77 percent of all public roads, while state-maintained roads represent about 20 percent of the total road mileage. In Idaho, of the over 47,000 miles of roadway in the state, the Idaho Transportation Department is responsible for collecting and maintaining data on about 5,000 miles of these roads. The remaining approximately 42,000 miles are the responsibility of local road authorities. As GAO has previously reported, most states have not developed roadway inventory data for locally maintained roads because they do not operate and maintain those roads and are concerned about the costs and time frames involved in collecting the data. In addition, state officials reported that they collect only the amount of data on locally maintained roads that is required for the national database—the Highway Performance Monitoring System (HPMS)—which consists of all data collected and updated by states on selected highway segments across the United States. Because of this, officials said, detailed data are not collected for all roadways. One effect of incomplete roadway data is that it can make identifying hazardous locations from crash location data difficult or impossible; incomplete data can also prevent states from fully identifying and reporting on potential remedies for hazardous locations and estimating the costs of those remedies. Crash Systems: While state crash data systems met about as many performance measures as not, our analysis showed, and we have previously reported, that state crash data systems varied considerably in the extent to which they met NHTSA’s performance measures. For example, crash data systems in five states met all six performance measures, while systems in six states did not meet any of the performance measures. In addition, systems in 27 states met two or fewer of the six performance measures. Also, according to our analysis, crash systems in 32 states met the consistency performance measure. Several of the states that we visited had uniform crash report forms used by law enforcement to report vehicle crashes, which may contribute to the consistency of crash data. Traffic records assessors, NHTSA officials, and state officials said that states have tended to focus on improving crash data systems, in part because crash data have a clearer link to improving public safety than other traffic safety data systems. However, systems in 23 states did not meet the performance measure for crash data accuracy. Manual data entry and a lack of electronic edit checks can lead to less accurate data, which can inhibit meaningful analysis. For example, in one state we visited, law enforcement officers provided incorrect longitude coordinate data using Geographic Information Systems (GIS) equipment. This human error resulted in inaccurate crash location data; in multiple instances, the computer program located crashes in China. Citation and Adjudication Systems: For citation and adjudication systems, about as many performance measures were met as were not met; specifically, systems in 18 states met four or five performance measures, while systems in 21 states met one or none of the six performance measures.
However, 17 percent of the time, the extent to which a performance measure was met was unknown. Citation and adjudication systems performed best in consistency—38 systems met this performance measure. Similar to crash data, the adoption of uniform citation forms may have improved consistency for this system. However, about half of state citation and adjudication systems did not meet the accessibility and completeness performance measures, and only one state met the integration performance measure. These performance measures may be difficult for some states to meet due to the high number of jurisdictions that states rely on to report data or because a statewide citation system may not exist. For example, Georgia officials said that the state has nearly 800 different courts—about 400 of which are municipal courts, which handle most traffic violations—each with its own court data system. There is no comprehensive collection of citation data in the state, and the state has a limited ability to require jurisdictions to submit data. Georgia officials said that citation and adjudication data are relatively incomplete because some courts do not report all data. Also, if a state does not have an electronic citation system, even police departments with the ability to submit citations electronically must submit their citations on paper. For example, a law enforcement officer from one state we visited said that his department has the capability to electronically submit citations, but must still print out citations to submit them to the state because the state is not able to electronically receive citations. Injury Surveillance Systems: Less than half of the performance measures were met, and, similar to the citation and adjudication systems, the extent to which performance measures were met was unknown 17 percent of the time. Systems in 39 states met zero to three performance measures, while systems in 12 states met four to six. In addition, within injury surveillance systems, 25 states met the performance measure for accuracy and 30 states met the performance measure for consistency. This may be attributed to training provided to those responsible for data entry. For example, one state hospital administration provides training to data entry staff on how to enter cases into the state data system properly. In contrast, 40 state injury surveillance systems did not meet the performance measure for integration. Assessors and one state official reported that the multiple components necessary for a state injury surveillance data system make meeting various performance measures more difficult than for other data systems. According to the 2006 Advisory for traffic records assessments, a complete injury surveillance system, which tracks injury causes, magnitude, costs, and outcomes, typically has five components: pre-hospital (i.e., EMS), trauma, emergency department, hospital in-patient/discharge, and rehabilitation. Officials said that maintaining multiple components often requires that several departments contribute data, which can make data management difficult. For example, in Minnesota, the EMS Regulatory Board collects EMS data, the Minnesota Hospital Association collects patient discharge information, and the Minnesota Department of Health maintains the Minnesota Trauma Data Bank, which contains trauma and mortality data; all of these data are reported to the Minnesota Department of Health.
In addition, systems in several states have only some components of a fully functioning injury surveillance system in place or have system components that are just in the beginning stages of development. Although NHTSA’s implementing guidance for the Section 408 program states that a traffic records assessment should be an in-depth, formal review of a state’s highway safety data and traffic records system, our analysis revealed instances where assessments were incomplete or inconsistent. We assigned “unknown” codes where no other categorization was possible due to limited or absent information in a state traffic records assessment, which includes both incomplete and inconsistent performance measure descriptions. Our analysis found that 49 of 51 traffic records assessments had at least 1 area out of 36 (six state traffic data systems multiplied by six performance measures) for which the extent to which a system met a performance measure was unknown. Incomplete or inconsistent information could limit the usefulness of these assessments to state officials and make it difficult to ascertain the full extent of data system quality. NHTSA officials said that they review traffic records assessments for quality and that they have accepted all state assessments as adequate to fulfill the statutory requirement included in NHTSA’s Section 408 grant program implementing guidance. NHTSA officials said that they are currently beginning work with a contractor to study the assessments. While the contract to study the assessments includes a component to examine state traffic records assessments for effectiveness and utility, the main objective is to review state traffic records programs and data systems from states that have had at least two traffic records assessments and identify any improvements or degradations that occurred between the two assessments. In addition to the contract, NHTSA officials reported starting other activities, which will include updating related advisory documents, increasing participation of other DOT administrations, aligning traffic records assessments with other similar NHTSA program assessments, determining the most effective frequency for requiring assessments, incorporating all performance measures identified in advisory documents, and developing a more robust list of assessors for states. As these efforts are in the beginning or planning stages, it is too soon to tell how they will affect the traffic records assessment process. Our review showed that, for those traffic records assessments that had any unknown areas, the number of unknown areas ranged from 1 to 18 out of a possible 36, but most assessments had five or fewer unknown areas. Of the 49 assessments we coded with unknown areas, 27 had between 1 and 3 unknown areas and 6 had 10 or more (see fig. 4). Out of the total 1,836 codes that we assigned across all 51 assessments, 226 (about 12 percent) were coded as unknown. Our coding analysis revealed that the frequency of unknown areas is greater in the updated assessment format than in the prior assessment format. Of the 51 assessments we reviewed, 11 (22 percent) were in the updated assessment format. Despite the lower number of assessments in the updated format, proportionally, we coded about three times as many areas as unknown in the updated assessment format as in the prior format.
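To illustrate the arithmetic of this coding scheme, the following short Python sketch tallies "unknown" codes across the 36 possible areas per assessment (six data systems multiplied by six performance measures). The sketch is provided for illustration only; the category names and data layout are assumptions, not the coding instrument actually used in our analysis.

SYSTEMS = ["vehicle", "driver", "roadway", "crash",
           "citation_adjudication", "injury_surveillance"]
MEASURES = ["timeliness", "consistency", "completeness",
            "accuracy", "accessibility", "integration"]

def tally_unknowns(assessments):
    # `assessments` maps each state to a dict keyed by (system, measure)
    # tuples with values such as "met", "partially met", or "unknown";
    # areas absent from an assessment are treated as unknown.
    per_state = {}
    for state, codes in assessments.items():
        per_state[state] = sum(
            1 for s in SYSTEMS for m in MEASURES
            if codes.get((s, m), "unknown") == "unknown")
    total_codes = len(assessments) * len(SYSTEMS) * len(MEASURES)
    total_unknown = sum(per_state.values())
    return per_state, total_unknown, total_codes

# With 51 assessments, total_codes is 51 x 36 = 1,836; our review found
# 226 unknown codes (about 12 percent), and 49 of the 51 assessments
# had at least one unknown area.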
The updated traffic records assessment format, which is based on the 2006 Advisory, is less closely tied to NHTSA’s Section 408 grant program implementing guidance than the prior format was. The 2006 Advisory describes what characteristics state traffic safety data systems should have, but unlike NHTSA’s implementing guidance and the prior advisory, in several areas it does not include a discussion of each of the six performance measures as they relate to each of the six data systems. For example, the 2006 Advisory notes that data should be timely and includes an example of a quality control measure for timeliness, but unlike the prior advisory, does not establish a specific time frame by which timeliness can be assessed. The 2006 Advisory also does not expressly discuss the accessibility performance measure for five of the six traffic safety data systems. This means that for five of the six data systems, the 2006 Advisory addresses only four of the six performance measures. As previously noted, several assessments were incomplete, meaning that there was not enough information provided to determine the extent to which a state had met a performance measure. In one instance, an assessment lacked performance measure evaluation information on the state’s entire injury surveillance system because “representatives of the various medical data systems were not present during Traffic Records Assessment. Therefore no information related to timeliness, consistency, completeness, accuracy, accessibility, and integration…could be presented in report.” In other instances, we were unable to make a determination based on the information provided in the assessment. For example, in 14 traffic records assessments it was unclear whether citation and adjudication data were timely. In another assessment, the timeliness of injury surveillance data was explained as the timeliness of EMS arrival time, as opposed to the timeliness of when the injury data are available for analysis. Incomplete injury surveillance data may lessen a state’s ability to track injury causes, magnitude, costs, and outcomes. Some incomplete assessments may result from differing views between NHTSA officials and traffic records assessors on the value of various performance measures. According to NHTSA officials, the six performance measures across the six data systems are of equal importance in the context of assessing a state’s qualification for subsequent-year Section 408 grant funding. NHTSA officials also said that making progress in one data system or performance measure is not more highly valued than making progress in another. Additionally, NHTSA officials said that part of the value of assessments is that they provide information on all areas of states’ traffic safety data systems. In contrast, some of the assessors we interviewed questioned the value of some or all of NHTSA’s performance measures for the various state traffic safety data systems. For example, some assessors said that information on the integration performance measure was not valuable because it is difficult to measure. Others said that injury surveillance data assessors focus on integration more than the other performance measures and that one of the most important findings in the injury surveillance section of a traffic records assessment is how the injury surveillance system integrates with other traffic safety data systems.
Furthermore, some assessors reported that they do not evaluate certain performance measures if it appears that nothing has changed in a state since the last assessment, and that some performance measures and traffic safety data systems are not as important as others. As noted previously, the principal document used by assessors as a guide for the traffic records assessment process is the 2006 Advisory. The purpose of this document is to provide states with guidance on the necessary contents, capabilities, and quality of data in a traffic records system and to describe an ideal system, not to describe what information should be included in a traffic records assessment. Furthermore, as opposed to the previous advisory, the 2006 Advisory explicitly discusses some, but not all, of the six performance measures for each traffic records system. Given that, per Section 408 grant program requirements, assessments are conducted every 5 years, there is merit in having clearer guidance directing that assessments address all performance measures to update state officials on their traffic safety data systems, even if such an update explains that nothing has changed since the last assessment. In addition to completeness concerns, some traffic records assessments are inconsistent, meaning that information provided in one part of the assessment describing the extent to which a state met a performance measure conflicts with information provided elsewhere in the assessment. For example, one assessment described the performance measure of consistency as “…not to be an issue in that a uniform citation is used and there are a relatively small number of police agencies …that submit traffic citations.” However, later on the same page, in the accuracy section, it was noted that “The Court indicated that officer reporting is not consistent and more training is needed to assure that charging documents and affidavits of probable cause are completed correctly. Additional training could help to assure uniformity of submissions.” Another assessment explained, “Information provided during the assessment interviews indicated that the data are timely; the latest data available for analysis is 2006.” Upon review, the available data were at least a year old, since the traffic records assessment was conducted in 2008. However, NHTSA guidance suggests that all injury data be available in a time frame comparable to that for the crash data, which is preferably within 90 days of a crash. Despite these limitations, traffic records assessments remain vital in helping states identify problems, develop plans, and prioritize projects to improve traffic safety data systems (see fig. 5). For example, Minnesota officials used recommendations made in a traffic records assessment, along with the strategic plan, to prioritize traffic records projects. NHTSA officials emphasized that, in addition to being useful to states for making traffic records improvements, traffic records assessments are valuable for strategic planning purposes. NHTSA officials added that the traffic records assessment process is important because it provides an independent look at the quality of traffic safety data systems, helps determine where priorities should lie, and guides states on targeting limited resources. Assessors and state officials also emphasized the value of traffic records assessments for states.
For example, one assessor said that traffic records assessments serve as a tool and guideline for how states can move forward with traffic safety data systems and promote a data-driven approach by balancing stakeholder interests with priorities highlighted by the data. In contrast, state officials reported that while information captured in traffic records assessments is useful, the more specific information included in FHWA CDIP assessments, such as problem identification, definitions of performance measures, and data analysis recommendations, has additional benefits. Although CDIP began in 2008 and only three states have participated so far, officials in states where both a traffic records assessment and a CDIP assessment were conducted said that the information included in CDIP assessments was more in-depth and specific. CDIP assessments are conducted in a similar manner as traffic records assessments, take roughly the same amount of time to conduct, and cover all six performance measures identified in NHTSA’s implementing guidance, but focus only on a state’s crash data system. CDIP assessments include recommendations and particular steps or methods states can take to potentially improve their crash systems. By contrast, assessors identify problems in traffic records assessments, but state officials said that traffic records assessments generally do not provide specific strategies for improving the traffic safety data systems. State officials reported that an assessment with information as specific as that provided in CDIP assessments would be valuable to have for each of their traffic safety data systems. In addition, several state officials said that insufficient time is spent conducting traffic records assessments to produce an in-depth, detailed report. In one state, assessors spent only 10 minutes with the team representing one of the six data systems. Information collected by NHTSA from the states shows that 49 states and D.C. have demonstrated progress in improving the quality of their traffic safety data systems; collectively, states have demonstrated progress in all six traffic safety data systems, as well as across all six performance measures. It is important to note that reported state progress is not equivalent to achieving a high-quality traffic safety data system; rather, such progress represents steps toward that end goal. Of the possible 36 areas in which to demonstrate progress, by system and by performance measure, states demonstrated progress to NHTSA in 23 areas from fiscal year 2008 through fiscal year 2009 (see table 4). To remain eligible for Section 408 grant funding, states must demonstrate measurable progress related to achieving the goals and objectives of a state’s multi-year highway safety data and traffic records strategic plans. NHTSA officials reported that states can fulfill this requirement by demonstrating progress in one performance measure for one data system per year. For example, a state might report progress involving the performance measure of completeness within the roadway data system. States can and have reported more than one area of progress. NHTSA does not require states to report all progress toward improving traffic safety data systems and, as a result, states may be making progress that is not reported. Additionally, NHTSA does not always accept every area of progress that a state reports if the state demonstrates sufficient progress in at least one area; therefore, state progress may be understated.
Sometimes NHTSA cannot verify that progress has taken place in all reported areas, due to a lack of evidence or incomplete information. For example, Maine officials reported five areas of progress to NHTSA for fiscal year 2009, and NHTSA officials accepted four of those areas. West Virginia officials reported four areas of progress, one of which NHTSA officials accepted. While NHTSA officials reported that demonstrated progress does not represent all progress that states are making, it serves as a useful approximation of the areas in which states are making progress. Moreover, NHTSA officials said that, with regard to qualifying for Section 408 grant funding, the most important development is that states are making some progress in improving traffic safety data systems. State progress for the 2 most recent fiscal years may reflect some trends identified by our analysis of the extent to which state traffic safety data systems met NHTSA performance measures. For example, states demonstrated the least progress in the vehicle and driver data systems (7 of the 164 total areas of progress listed in table 4). This may reflect that vehicle and driver systems already met most performance measures, as shown in our coding analysis. In contrast, states have demonstrated progress for crash data systems more often than for other systems. Of the 164 instances in which states demonstrated progress, 89—over half—involved improvements to state crash data systems. This may indicate heightened state efforts to improve crash data systems due to these systems not meeting various performance measures, as shown in our analysis. Furthermore, state and NHTSA officials, as well as assessors, reported that states have focused on improving crash systems. Progress has resulted from states pursuing small- and large-scale projects to improve traffic safety data systems. For example, some progress has resulted from smaller-scale projects, such as printers for citations or online tutorials. NHTSA officials said that they have encouraged states to use Section 408 grant program funding to support near-term, quick projects, recognizing that large-scale projects might require significant additional time or funds. However, some state officials said that smaller-scale projects are less likely to immediately lead to substantial improvements in the overall quality of state traffic safety data systems. Support for large projects also depends on state funding in addition to Section 408 grant program funding awarded to a state. For example, Virginia has expended over $900,000 in state and local funding on the Traffic Record Electronic Data System (TREDS) project, which, among other things, integrates federal, state, and local data; provides law enforcement the ability to collect and submit crash data electronically; reduces manual entry of data; provides enhanced analysis capabilities; and increases accessibility for data users. Thus far, NHTSA has awarded Virginia approximately $2.5 million in Section 408 grant funding. For the states that we visited, federal assistance has helped improve traffic safety data systems. Officials in all eight states that we visited stressed the important role of the Section 408 grant program in improving traffic safety data and have used this and other federal funding to implement projects.
Officials reported that while state funding makes up the majority of support for traffic safety data projects, without Section 408 grant program or other federal funding some projects would have happened much more slowly, or not at all. NHTSA officials estimated that for every dollar provided through Section 408 grant funding, states spend an additional $4 on traffic safety data projects. Below are examples of state projects that have used federal funding. Timeliness–Several states have implemented or are currently working on projects to transition from manual to electronic reporting of data. Electronic reporting reduces reliance on paper processes and can increase the speed of submission and the eventual availability of data for analysis. Minnesota has undergone such a transition for crash data. Minnesota officials said that in 2009 over 90 percent of the state’s crash reporting was submitted electronically to its crash database. This includes all crash reports from Minnesota’s State Highway Patrol. Electronic submission has helped the state submit and finalize all data in Minnesota’s crash database within 30 days. This represents an improvement over the 6-week backlog in entering crash data that Minnesota experienced in 2003. Completeness–To improve the completeness of crash data, officials in three states that we visited reported using diagram software to help law enforcement officers depict crashes. Officers generate these diagrams by entering information electronically at the scene of a crash (see fig. 6). The diagram increases completeness by including visual information such as the position of the vehicle(s), location of damage, intersection layout, and other crash features, such as trees and pedestrians. Using crash diagram software, officers can edit information before completing and submitting the diagram as part of the crash report. Consistency–Some states have improved consistency by adopting uniform reporting forms and increasing compliance with national guidelines. In 2007, Virginia revised its crash data collection form using guidance from NHTSA and guidelines captured in the Model Minimum Uniform Crash Criteria. Virginia officials reported that the form revision increased compliance with the Model Minimum Uniform Crash Criteria from 55 to 80 percent. Also, Georgia’s Emergency Medical Services Information System has used a revised form that includes approximately 300 data elements, as opposed to the previous form, which had 103. This revised form is “gold compliant” with National EMS Information System guidelines. Approximately 30 percent of Georgia’s EMS agencies are still using the previous forms, but state officials expect a continued transition to the new form. Accuracy–To improve the accuracy of roadway data, including roadway features such as bridge locations, some states have explored projects using GIS and other technology. Maine’s Department of Transportation has created the Maine Department of Transportation Map Viewer System, which will eventually become available to a variety of state data users. This system integrates existing GIS technologies into a viewer screen where users can view roadway data and update information to increase accuracy. Users of the viewer system can also select and change which data are displayed and view photographs of a particular section of roadway to illustrate local features (see fig. 7). In another example, one law enforcement jurisdiction that we interviewed installed video data recorders in police vehicles.
These devices record scenes to the front or rear of a vehicle. Uses of these recorders include reviewing crash footage to verify information and ensure that crash, driver, and vehicle data are accurate (see fig. 8). Accessibility–Several state and local jurisdictions we met with, including those in Maine, Minnesota, and Ohio, have completed projects to make traffic safety data more accessible to users. For example, data captured by Ohio’s Location Based Response System (LBRS) are available to data users and other citizens on the Internet. Ohio officials reported that this has increased the accessibility of roadway information and reduced public requests for roadway data. In addition to state Department of Transportation officials, LBRS users have included county emergency management agencies, utilities, and county engineers. In addition to LBRS, other projects have included jurisdictions incorporating new technologies to make crash, driver, citation, and vehicle data more accessible in law enforcement vehicles. Figure 9 provides examples of other completed projects that have increased the accessibility of various data systems for law enforcement officials. Integration–To improve the integration of traffic safety data systems with one another, 19 states participate in the Crash Outcome Data Evaluation System (CODES) effort. Facilitated and supported by NHTSA, CODES seeks to better link and otherwise integrate crash and injury surveillance data. Such integration can help state officials better understand the medical consequences of traffic crashes and the types of injuries that certain crashes are likely to produce. Of the states that we visited, Georgia, Maine, Minnesota, Ohio, and Virginia participate in the CODES project. Ohio officials reported that the most extensive linkage between injury surveillance systems in the state has happened through the CODES program, which has established links between EMS, trauma, and crash data. Virginia officials cited CODES as helping the state submit and link data with other organizations, including the Department of Motor Vehicles. While states have demonstrated progress, a number of overarching challenges to improving traffic safety data systems exist, due in part to the complexity and multifaceted nature of establishing traffic safety data systems. The Section 408 grant program is designed to improve six, oftentimes completely separate, state traffic safety data systems. We have previously reported that overhauling one outdated data system can be both challenging and expensive, particularly when integrating a new system with existing legacy systems. State officials in all states that we visited also reported that just maintaining one data system requires significant funding, time, or other limited resources. Therefore, trying to make simultaneous improvements to multiple traffic safety data systems can magnify these challenges. Limited Resources: Officials in all the states that we visited identified limited resources as a significant challenge in state efforts to improve traffic safety data systems. Some of the most frequently cited limitations in funding and human capital resources are discussed below. Limited funding. According to state officials, making improvements to one data system can cost tens of millions of dollars. Therefore, obtaining the funding necessary to make improvements to six state traffic safety data systems is a challenge.
As we previously reported, while traffic safety data grants have provided states with funding to improve traffic safety data systems and complete associated projects, the cost of developing and maintaining data systems can exceed Section 408 program grant amounts. While state officials reported that state funding supports most of the cost of traffic safety data projects, NHTSA officials and officials in five of the eight states we visited indicated that traffic safety data system improvements are not among the highest state priorities due to budgetary constraints or limited interest. The recent economic recession has amplified state funding limitations for data projects. Moreover, a state’s legislative process may delay funding for traffic safety data projects. Even in instances where funding is available, some traffic safety data improvements require state legislative action or approval to move forward on contracting, design, and implementation processes. Infrequent state legislative sessions can heighten delays in receiving approval to spend awarded federal funding. For example, according to state officials, the legislature in one state we visited meets every other year, which can delay approval to spend federal grant and other funding on traffic safety data projects and contribute to the carryover of funds. In another state, major technology projects must first be approved by the state’s information technology authority. The project planning involved in obtaining state approval can make some projects cost-prohibitive. For example, the state wanted to update its injury surveillance system 4 years earlier, but had to obtain approval first, which resulted in delays in implementation and a doubling of the project’s costs. Limited human capital resources. States that rely on paper crash and citation forms require manual, time-consuming data entry, which can strain resources and lead to backlogs in data. For example, the Texas Department of Transportation assumed responsibility for the state’s crash data system in 2007 from another state department, and with it responsibility for a backlog of some 3 million crash reports, accumulated over a 5-year period, that needed to be entered into the data system (see fig. 10). The backlog was the result of the state’s use of a manual crash data system designed in the 1970s, prior to the implementation of the state’s electronic crash data system in 2008. According to a Texas Department of Transportation briefing report, the manual process was inefficient, resource intensive, and not conducive to the timely dissemination of data. In some states, only a few staff manage the state’s traffic safety data programs and grants. This is significant because state officials reported that grant applications are time consuming and difficult to balance with other key job responsibilities. In one instance, a state we visited had to return federal grant funding because it did not have available staff resources to effectively manage the grant and associated project. A regional NHTSA official also reported that the turnover and training of new state staff can be a challenge, particularly when staff must be trained on the specifics of the Section 408 grant program due to limited institutional knowledge. Furthermore, NHTSA officials reported that regional meetings have helped state officials obtain contacts and share leading practices, but state budget restrictions have curtailed these meetings, removing this training opportunity and resource.
NHTSA officials reported that they have recently begun using online webinars as an alternative for national, state, and regional audiences. Training individuals is an important component in ensuring the collection of high-quality traffic safety data, as recommended in several traffic records assessments. However, a number of state officials told us that training on data collection may be limited due to funding and resource constraints, such as staff resources and travel expenses. In several states, officials reported that the local law enforcement officers collecting the data may not fill out a crash report completely or accurately, or may not submit the form in a timely fashion, which may lead to instances where crash data are inaccurate, incomplete, and untimely. Officials in several states reported that information technology resources are limited and that state agencies often have to share staff with technical expertise between different data systems and projects. Due to limited internal technical expertise, some states have used contractors, but state officials reported that this can be expensive. Also, some states limit the contractors they will approve or the technologies that a contractor can offer. For example, officials from one state we visited reported that the technologies provided by contractors were not completely compatible with existing local traffic safety data systems, which limited their usefulness. However, we have previously reported that hiring a contractor can help states obtain the technical expertise needed to efficiently integrate data systems. In light of these challenges, some states have implemented strategies to overcome resource limitations. For example, North Carolina's Governor's Highway Safety Program office took a series of targeted, incremental steps, first focusing on improving the quality of two traffic safety data system performance measures—specifically timeliness and accuracy—in each system before working on other performance measures, such as integration. State officials emphasized the importance of focusing on the "basics" and working from there, rather than starting with the most complicated improvements. For example, North Carolina initially used Section 408 grant program funding to create a guidebook that provides consolidated information on all six traffic safety data systems and their status. The guidebook enabled state officials to identify the most pressing needs among all six traffic safety data systems and target limited resources. Although the primary function of the guidebook was to increase the accessibility of data system information, it also helped state officials recognize the need to integrate traffic safety data systems to increase data accessibility across systems. Accordingly, North Carolina has an active project to complete the linkage of crash and injury surveillance data. Although the amount of Section 408 grant program funding is small compared to state funding, North Carolina officials explained that the program is a catalyst for progress by sometimes supporting smaller projects like the guidebook, which then pave the way for larger projects, such as integrating data systems. According to state officials and one assessor, another strategy that one state used to overcome limited funding and staffing was to contract out the management of its centralized crash data system.
For the state, this project was revenue neutral because it did not require additional funds for continued maintenance; the contracted vendor received payment by selling crash reports and data extracts to interested parties. The profit gave the vendor an incentive to work diligently with law enforcement agencies to ensure that reports were complete, accurate, and submitted in a timely fashion to the central data system. There was also a built-in incentive for the law enforcement agencies that submitted the crash reports, as each received a reimbursement of 67 percent of the cost of each report sold. As a result of contracting out the crash data system, the state eliminated an annual cost of over $1 million for staffing, consulting, and system maintenance, and no longer requires annual federal funds to help support the system. This funding has since been redirected to hire additional state troopers and add staff where needed. Coordination Issues: Officials in all states we visited identified coordination issues that presented challenges in improving state traffic safety data. Some of the most frequently cited coordination issues are discussed below. "Stove-piped" agencies. Custodians of the different state traffic safety data systems are oftentimes housed in different state offices or agencies. A number of federal officials, state officials, and assessors reported instances of unwillingness to share data between various offices because of the "stove-piped" structure, in which there is little interaction among traffic safety data stakeholders. Furthermore, we heard from state officials and assessors that there is not always a clear understanding of the relationship among all six traffic safety data systems. Typically, most of the data systems are housed within a state's Department of Transportation or Department of Public Safety, which can compound coordination challenges for data systems housed elsewhere (e.g., injury surveillance data). Privacy concerns. According to state officials and assessors, federal and state privacy laws can limit accessibility and sharing of certain traffic safety data. A traffic records assessment for one state that we visited reported that restrictions placed on release of crash data in general, and of personal identifiers in the crash data for use by analysts within state government offices, posed major barriers to crash data analysis. This assessment also reported that these restrictions do not affect the state's ability to generate reports such as annual crash reports or most ad hoc analyses of the state's crash experience, but do limit the state's ability to perform more detailed crash problem identification and to support research into the safety implications of specific laws or policies. Decentralized state governance structures. State governance structures can further complicate coordination efforts. For example, decentralized court systems such as those found in two states we visited make it difficult for the state to collect adjudication data from lower-level courts. Addressing such governance issues can take many years. For example, Minnesota officials said that the state has worked to centralize its court system over a 15-year period. State officials and assessors also reported that there is little incentive for jurisdictions, agencies, and individuals to collect and submit data in a timely fashion. Some states have mandated deadlines for the submission of data, but these deadlines are not always adhered to by all agencies required to report.
Although some states have the ability to sanction those jurisdictions that do not submit data, this option is not always used. Federal and state officials, as well as assessors, told us that executive-level TRCCs, which include key decisionmakers such as agency directors, can help technical-level TRCCs overcome a variety of coordination and resource impasses. Technical-level TRCCs have been one of the successes of the Section 408 grant program, and NHTSA officials, as well as officials in nearly all of the states that we visited, praised TRCC activities for bringing state stakeholders together, establishing important relationships, and moving traffic safety data systems forward. While technical-level TRCCs may lack the authority to implement certain decisions or traffic safety data projects, according to NHTSA officials, several assessors, and state officials, the authority associated with executive-level TRCCs can help prioritize traffic safety data improvements and coordinate efforts. For example, assessors explained that if a data manager refuses to share data, an executive-level TRCC could compel data sharing. They also said that the involvement and support of executive-level decision makers can raise the profile of traffic safety data projects, which do not always receive much attention, and provide the necessary leadership to complete traffic safety data improvement projects. NHTSA officials also noted that executive-level TRCCs can help states commit resources to traffic safety data projects. For example, officials in one state reported that information technology staff sometimes have not prioritized traffic safety data projects due to limited resources. However, executive-level TRCC representatives in that state have the authority to target and dedicate these sometimes limited information technology resources to traffic safety data projects. Figure 11 depicts some of the advantages of an executive-level TRCC. Officials from North Carolina, a state we visited with an executive-level TRCC, reported that the state's executive-level TRCC oversees all highway safety issues and fills the role of "champion" for the state's initiatives. All of the technical-level TRCC's activities are reported to the executive-level TRCC. The executive-level TRCC helps the technical-level TRCC prioritize issues and provides assistance on legislative initiatives or interagency projects requiring significant resources. Currently, the state is developing new traffic safety data projects that will require legislation to be passed for funding. A state official said that executive-level TRCC endorsement and support will be necessary to pass the legislation. While executive-level TRCCs are advantageous, few states have established them. A NHTSA study recommended that states have both a technical-level and an executive-level TRCC to be successful, but as previously noted, only technical-level TRCCs are required for states to be eligible for funding under the Section 408 grant program. Based on estimates from one traffic records assessor, as of November 2009, nine states had an executive-level TRCC. Several traffic records assessments, however, have recommended that states establish executive-level TRCCs to help improve traffic safety data systems. Rural and urban areas across the country face some distinct challenges in improving traffic safety data systems.
As previously discussed, some state roadway data systems do not include locally maintained roadway data, which may include rural road data, and therefore do not provide a full picture of a state's roadway system. As we previously reported, many states have not developed roadway inventory data for locally maintained roads because they do not operate and maintain those roads and are concerned about the possible costs and time frames involved in obtaining these data. As a result, states may have difficulty applying a data-driven, strategic approach to highway safety. In addition, despite the higher proportion of fatalities occurring in rural areas, officials in one state expressed concerns that state traffic safety funding is not allocated in proportion to this higher fatality level. We have also previously reported that limited data on rural roads can hinder states' efforts to fund and address their top traffic safety priorities. However, some states are working to improve data on non-state-owned roadways, including rural roads. For example, Ohio's LBRS established a partnership between state and local governments and has allowed Ohio's Department of Transportation to expand roadway data to include more comprehensive roadway information. Figure 12 depicts a map of Ohio's Clark County, showing the 188 percent increase in the number of located crashes available for analysis through LBRS. This increase largely consisted of crashes occurring on locally maintained roadways. An additional challenge involves the volume of vehicle crashes, which affects when and how much data rural and urban areas can submit. For example, state officials in two states we visited reported that rural areas submit crash data more regularly due to lower volumes. According to one assessor, some large urban law enforcement agencies have refused to report crash data, leading to gaps that limit the state's ability to make decisions that effectively target resources. In contrast, officials in three states we visited reported that urban areas find it more difficult to submit crash data in a timely manner due to the large volumes of reports filed. Further, some cities have their own discrete crash data systems due to their high crash rates. According to officials in one state we visited, though large cities may have their own crash records systems, these systems may not be linked to the state crash data system, which contributes to a large number of missing crash reports. Some rural areas face additional challenges due to limited technology options. The lack of telecommunications services, such as access to the Internet, limits the ability of local jurisdictions to electronically submit data to state data systems, which can reduce the timely submission of data. We have previously reported that the cost of providing telecommunications services is higher in rural areas than in urban areas, in part due to lack of infrastructure. For some rural jurisdictions, even when the technology is available, it may not be cost-effective to use due to the lower volumes of traffic safety data submitted per year. Officials from states we visited reported some strategies being implemented to overcome these challenges for rural and urban areas. For example, in one state we visited, the state's highway safety office provided funding to equip state highway patrol vehicles in rural areas with mobile data terminals.
Currently, roughly 70 to 80 percent of state highway patrol vehicles in rural areas have these terminals, which have increased the timeliness of crash reports submitted in the state. In another state, the state legislature created an organization to oversee funding for rural and locally maintained roadways. This organization had the mission of helping local agencies receive funding specifically targeted at locally maintained roadways. Lastly, officials in another state we visited reported an increase in the electronic submission of crash reports when the state required at least a certain percentage of crash reports to be electronically submitted in order to qualify for Section 402 grant funding. State officials identified certain urban areas that were either underreporting crashes or not electronically submitting crash reports and then worked with these jurisdictions to improve submission rates. Between 2007 and the end of 2009, one urban area in this state increased its electronic submission of crash reports from 44 percent to nearly 100 percent. Overall, 91 percent of all crash reports in this state are now submitted electronically. Improving state traffic safety data systems is critical to state efforts to use data-driven approaches to improve traffic safety and reduce traffic fatalities and injuries. The Section 408 grant program has helped states to improve the quality of traffic safety data systems across NHTSA's six performance measures. Despite this progress, however, almost all states have traffic safety data systems that do not meet one or more performance measures. The wide range of quality we found in state traffic safety data systems underscores the importance of state traffic records assessments in helping states to plan and prioritize improvements to traffic safety data systems. However, incomplete and inconsistent information in the assessments limits their usefulness; according to NHTSA's implementing guidance, an assessment should be an "in-depth, formal review of a state's highway safety data and traffic records system." Furthermore, assessments in the updated traffic records assessment format currently being used often do not systematically evaluate each of the six performance measures as they relate to each of the six data systems. Improving the completeness and consistency of assessments would help states more accurately identify problems and effectively target limited resources. NHTSA officials recognize the importance of these assessments for states and are taking steps to identify improvements to some aspects of the assessment process. However, NHTSA's efforts to review the assessment process and the effectiveness and utility of traffic records assessments are in the early stages. Based on NHTSA's statement of work for the study, the contract includes a component to examine state traffic records assessments for effectiveness and utility and identify any improvements or degradations of traffic safety data quality. However, it is unclear whether this review will evaluate the overall content and quality of the information provided in the assessments to the level of specificity that may be needed. States face various resource and coordination challenges, which make further progress in improving the quality of traffic safety data systems difficult. State officials we spoke with noted several strategies to address these challenges. One of these strategies—establishing an executive-level TRCC—can potentially address multiple resource and coordination challenges.
Specifically, an executive-level TRCC can be a helpful tool for states to prioritize traffic safety data improvements, coordinate efforts, and overcome impasses. Although the Section 408 grant program requires that states have a technical-level TRCC, it does not require states to establish an executive-level TRCC. The establishment of an executive-level TRCC holds promise, but we did not fully assess its value for states, as doing so was beyond the scope of this report. We recommend that the Secretary of Transportation direct the NHTSA Administrator to take the following two actions: Ensure that traffic records assessments provide an in-depth evaluation that is complete and consistent in addressing all performance measures across all state traffic safety data systems. As part of NHTSA's ongoing initiatives to improve the traffic records assessment process, specific efforts could include revisiting available assessment guidance, the frequency and manner in which assessments are conducted, and NHTSA's assessment review process. Study and communicate to Congress the value of requiring states to establish an executive-level TRCC in order to qualify for Section 408 grant funding. We provided a draft of this report to DOT for review and comment. DOT officials agreed with the findings and recommendations in the report and offered technical corrections that we incorporated, as appropriate. Regarding the recommendation to ensure that traffic records assessments provide an in-depth evaluation that is complete and consistent, the officials noted that NHTSA has begun several initiatives to identify opportunities to improve the assessment process and provide the states with a more effective assessment document. We are sending copies of this report to the Secretary of Transportation and interested congressional committees. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In response to your request, this report provides information on the status of the quality of state traffic safety data systems. In particular, we sought to identify (1) the extent to which state traffic safety data systems meet the National Highway Traffic Safety Administration's (NHTSA) performance measures for assessing the quality of data systems, and (2) what progress states have made in improving traffic safety data systems and what challenges remain. To identify the extent to which state traffic safety data systems meet NHTSA's performance measures, we analyzed the most recent traffic records assessments for each of the 50 states and the District of Columbia (D.C.). A traffic records assessment is a state document that contains findings and recommendations on the quality of a state's traffic safety data systems, among other things. Assessments are conducted or updated at least every 5 years as one of the eligibility requirements for Section 408 grant program funding. At least three team members reviewed each assessment and coded the extent to which a state's six traffic safety data systems met each of NHTSA's six performance measures—timeliness, consistency, completeness, accuracy, accessibility, and integration.
After individual team members independently coded the data quality of assigned state traffic records assessments, the three-member subgroup met to discuss the coding categories and reached consensus on the final coding category assignment for each performance measure. During independent coding, the three coders initially reached unanimous agreement 37 percent of the time, before discussions to reach consensus. Across states, initial unanimous agreement was as high as 58 percent for one state, but for two states there was no unanimous agreement for any of the coding categories. Within the performance measures there was also a range of initial unanimous agreement. For vehicle information timeliness, individual coders reached unanimity for 37 states, including D.C. (73 percent). The lowest level of initial unanimity (14 percent, or seven states) occurred within the injury surveillance system's accuracy performance measure. Throughout this document we use the term "coding category" to refer to the extent to which a data system meets an individual performance measure. We created broad categories based on information presented in state traffic records assessments; these coding categories are not precise measurements of the extent to which data systems met performance measures. We assigned numbers to correspond to the coding categories defined below: 0 – Did not meet or minimally met performance measure (i.e., negligible, 0 to 5 percent). Based on the available evidence, the state clearly did not meet the performance measure or met it only to a negligible extent. For example, one state's crash data timeliness was described as, "At present, crash data entry is experiencing a 12-month backlog. This is due to delays at every step in the process from initial crash reporting through final data entry and the multi-step/multi-stop process that is used in handling crash reports. The delays are having an impact on highway safety analysis and decision making in the state." Since the criterion for crash timeliness is that the information should be available within a time frame to be currently meaningful for effective analysis of the state's crash experience, preferably within 90 days, this performance measure area was coded as a zero. 1 – Marginally met performance measure (i.e., slightly, to a limited extent, greater than 5 to 50 percent). The state met the performance measure at some level above "minimally," but not to a significant extent—that is, only slightly or to a very limited extent. For example, one state's citation and adjudication data consistency was described as, "Although there is a uniform traffic citation for [the state], not all agencies use it in the same manner. [One jurisdiction] has opted to use it differently than the rest of the state. Since [the state] is a state with a court administrator that oversees each court, there seems to be some consistency in the way cases are adjudicated… [What] is recorded at the courts is controlled so that each court records the same information." Since the criterion for citation consistency is that all jurisdictions should use a uniform traffic citation form, and the information should be uniformly reported throughout all enforcement jurisdictions, this performance measure area was coded as a one. 2 – Generally met performance measure (i.e., significant extent, for the most part, greater than 50 to 95 percent).
For the most part, the state met the performance measure, but with some limitations. For example, one state's vehicle data accuracy was described as, "…in transition. The Department of Motor Vehicles has used Vehicle Identification Number (VIN) Analysis Software to enhance accuracy, but the descriptive information about vehicles was taken from registration and title applications. Beginning in 2006, the Department of Motor Vehicles has been entering the body style and descriptive information from VIN decoding and has been upgrading the descriptions to VIN decoded entry when re-titling vehicles." The criteria for vehicle system accuracy include that the state should employ methods for collecting and maintaining vehicle data that produce accurate data and should make use of current technologies designed for these purposes; therefore, this performance measure area was coded as a two. 3 – Completely met performance measure (i.e., fulfills or satisfies the condition, greater than 95 to 100 percent). The state fully met all aspects of the performance measure, and if any limitation was identified it was not material in nature. For example, one state's roadway data accessibility was described as, "Data are accessed through the Roadway Information Management System and Integrated Transportation Management System. Various reports are produced on a daily basis for use both within the Department of Transportation (DOT) and for use by consultants, businesses and the general public." Since the criterion for roadway accessibility is that the information should be readily and easily available to the principal users of the databases containing the roadway information, both for direct (automated) access and for periodic outputs (standard reports) from the files, this performance measure was coded as a three. 9 – Unknown. By "unknown" we mean that no other categorization was possible, either because limited information prevented categorization or because such information was absent. For example, one state's roadway data integration was described as, "The integration of road and crash files seems to be adequate for present uses within [the state's Department of Transportation]." This limited information did not directly address the integration of roadway data. In another example, a state's injury surveillance data completeness and accuracy was described as, "Data completeness and data accuracy were not able to be evaluated during our assessment." Due to the absence of information, these performance measure areas were coded as a nine. The extent to which a state has met a performance measure is considered a reflection of data system quality. Throughout this report, in instances where a performance measure was coded as a zero or a one, the performance measure is considered not met, whereas if a two or a three was assigned, the performance measure is considered met. After we concluded the coding of the assessments, we conducted a series of statistical analyses, which included determining the following:
- Overall frequency of each coding category (0, 1, 2, 3, 9).
- Frequency of each coding category for each of the six data systems (0, 1, 2, 3, 9).
- Frequency of each coding category for each of the six performance measures (0, 1, 2, 3, 9).
- Frequencies by measure and system (a total of 36 sets of frequencies).
- Sum "score" for each state (excluding the coding category 9).
- Frequency of the coding category 9 in the new assessment format as compared with the old format (which includes a section dedicated to "Information Quality").
- Percent of states with one or more 9s and total percent of the time a 9 was assigned.
- The number of times a state scored a 3 or 2 (completely or generally met) in each system, provided as the number of states with zero 2s or 3s in each system, one 2 or 3 in each system, etc.
- The number of times a state scored a 0 or 1 (not met or marginally met) in each system, provided as the number of states with zero 0s or 1s in each system, one 0 or 1 in each system, etc.
In addition to our analysis of state traffic records assessments, this objective was informed through documentary and testimonial evidence gathered on site visits. We collected and reviewed relevant advisories and guidance related to traffic safety. We also interviewed federal, state, and local officials, data users, and other experts to obtain perspectives on the quality of traffic safety data. However, we did not factor these other information sources into our traffic records assessment coding analysis. To identify the progress states have made in improving traffic safety data systems and to determine what challenges remain, we reviewed states' progress in meeting performance measures reported to NHTSA and in state documents, such as State Highway Safety Strategic Plans. We conducted site visits to eight states: Georgia, Idaho, Maine, Minnesota, North Carolina, Ohio, Texas, and Virginia. We selected these states based on a number of factors, including NHTSA recommendations, fatality rates, population, roadway ownership, prevalence of rural roads, and geographic diversity. NHTSA officials also provided input on states that they believed encompassed a wide range of traffic safety data system quality. During our site visits we interviewed state officials to identify progress in improving the quality of traffic safety data and associated systems. To identify state challenges in improving data systems, we conducted a literature review of past GAO work and other relevant studies. We also conducted in-depth interviews with state officials responsible for data systems, and collectors and users of state traffic safety data during our state site visits. Additionally, we spoke with NHTSA, national industry associations representing the different data systems, and experts in the field to inform our analysis of the primary challenges states face, as well as to inform us of state efforts to address these challenges. We compiled all the various interviews and conducted an analysis to identify the most frequently cited challenges. The following table presents all values associated with our coding analysis of traffic records assessments for the 50 states and D.C. We calculated scores for all 50 states and D.C. by summing the points received by each state. The total number possible was calculated by multiplying the number of systems (6) by the number of performance measures (6) by the number of possible points available per measure (3). This resulted in a maximum score of 108 points that states could receive based on the quality of their traffic safety data systems.
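To make the scoring arithmetic concrete, the following Python sketch implements it. This is an illustration of the method described above, not our actual tooling; the system and measure labels and the sample codes assigned at the bottom are hypothetical.

```python
# Illustrative sketch of the scoring arithmetic described above:
# 6 systems x 6 measures, codes 0-3 counted toward the score, code 9 excluded.
SYSTEMS = ["crash", "vehicle", "driver", "roadway",
           "citation_adjudication", "injury_surveillance"]
MEASURES = ["timeliness", "consistency", "completeness",
            "accuracy", "accessibility", "integration"]
MAX_SCORE = len(SYSTEMS) * len(MEASURES) * 3  # 6 x 6 x 3 = 108 possible points

def state_score(codes):
    """Sum a state's codes, excluding 'unknown' (9) entries.

    `codes` maps (system, measure) pairs to 0, 1, 2, 3, or 9.
    """
    return sum(v for v in codes.values() if v != 9)

def measure_met(code):
    """Per the report's convention: 2 or 3 is 'met'; 0 or 1 is 'not met'."""
    return code in (2, 3)

# Hypothetical state: coded 2 everywhere except one 'unknown' cell.
codes = {(s, m): 2 for s in SYSTEMS for m in MEASURES}
codes[("injury_surveillance", "accuracy")] = 9
print(state_score(codes), "of", MAX_SCORE)  # prints: 70 of 108
```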
In addition to the contact named above, Sara Vermillion (Assistant Director), Matt Cail (Analyst-in-Charge), Emily Eischen, Brandon Haller, Delwen Jones, Catherine Kim, Kirsten Lauber, Hannah Laufe, Josh Ormond, and Crystal Wesco made key contributions to this report.
Traffic crashes kill or injure millions of people each year. High-quality traffic safety data are vital for allocating resources and targeting programs as the Department of Transportation's (DOT) National Highway Traffic Safety Administration (NHTSA) and states work to improve traffic safety through data-driven approaches. To qualify for federal funding, states must submit plans that include fatality and crash data analyses to identify areas for improvement. This requested report provides information on (1) the extent to which state traffic safety data systems meet NHTSA performance measures for assessing the quality of data systems, and (2) progress states have made in improving traffic safety data systems, and related challenges. To conduct this work, GAO analyzed state traffic records assessments, visited eight states, and interviewed federal officials and other traffic safety experts. GAO's analysis of traffic records assessments--conducted for states by NHTSA technical teams or contractors at least every 5 years--indicates that the quality of state traffic safety data systems varies across the six data systems maintained by states. Assessments include an evaluation of system quality based on six performance measures. Across all states, GAO found that vehicle and driver data systems met performance measures 71 percent and 60 percent of the time, respectively, while roadway, crash, citation and adjudication, and injury surveillance data systems met performance measures less than 50 percent of the time. Data system quality also varies by performance measure. For example, across all data systems, states met the performance measure for consistency 72 percent of the time but met the integration performance measure only 13 percent of the time. According to NHTSA, assessments should be in-depth reviews of state traffic safety data systems; however, in some cases, incomplete or inconsistent information limits assessment usefulness. Of the 51 assessments GAO reviewed, 49 had insufficient information to fully determine the quality of at least one data system. Furthermore, an updated assessment format has resulted in more frequent instances of insufficient information. Despite varying state traffic safety data system performance, data collected by NHTSA show that states are making some progress toward improving system quality. All states GAO visited have implemented projects to improve data systems, such as switching to electronic data reporting and adopting forms consistent with national guidelines. However, states face resource and coordination challenges in improving traffic safety data systems. For example, custodians of data systems are often located in different state agencies, which may make coordination difficult. In addition, rural and urban areas may face different challenges in improving data systems, such as limited technology options for rural areas or timely processing of large volumes of data in urban areas. States GAO visited have used strategies to overcome these challenges, including establishing an executive-level traffic records coordinating committee, in addition to the technical-level committee that states are required to establish to qualify for traffic safety grant funding. An executive-level committee could help states address challenges by targeting limited resources and facilitating data sharing.
FCRA was enacted, among other reasons, to provide more accurate measures of the costs of federal loan programs and to more accurately compare costs among credit programs and between credit and noncredit programs. FCRA requires agencies with loan guarantee programs to estimate the subsidy cost, or the cost to the government, of their loan guarantees over the life of the loan. To calculate the subsidy costs, agencies must calculate, on a cohort basis, the net present value of the forecasted cash flows for the program, which for SBA included estimated defaults, recoveries, and fees related to the 7(a) program. In addition, as part of this process, SBA must determine the effects of loan prepayments on the cash flows. Under FCRA, SBA provides information that generates a single subsidy rate and does not provide information about any uncertainty in its estimate of the rate or other factors affecting the rate, such as prepayments or defaults. Prior to its 2003 budget submission, SBA's methodology for estimating the subsidy on its 7(a) loans used historical averages for defaults and recoveries, based on loan data going back to 1986, as the basis for estimates of future defaults and recoveries. This approach resulted in fairly stable subsidy estimates on a yearly basis, as it included a sufficient volume of historical information that smoothed out fluctuations in economic conditions from year to year. However, this approach resulted in SBA consistently overestimating defaults and recoveries. In previous work, we found that SBA overestimated defaults by about $2 billion from fiscal years 1992 to 2000. In an effort to improve the accuracy of its subsidy estimate, SBA implemented a new methodology based on econometric modeling to estimate the subsidy cost for the fiscal year 2003 and 2004 budget submissions. Econometric modeling has advantages over historical averaging. For example, to the extent that data are available, it can take into account the effects of changes in such factors as economic conditions, program rules, and loan types on defaults and prepayments. All forecasts are uncertain, and this uncertainty has multiple causes. When relationships among economic variables are estimated, uncertainty may arise from the choice of variables used in the model, from the degree of precision with which the strength of the relationships is estimated, and from uncertainty about the future values of the independent variables used in the forecasting equation. Excluding a variable that should be in a forecasting model can reduce the quality of the model. For example, if some industries have high default rates, then excluding industry variables will tend to underestimate default costs in years when many loans go to high-risk industries and overestimate default costs in years when many loans go to low-risk industries. The choice of variables to be used in a model results from a process of professional judgment and balancing the risks of including too many or too few variables. Economic theory and statistical tests play an important role in these decisions. The remaining sources of uncertainty, the precision of the estimated relationships and uncertainty about future values of independent variables, are often beyond the control of those building the model. The precision of the effects of the independent variables is determined largely by the amount of data available to the analyst, and uncertainty about future values of independent variables is inherent in any forecast.
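As a rough illustration of the FCRA subsidy calculation described above, the Python sketch below discounts a cohort's net cash flows to present value and expresses the result as a share of disbursements. All figures and the single flat discount rate are invented for illustration; the actual calculation uses Treasury-based discount rates and far more detailed cash flow estimates.

```python
# Simplified FCRA-style subsidy calculation for one loan cohort (illustrative
# figures only; not SBA's model). The subsidy rate is the net present value
# of estimated government cash flows as a share of guaranteed disbursements.

def npv(cash_flows, rate):
    """Discount annual cash flows (years 1, 2, ...) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

disbursements = 100.0                         # cohort loan volume, $ millions
default_payouts = [0.5, 1.5, 2.0, 1.0, 0.5]   # guarantee claims paid out
recoveries =      [0.0, 0.2, 0.6, 0.6, 0.4]   # amounts recouped on defaults
fees =            [0.6, 0.5, 0.4, 0.3, 0.2]   # guarantee fees collected

net_outflows = [d - r - f for d, r, f in zip(default_payouts, recoveries, fees)]
subsidy_rate = npv(net_outflows, rate=0.05) / disbursements
print(f"illustrative subsidy rate: {subsidy_rate:.2%}")  # about 1.5 percent
```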
Internal control is a major part of managing an organization, and this includes controls over data gathering and processing, such as SBA's data on 7(a) loans. As mandated by the Federal Managers' Financial Integrity Act of 1982, the Comptroller General issues standards for internal control in the federal government. These standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. According to these standards, internal control comprises the plans, methods, and procedures used to meet missions, goals, and objectives. Control activities are the policies, procedures, techniques, and mechanisms that enforce management's directives and help ensure that actions are taken to address risks. Control activities are an integral part of an entity's planning, implementing, reviewing, and accounting for government resources and achieving effective results. They include a wide range of diverse activities, including controls over information processing. These controls are established to ensure that all data inputs are received and valid and that outputs are correct. Agency management should design and implement internal control based on the related costs and benefits. No matter how well designed and operated, internal control cannot provide absolute assurance that all agency objectives will be met; once in place, it provides reasonable, not absolute, assurance of meeting an agency's objectives. We found that the econometric equations that SBA used to estimate defaults, prepayments, and recoveries were reasonable, although other equations could also be reasonable. SBA uses an appropriate statistical technique for identifying the nature of these relationships. In addition, SBA's equations produced estimated relationships for defaults and prepayments that were consistent with expectations based on economic reasoning. We found that there were additional variables available to SBA that it did not include in its equations, such as measures of interest rates and the borrower's industry type, whose inclusion would also have been reasonable and would have produced different subsidy rates. In addition, SBA did not include any economic variables in its equation for estimating recoveries. According to documentation provided by SBA on the equation used to estimate recoveries on defaulted loans, adding economic variables would not have increased the precision of the recovery rate estimates. Finally, we found that the new model's estimated default and recovery rates were in line with recent historical experience. The econometric equations that SBA used at the time of our review related the likelihood that a borrower would either default on or prepay a loan to several variables that economic reasoning and prior research suggested were appropriate to include in these types of equations. These variables included (1) characteristics of the borrower's business, such as whether it was a sole proprietorship, partnership, or corporation; (2) characteristics of the loan, such as the amount borrowed; and (3) two measures of economic conditions, the unemployment rate in the state where the loan was made and the GDP growth rate. Economic reasoning and prior research suggested that differences in borrower and loan characteristics and economic conditions were likely to influence defaults and prepayments.
For example, prior research suggested that new businesses were less likely to survive than were established businesses and thus were more likely to default. Prior research also suggested that the likelihood of default on loans made to partnerships or corporations should be less than it was for loans made to sole proprietors, while the likelihood of prepayment should be greater. Details about SBA's econometric equations are found in appendix II. At the time of our review, SBA used an appropriate technique known as multinomial logistic regression to identify whether the variables included in its model were important influences on the likelihood that a borrower would either default on or prepay a loan and to estimate the magnitude of these relationships. This technique, which has been used in other models of this type, was appropriate because it corresponded to the decision-making process that borrowers faced when deciding whether to default on the loan, prepay the loan, or keep it active. Using this technique, SBA produced estimates of both the probability of default and the probability of prepayment. The relationships that SBA's equations estimated between different variables and the likelihood of defaults and prepayments were consistent with economic reasoning. For example, SBA's default equation suggested that defaults were more likely when unemployment was higher and the rate of increase in gross domestic product was lower. Both of these estimated relationships were consistent with economic reasoning because borrowers were less likely to continue paying their debts when more people were out of work and the economy was growing less rapidly or was in decline. SBA's prepayment equation also suggested that prepayments were more likely when loans were made under the SBA Express Program, for which SBA guaranteed a smaller percentage of the loan amount than it did under the regular 7(a) business loan program. This result was consistent with our expectations because the smaller guarantee was likely to make lenders more cautious in making lending decisions, such that firms borrowing through this program may have been more creditworthy than firms borrowing through the regular program. In turn, the businesses' enhanced creditworthiness may have led to more prepayments because these businesses may have been more financially stable and thus more likely to pay off their loans early. The details of SBA's default and prepayment equations, which show these relationships, are in appendix II. We identified additional variables available to SBA, but not included in the model, that also influenced the likelihood of defaults and prepayments. The choice of variables included in a model reflects the modelers' professional judgment, and different equations using different sets of variables can all be considered reasonable. To analyze the effect of additional variables, we reestimated the 2003 subsidy cost using SBA's model with added variables that (1) measured the current interest rate on 1-year U.S. Treasury bills and (2) considered the industry in which the borrowing firm operates. The interest rate could be important as either another measure of general business conditions or as a specific measure of the cost of capital. The industry in which the borrowing firm operates could be important if default and/or prepayment rates vary among industries and the distribution of loans among industries varies over time.
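Before turning to those additional considerations, the sketch below shows what the multinomial logit technique described above looks like in practice. It is a minimal stand-in, not SBA's code: the covariates loosely echo the variables described above, and the outcomes are synthetic so the example runs on its own.

```python
# Minimal multinomial logit sketch for a three-way loan outcome
# (0 = stay active, 1 = default, 2 = prepay). Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "new_business": rng.integers(0, 2, n),        # borrower characteristic
    "log_loan_amount": rng.normal(11.0, 1.0, n),  # loan characteristic
    "state_unemployment": rng.normal(5.5, 1.5, n),
    "gdp_growth": rng.normal(2.5, 1.5, n),
})

# Synthetic outcomes so the example is self-contained: defaults become more
# likely with higher unemployment and lower GDP growth, mirroring the text.
z_default = -3.0 + 0.4 * X["state_unemployment"] - 0.3 * X["gdp_growth"]
z_prepay = -2.0 - 0.2 * X["new_business"]
scores = np.column_stack([np.zeros(n), z_default, z_prepay])
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
outcome = np.array([rng.choice(3, p=row) for row in probs])

result = sm.MNLogit(outcome, sm.add_constant(X)).fit(disp=False)
fitted = result.predict(sm.add_constant(X))  # P(active), P(default), P(prepay)
print(np.asarray(fitted).mean(axis=0))       # average predicted probabilities
```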
In addition, banks have traditionally recognized that the financial performance of a borrower depends on the nature of the business supporting the loan, the structure of the loan, and the financial condition of the firm. At the time of our review, SBA's econometric equations contained information on the loan and the firm but did not include information on the firm's line of business. The estimates produced by our testing suggested that these variables also influenced the likelihood of defaults and prepayments occurring and, therefore, that equations using these variables could also be reasonable. However, there are additional considerations that could be important in deciding whether to include a measure of interest rates in the default and prepayment equations. Specifically, including an interest rate variable would mean that forecasted interest rates would be used with the results of the econometric equations (and forecast values of other economic variables) to forecast future defaults and prepayments. The fact that forecasting interest rates is difficult may be a reason for not including an interest rate variable, even if the variable appears to be significantly related to the historical likelihood of default or prepayment. Furthermore, at present, forecasted interest rates are low relative to the interest rates that prevailed over most of the period from which the data were drawn to develop SBA's equations, potentially limiting the usefulness of including an interest rate variable. We found that including either the interest rate on 1-year Treasury bills or the industry in which the borrowing firm operates as a variable in the default and prepayment equations changed the estimated cost of the program. (See app. II.) According to SBA's model, the estimated subsidy rate for loans disbursed in 2003 was 1.04 percent. This estimate increased to 1.13 percent with the industry identifiers included and decreased to 0.76 percent with the inclusion of the interest rate on 1-year Treasury bills. In addition, when we included both the interest rate variable and the industry identifiers, we estimated a subsidy rate of 0.83 percent. Because interest rates are difficult to predict and have recently been quite low, we conducted tests to determine how sensitive the estimate was to small changes in forecasted interest rates. We found that it was not very sensitive to such changes. For example, when we increased the forecasted values above those included in the official OMB forecast by 10 percent, we estimated a subsidy rate of 0.80 percent, while when we decreased the forecasted values by 10 percent, we estimated a subsidy rate of 0.73 percent. The range of estimated subsidy rates that resulted from including additional variables was roughly comparable to the range that resulted from using different economic assumptions. We tested the sensitivity of SBA's estimated subsidy rate to small changes in the forecast values of the GDP growth rate and the unemployment rate by reestimating the subsidy rate with SBA's model using both more optimistic and more pessimistic assumptions about future economic conditions. With the more optimistic assumptions, we estimated that the subsidy rate decreased to 0.81 percent, while with the more pessimistic assumptions, we estimated that it increased to 1.28 percent.
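The pattern of such a sensitivity test is simple to sketch: rerun the subsidy estimate with a forecast input shifted up and down by 10 percent. The `estimate_subsidy` function below is a toy linear stand-in loosely calibrated to the figures above, not SBA's actual cash flow model.

```python
# Sketch of the sensitivity testing described above (illustrative only).

def estimate_subsidy(forecast):
    # Hypothetical stand-in: roughly 0.76 percent at an assumed 4.5 percent
    # forecasted 1-year Treasury rate, rising modestly with that rate.
    return 0.76 + 8.9 * (forecast["treasury_1yr"] - 0.045)

base = {"treasury_1yr": 0.045, "gdp_growth": 0.025}
for label, factor in [("baseline", 1.0), ("rates +10%", 1.1), ("rates -10%", 0.9)]:
    scenario = {**base, "treasury_1yr": base["treasury_1yr"] * factor}
    print(f"{label:>11}: estimated subsidy rate {estimate_subsidy(scenario):.2f}%")
```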
SBA's model also included a separate econometric equation for estimating recoveries, which are the amounts of defaulted loans that were eventually recouped by collection efforts, such as the liquidation of assets. In this equation, the cumulative net recovery rate for a cohort of loans was estimated as a function only of the age of the loans in that cohort. In particular, this equation did not include any economic variables, so forecasted recovery rates were estimated to resemble historical recovery rates even though economic conditions in the future might be quite different from those in the past. According to documentation provided by SBA of the work done to develop this equation, adding economic variables would not have increased the precision of the recovery rate estimates. Our evaluation of the model's estimated default and recovery rates found that these rates were in line with the historical experience of the 7(a) program. There are some limitations to evaluating expected future loan performance against historical data because over time the economy changes, and underwriting criteria and other factors that affect loan performance may also change. Therefore, one would not expect the estimated loan performance to exactly mirror historical experience. However, these types of comparisons are useful for evaluating the model's estimated default and recovery cash flows. Because recently issued loans do not have significant performance experience and historical data can be summarized in several ways, we compared the new model's estimated default and recovery rates with historical data in two ways to determine whether the estimates were in line with historical experience. In August 2001, we reported that from fiscal year 1992 through fiscal year 2000, SBA overestimated the cost of the 7(a) program by about $1 billion, primarily because it overestimated defaults by approximately $2 billion. Over this same period, SBA's estimated recoveries closely matched actual loan performance. SBA's prior method of estimating costs was based on averages of historical loan performance. As previously discussed, SBA's current model estimated defaults significantly differently from the prior method in that it considered economic variables and loan-specific information. Meanwhile, at the time of our review, the model continued to estimate recoveries based on historical patterns. While it was not yet possible to determine the accuracy of the model's estimated default rate, the rate, as shown in the following two figures, appeared to match recent historical experience more closely than did the rate produced by SBA's previous method. Figure 2 shows how the model's estimated default rate compared with the estimated default rates calculated with SBA's previous method and with the average default experience of loans issued between 1992 and 2001. We could have included more or fewer years of loans in our analysis, but we believe data since 1992 are sufficient to evaluate the model's estimated default rate against historical experience because they include several years of loans that have been through their peak default period, which for 7(a) loans is generally between years 2 and 5. As previously mentioned, since historical data may be summarized differently, figure 3 shows how the new model's estimated default rate compared with the estimated default rate calculated with SBA's previous method and with actual default experience during fiscal year 2001 for loans issued since 1986. This comparison allowed us to evaluate the estimated default rate over a longer period since it included data from older loans that had been outstanding longer.
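A comparison of the kind shown in figures 2 and 3 can be sketched as follows: compute cumulative default rates by loan age from historical loan records and set them beside a model's estimated curve. The loan records and the model curve below are invented for illustration; they are not the program's actual data.

```python
# Illustrative sketch (not our actual analysis) of the comparison described
# above: cumulative historical default rates by loan age versus a model's
# estimated default curve.
import pandas as pd

# Hypothetical loan-level records: cohort year and loan age at default
# (None means the loan has not defaulted). All values are invented.
loans = pd.DataFrame({
    "cohort": [1992, 1992, 1993, 1993, 1993, 1994, 1994, 1994],
    "default_age": [3, None, 2, 5, None, None, 4, None],
})

ages = range(1, 6)
# Share of all sampled loans that had defaulted by each age; NaN entries
# (non-defaulted loans) compare as False and so count as not defaulted.
historical = pd.Series(
    {age: (loans["default_age"] <= age).mean() for age in ages})

model_estimate = pd.Series(          # invented model curve for comparison
    [0.02, 0.08, 0.11, 0.13, 0.14], index=ages)

print(pd.DataFrame({"historical": historical, "model": model_estimate}))
```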
SBA could enhance the reliability of its model's estimates by adding information on both the businesses and their owners to the econometric equations, reestimating the equations, and correcting errors in the model. The econometric equations SBA used at the time of our review to predict defaults and prepayments included some variables describing the businesses and loans and two economic indicators, GDP and unemployment rates. However, they did not include some variables associated with businesses and business owners, such as credit scores, that other analysts and financial institutions often use. In addition, during our review, we found some errors that resulted in an underestimate of the cost of the 7(a) program that was included in the fiscal year 2004 President's Budget. Correcting these errors would have increased the estimated cost of the program by about $6.5 million. The quantitative relationships between the default and prepayment rates and the current independent variables would probably change if new information were included. In our review of the literature and discussions with large banks, additional information was mentioned as having an influence on defaults and prepayments. The information cited included more detail on the loans, the businesses, and the business owners, including credit scores. Our review of the academic literature and discussions with some commercial lenders indicated that private lenders often include variables SBA did not consider in forecasting the financial performance of small businesses. At the time of our review, the SBA model included loan variables (age and term) and some business variables (new business indicators, form of ownership, and loan amount, among others) but was missing detailed information on businesses that can help predict financial viability. These variables include earnings, capital, payment records, and available collateral, all of which have been shown to affect creditworthiness and likelihood of default. Profit levels, for example, help predict a business's ability to generate cash internally to cover loan payments. Records of debt payments help determine whether a business can cover its obligations, while available collateral tells a lender whether a business has the resources to cover outstanding debts during a financial crisis. Adding and periodically updating this information could enhance the predictive ability of SBA's econometric model by providing more accurate estimates of potential defaults and prepayments. In addition, analysts and banks have found that variables describing business owners can aid in evaluating credit risk, and many large banks have started to underwrite and monitor small businesses using credit scores. Information from business owners' credit records, such as income, personal debt, employment tenure, homeownership status, and previous personal defaults or delinquencies, can help predict delinquencies and defaults in the businesses themselves. Although at the time of our review SBA's model did not include variables that measure these characteristics, the agency was developing a new loan monitoring system that SBA officials told us was intended to track this type of information. This is an important issue because, if banks use credit scores and SBA does not, SBA may be left with riskier loans. SBA could then determine whether such variables also reflect risks in SBA loans and could be used to help evaluate the costs of SBA loan guarantees.
During our review of the model used to generate the cost estimate of the 7(a) subsidy that was included in the fiscal year 2004 budget, we found errors that resulted in underestimates of program costs of about $6.5 million. Based on the estimated subsidy rate and the projected loan volume included in the fiscal year 2004 President's Budget, the estimated cost of the program was about $94.9 million. If the errors we found had been detected and corrected by SBA before the budget was submitted, the estimated cost of the program with the same projected loan volume would have increased to about $101.4 million. These errors related to SBA's method of estimating recoveries, annual guarantee fee cash flows, and projections of borrower interest rates. First, the recovery estimates were based on the assumption that loans would be issued during fiscal year 2003 instead of during fiscal year 2004, although default and prepayment estimates were based on the later year. As a result, the model estimated that recovery cash flows would occur 1 year early, affecting the net present value of the cash flows and the subsidy rate. Second, formulas SBA used to summarize the output of the cash flow segment of the model indicated that the same annual guarantee fees collected during the first quarter of fiscal year 2004 would be collected in about years 5 through 27, even though the fees would decline as loan balances were paid off. SBA officials indicated that these two errors would be corrected before the submission of the 2005 budget. Third, in estimating the cost of loans issued in the future, SBA assumed the loans would have characteristics similar to those of loans issued during fiscal year 2001. However, SBA did not adjust the borrower interest rates to levels that would be more appropriate for loans to be issued during fiscal year 2004. SBA officials indicated that this adjustment was not necessary because it would not significantly affect the cost of the program. However, SBA had made this adjustment when it calculated the subsidy cost for loans to be issued during fiscal year 2003. When we corrected the previously described errors, the estimated cost of the program for fiscal year 2004 increased by $6.5 million. We also found an error related to estimating prepayment penalties. SBA officials stated that they were aware of this error but believed that fixing it would be complicated and that these cash flows would be immaterial to the cost of the program. In the officials' view, fixing the error would not be cost beneficial. In addition, the model could be further enhanced if SBA were to update it to include new information as it becomes available. For example, SBA used the 2001 cohort of loans to generate estimates of the 2003 and 2004 subsidies. However, officials were not sure whether they would use the 2002 cohort of loans for the 2005 estimate because, they said, updating the cohort is complicated by changes in program policies or in the composition of the 7(a) loan portfolio. Nevertheless, the model would likely produce more reliable estimates if the most recent loan data were used to generate the forecast rather than an older cohort of loans. SBA contracted with OFHEO economists, who had expertise in econometric modeling of mortgage defaults and prepayments, to develop its subsidy model, which included determining the variables to be included in the econometric equations. SBA consulted with OMB officials, who are required by FCRA to approve agency subsidy estimates.
SBA contracted with OFHEO economists, who have expertise in econometric modeling of mortgage defaults and prepayments, to develop its subsidy model, which included determining the variables to be included in the econometric equations. SBA consulted with OMB officials, who are required by FCRA to approve agency subsidy estimates. SBA also hired a private consulting firm to conduct a limited review of the model as part of its ongoing review process to minimize errors in estimating the subsidy. In February 2002, SBA entered into an agreement with OFHEO to assist in developing the subsidy model. According to SBA staff, they selected OFHEO because it had staff with expertise and experience in econometric modeling and was less expensive than a private contractor. According to SBA staff, the OFHEO economists followed a four-step process to develop the model. The first step was refining and building the data set that would be used to generate the estimates. The data set OFHEO used was constructed from the SBA databases that were used to track loan payment history and personal financial information on borrowers. The second step was the design and estimation of the default, prepayment, and recovery equations, including the selection of variables for these equations. The third step of the process was the construction of the cash flow module, and the fourth step was the construction and testing of the model that OFHEO would deliver for use by SBA. OMB officials also played a key role in the development of the model because, under FCRA, OMB has final responsibility for approving estimation methodologies and determining subsidy estimates. SBA officials said they consulted with OMB during the model's development until OMB approved it in the fall of 2002. OMB officials told us that they considered the model to be an improvement over the previous method that SBA used to calculate the program subsidy rate because it used better data and the econometric equations allowed for more accurate estimates of future cash flows. In addition, SBA could now use the model to consider both programmatic and economic variables in estimating the subsidy rate. For example, they said SBA could model how such variables as lender type affected the subsidy rate. In reviewing the model, OMB officials told us that they focused on the methodology of the model, the cash flow projections, the appropriate use of variables in the econometric equations, and the validity of the data used to make the calculations. They approved the model in November 2002. SBA hired a private consulting firm to conduct an independent limited review of the model from September 2002 to October 2002, as part of its ongoing process to identify errors before OMB approved the model. The consulting firm assessed the model conceptually and evaluated its underlying computer programming—specifically, the key data inputs that were the primary source of the model's cash flows and the model's programming specifications (to ensure they were correctly coded and that the code functioned properly). The firm also assessed the model's compliance with the relevant statutes and regulations and conducted scenario testing to evaluate how it performed under different economic assumptions. The consulting firm concluded that although the model performed reasonably well in estimating the subsidy cost, SBA had made errors in its estimates of loan guaranty and servicing fees, recoveries, and prepayment penalties. SBA made changes to the model to address the identified discrepancies for fees and recoveries, the net effect of which was to increase the subsidy rate estimate by about 36 percentage points. The consulting firm also determined that the model lacked adequate documentation and was, therefore, unable to review the econometric component of the model.
However, OFHEO subsequently provided SBA with a report documenting the model's development to a limited extent. In developing its new econometric model, SBA did not prepare adequate supporting documentation to enable independent reviewers to understand and evaluate the process that was used. For example, the independent contractor SBA hired to review the 7(a) credit subsidy model was hampered by the lack of adequate documentation and, as a result, the contractor's review of the model's theoretical basis and its working features was severely limited. While SBA later developed some general documentation of its model development process, this documentation did not contain, among other things, an adequate discussion of alternative variables, or combinations of variables, that it considered, tested, and rejected, and the reasons for rejecting them. SBA officials told us that they did not prepare this type of documentation because they believed that there was no specific requirement to do so. Current guidance is either silent or unclear about the supporting documentation needed to explain the development of econometric models used to generate credit subsidy estimates for the budget and financial statements. Nevertheless, we believe that maintaining adequate documentation on how such models were developed is a sound internal control practice that would provide SBA and other agencies the opportunity to demonstrate and explain the rationale and basis for key aspects of their models that provide important cost information for budgets, financial statements, and congressional decision makers. Moreover, as a practical matter, this documentation would help facilitate SBA's and other agencies' annual financial statement audits. BearingPoint, the independent contractor hired to perform an initial review of the SBA 7(a) credit subsidy model prior to its finalization, was hampered by the lack of adequate documentation. In response to our inquiry, the contractor stated that its team did not validate the model, which, from an audit perspective, would have encompassed a more robust effort. In its final report to SBA, the contractor reported that SBA lacked sufficient supporting documentation for a “thorough review of its theoretical basis (including alternative modeling methodologies explored), its working features, or the update and maintenance procedures necessary to use the model on an ongoing basis. This lack of adequate documentation severely limited our ability to assess certain critical parts of the model in detail, including its econometric components.” Further, the contractor recommended that “SBA develop a robust set of documentation to support this model” including “the modeling methodology, alternate methodologies considered, data inputs and outputs, and model maintenance and update requirements.” In its January 30, 2004, audit report, Cotton and Company, the independent public accounting firm, identified in its internal control report nine specific deficiencies in the model's documentation. These deficiencies included, for example, a lack of technical references for the statistical method used for the performance of the model, the absence of mathematical specifications, the fact that important variables were not clearly identified, and the fact that units of measure for key variables were not specified.
In addition, the audit report stated that the documentation that was provided was “self-contradictory” about the quality of the default and prepayment model and lacked a discussion of the assumptions and limitations of SBA's modeling approach. In responding to the independent public accountant's internal control report, SBA's Chief Financial Officer generally agreed with the report's findings, including the deficiencies in SBA's model documentation, and stated that the internal control report presented “fundamentals of good financial management and SBA is committed to accomplishing as many of these items as possible in the coming year.” In response to BearingPoint's recommendation, SBA's OFHEO contractor prepared some documentation for the model, but this documentation was not sufficient to allow us and SBA's financial statement auditor to gain an adequate understanding of certain key parts of the model development process. For example, the documentation that SBA provided included a broad overview of how the model works, a list of the variables that the final econometric equations included, the estimated coefficients of the equations, and figures showing how well the equations fit the data during the historical period. For some variables, SBA's documentation indicated how the variables were expected to influence default or prepayment probabilities, but it did not provide any reasons, conceptual justification, or supporting empirical analysis. Some of these statements seemed intuitive, such as the expectation that default rates will drop when the output of the economy, as measured by the percent change in real GDP, increases. However, other statements were not intuitive. For example, SBA's documentation indicated that larger loans were expected to default at elevated levels but did not include any support for this assertion. Additionally, the model documentation did not explain in sufficient detail why SBA excluded some variables. Rather, the model documentation included a table of 29 variables that were tested and rejected and stated that the information presented was “a list of most variables tested.” The documentation also provided a general overview of why these 29 variables were excluded. SBA's documentation stated that “variables were removed for a variety of reasons. Some of the reasons include—insignificant, highly correlated with other variables, low economic importance (significant but impact on probabilities was negligible), inconsistent results (variable was not robust to different specifications), and incoherent results (results could not be reconciled with any economic logic).” While the documentation that SBA provided to us contained acceptable reasons that economists could cite in rejecting variables, its lack of specificity did not allow us to determine which variables were rejected for which reasons. Further, we were unable to determine whether these were the only criteria or whether they were consistently applied throughout the model development process. SBA and the OFHEO contractor told us that, during the model development process, approximately 800 pages of raw testing information were generated and retained in an electronic file. They further stated that these 800 pages were not organized in any fashion and that there was no summary document or road map with greater detail than the model documentation provided to us that would describe the variable-testing process or the results of that process in an understandable fashion.
In addition, SBA and the contractor told us that the variables reflected in the 800 pages were not recorded in English words, but rather in mnemonics, and that there was no crosswalk or key still in existence to decode the mnemonics. Based on these representations by SBA and its contractor, we initially concluded that this information would be of questionable or no usefulness in assessing SBA's development of the assumptions and selection of variables used in the modeling process. SBA eventually provided us access to the 800 pages of material that contained some information on variables that were considered and rejected. This document was a partial compilation of analyses conducted during the model development process with no explanation or discussion of what was learned from each analysis conducted. Thus, on its own, this document provided little additional information regarding the process that SBA's contractor followed in developing the econometric equations used in the subsidy model. Further, the document was written in mnemonics and was not organized in any logical manner. In addition, SBA officials could not identify any specific parts of this documentation that related to alternative variables that were considered and rejected during the model development process. Documenting the basis for selecting and rejecting variables from an econometric model used to develop credit subsidy estimates is an important internal control that would also help to provide financial statement auditors reasonable assurance that a bias was not introduced into the credit subsidy estimates by systematically excluding variables to influence the subsidy rate in a particular direction. Statement on Auditing Standards Number 57, Auditing Accounting Estimates (SAS No. 57), states that “even when management's estimation process involves competent personnel using relevant and reliable data, there is potential for bias in the subjective factors.” When evaluating the reasonableness of an estimate, the auditor should concentrate on, among other things, “key factors and assumptions that are subjective and susceptible to misstatement and bias.” Because of the nature of econometric models and the effect that the variables used have on future loan default and prepayment projections, auditors need to understand both what was included in the model and what was excluded from it to assess the reasonableness of the credit subsidy estimate from a financial accounting perspective. As our work demonstrated, changing the variables that were included in the model changed the subsidy rate. Because of the lack of adequate documentation on SBA's 7(a) model development process, we were unable to determine whether a bias in selecting variables existed in the model. Further, SBA's lack of adequate documentation on the 7(a) model development process could have impeded our ability to reach a conclusion on SBA's loan accounts in connection with the audit of the consolidated financial statements of the federal government. Currently, there is limited specific guidance on the nature and extent of documentation that agencies must prepare related to the development of models to generate credit subsidy estimates. OMB Circular A-11, Preparation, Submission, and Execution of the Budget, provides guidance on how agencies should prepare credit subsidy estimates.
Circular A-11 does not include any guidance to the agencies for documenting their model development process, including the selection and rejection of variables for use in the models that generate federal credit subsidy estimates. However, Federal Financial Accounting and Auditing Technical Release 6, Preparing Estimates for Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act Amendments to Technical Release 3: Preparing and Auditing Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act, provides some implementation guidance about the nature and extent of documentation agencies should have for their models. Technical Release 6 states that agencies should document the cash flow model(s) used and the rationale for selecting the specific methodologies. Agencies should also document the sources of information, the logic flow, and the mechanics of the model(s), including the formulas and other mathematical functions. In addition, because the model is the basis for budget and financial statement credit subsidy estimates, this documentation also facilitates an OMB budget analyst's review (if the analyst is not involved in the development process), the external financial statement audit, and other independent reviews. Technical Release 6 also states that agency documentation for subsidy estimates and reestimates should be complete and stand on its own, enabling an independent person to perform the same steps and replicate the same results with little or no outside explanation or assistance. In addition, if the documentation were from a source that would normally be destroyed, then copies should be maintained in the file for the purposes of reconstructing the estimate. Technical Release 6 does not specifically address expected documentation of an agency's model development process, including a detailed discussion of alternative variables that are considered, the reasons for their rejection, and specific examples based on results of earlier regressions. Nevertheless, in our view, the documentation principles in this Technical Release represent sound internal control practice that could also be applied to an agency's development of a model used to generate budget and financial statement credit subsidy estimates. Such documentation would introduce transparency into an agency's budget process and enable agencies' models and the resulting estimates to withstand scrutiny and inquiry from independent reviewers. For example, such documentation would allow validation of an agency's model by independent reviewers, and it would provide reasonable assurance that the agency selected and rejected assumptions and variables for the model on a sound basis. Further, this documentation would help demonstrate to congressional stakeholders sound decision making and stewardship over millions of dollars in appropriated funds. Calculating a reliable credit subsidy estimate requires that the key cash flow data, such as defaults or recoveries, and the timing of these events be reliable; otherwise, the credit subsidy estimate could be affected. Internal control standards call for agencies to have a process to help ensure the completeness, accuracy, and validity of all transactions processed. SBA's monthly reconciliation process, combined with lender incentives and loan sales, helped ensure the quality of the underlying data used in its credit subsidy estimation process.
Although some errors existed in SBA's databases at the time of our review, the nature and magnitude of these errors were unlikely to significantly alter the subsidy rate. Further, we tested the data used by SBA's new econometric model and found them to be consistent with the data in SBA's loan systems at the time of our review. The primary method that SBA used to help ensure the integrity of its loan data is its Form 1502 reconciliation process. Reconciliations are an important internal control established to ensure that all data inputs are received and valid and that all outputs from a particular system are correct. This process, which has been in effect since October 1997, utilized an SBA contractor to conduct monthly matches of borrower data submitted by 7(a) program lenders on SBA's Form 1502 to the information in the agency's Portfolio Management Query Display System to help ensure the completeness and accuracy of the agency's data. The information on the Form 1502 included a wide variety of data for an individual loan, some of which were used in the credit subsidy estimation process, including, among other things, the loan identification number; the loan status, such as current, past due, or in liquidation; the loan interest rate; the portion of the loan guaranteed by SBA; and the ending balance of the loan's guaranteed portion. Errors identified by this match were loaded each month into SBA's Portfolio Management Guaranty Information System, which district office staff accessed to work with lenders to correct the erroneous data. Although we did not independently test the data match conducted by SBA's contractor or the field office staff's correction of identified errors, we reviewed summary reports of the errors in the Guaranty Loan Reporting System for each district office over a 4-month period during fiscal year 2003 and found that most of these reported errors were resolved during the month the errors were identified. During the months we reviewed, the percentage of errors resolved ranged from a low of about 65 percent to a high of nearly 89 percent. Although one month we reviewed had only a 65 percent resolution rate, leaving 4,860 errors uncorrected at the end of the month, as explained in the following paragraph, not all of these errors would affect the subsidy estimate, and this number is relatively small compared to the large volume of loan transaction-level data used in the credit subsidy estimation process. Our review of the underlying data used in the model showed that about 5.7 million data records were used to record the quarterly loan performance of 392,315 loans from 1988 through 2001. In order to assess whether the remaining errors in SBA's database would likely have a significant effect on the credit subsidy estimation process, we reviewed the 38 different error codes that are reported monthly by the Guaranty Loan Reporting System and found that less than half of these error codes were related to data used by the econometric model and, as a result, could have affected the credit subsidy estimate. For example, the Guaranty Loan Reporting System identified errors for lender contact name and phone number—data that were not used by the new econometric model and would not affect the subsidy estimate. Other error codes relating to the guaranteed portion principal balance or whether a loan was in liquidation status could affect the credit subsidy estimate if the number of errors and their dollar volume were significant.
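Conceptually, the monthly Form 1502 match is a keyed reconciliation between lender-reported records and the agency's portfolio records. The sketch below shows the general pattern; the file layouts and field names are hypothetical and do not reflect SBA's actual systems.

```python
# Conceptual sketch of a Form 1502-style monthly match; file layouts and
# field names are hypothetical, not SBA's actual systems.
import pandas as pd

lender = pd.read_csv("form_1502_month.csv")    # lender-reported loan records
agency = pd.read_csv("portfolio_system.csv")   # agency portfolio records

merged = lender.merge(
    agency, on="loan_id", how="outer",
    suffixes=("_lender", "_agency"), indicator=True,
)

# Records present on only one side are completeness errors.
unmatched = merged[merged["_merge"] != "both"]

# For matched records, flag disagreements in fields the subsidy model uses.
both = merged[merged["_merge"] == "both"]
errors = both[
    (both["loan_status_lender"] != both["loan_status_agency"])
    | (both["guaranteed_balance_lender"] != both["guaranteed_balance_agency"])
]

# Error listings like these would then go to district offices for correction.
print(len(unmatched), "unmatched;", len(errors), "field mismatches")
```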
We reviewed a 6-month summary error report from the Guaranty Loan Reporting System for activity between February and July 2003 and found that, for those error codes that could affect the credit subsidy estimate, only two had error rates that exceeded 1 percent of the transactions. One of these codes indicated that the loan status was not correct because the loan was in liquidation; it had an average error rate of about 1.4 percent for the 6-month period we reviewed. The other error code indicated that the bank did not report any information for a particular loan; it had an average error rate of about 2.4 percent for the same time period. The remaining 11 error codes that could have affected the credit subsidy estimate had rates of less than 1 percent. We assessed the error rates on this report in aggregate to determine whether they could affect the credit subsidy estimate and found that the average aggregate error rate was about 6.5 percent during this period. However, given that most of these errors were corrected in the month the error was identified, it was unlikely that the remaining uncorrected errors would affect the credit subsidy estimate at the time of our review. In addition to the monthly loan data reconciliation process, lender incentives also helped ensure the integrity of the underlying data used in the credit subsidy estimates. In accordance with SBA policy, the agency can reduce or completely deny a lender's claim payment if the defaulted loan data are not correct. According to SBA officials, this policy gives the 7(a) program lenders an incentive to correct data errors because it helps ensure they will be paid the full guarantee amount if the borrower subsequently defaults on the loan. SBA provided us with repair and denial data for fiscal year 1999 through the first three quarters of fiscal year 2003 showing that the agency exercised these options 2,177 times during this period, totaling at least $69.9 million. Further, an ancillary benefit of SBA's loan sales program was to help ensure data integrity. Prior to a sale, SBA district office staff, as well as contractors, reviewed loan files as part of “due diligence” reviews to provide accurate information about the loans available for sale so that potential investors could make informed bids. SBA officials told us that, prior to selling a loan, discrepancies between the lenders' data and SBA's data had to be resolved. In order to assess the consistency between the data used in SBA's econometric approach and the data in SBA's loan system, we selected and tested a stratified random sample of 400 items to test key data that could affect the credit subsidy estimate and found no errors. Specifically, we randomly selected 100 default and 100 recovery transactions and compared the amounts and transaction dates between the loan system data and the loan-level data used for the credit subsidy estimate. In addition, we randomly selected 100 loans identified by the model as prepaid, reviewed the loan histories in SBA's database, and determined that all of these loans were paid off prior to their scheduled termination date. Further, we tested 100 additional loans, comparing their status, such as current, paid off, or default, to ensure their status in the model was correct, and found no errors. We also assessed the magnitude of 7(a) loans that were excluded from the model in order to determine whether excluding these potentially valid loans would likely affect the credit subsidy estimate.
Our earlier work on SBA's previous 7(a) credit subsidy model, which primarily used historical averages of defaults and recoveries, found that excluding loans from certain years that had higher default rates would lower the overall average default rate. Excluding large numbers of loans from this model would likely have a similar effect on the estimated subsidy rate. To assess the magnitude of excluded loans, we reviewed the computer coding for the econometric model and found that SBA excluded loans when critical data for the model were missing, such as the initial disbursement date, the loan amount, or demographic information on the borrowers. For most of the years between 1988 and 2001, the proportion of loans excluded because they lacked these essential data ranged from 1 percent to 2 percent; overall, we concluded that the extent of excluded loans was acceptable and would not have significantly affected the credit subsidy estimation calculation at the time of our review. Overall, we found that, from an economics perspective, SBA's econometric equations for its 7(a) credit subsidy model were reasonable. However, from an audit perspective, SBA's lack of adequate documentation of the model development process precluded us from (1) independently evaluating the model's development; (2) determining whether SBA used a sound and consistently applied method to select and reject variables to be included in the model; and (3) determining whether a bias in selecting variables existed in the model. Based on our review, SBA's econometric equations for estimating defaults, prepayments, and recoveries, which were used to derive the estimate of its fiscal year 2004 subsidy costs, were reasonable. This model's methodology has the potential to produce more reliable estimates than the previous method of using historical averaging to project the estimated program cash flows because this model relies on economic reasoning in addition to historical program data. However, the precision of any econometric model is limited; any estimate produced by such a model should be considered one point in a range within which the actual subsidy cost will likely fall. Because the budget process requires agencies to select a specific estimate rather than project a range, there will likely be some variance between the forecasted and actual subsidy amounts. Using additional data that SBA anticipates gathering in its new loan monitoring system, such as borrower-specific data, could further enhance the reliability of SBA's estimates of the subsidy cost. Although the errors we identified in the model did not materially affect the subsidy cost estimate, they did indicate that the process SBA used to validate the model could be improved. Therefore, it is important to invest the resources needed to periodically reevaluate the underlying assumptions of any model to ensure that they are correct and comprehensive, and that any errors or erroneous assumptions are corrected so that the model continues to yield reasonable results. While we found SBA's equations to be reasonable from an economics perspective, the lack of adequate documentation of the model's development process hampered three independent reviews of the 7(a) model.
Notwithstanding the current lack of clear OMB Circular A-11 guidance, SBA could benefit from applying the documentation principles embodied in Technical Release 6 to the development of the 7(a) econometric model and other credit subsidy estimation models it has recently developed or is currently developing. Without adequate documentation, SBA will be unable to transparently demonstrate the rationale and basis for key aspects of models that provide important cost information for budgets, financial statements, and congressional decision makers. Although OMB provides guidance on how agencies should prepare credit subsidy estimates in Circular A-11, it does not include any guidance to the agencies for documenting their model development process, including the selection and rejection of variables for use in the models that generate federal credit subsidy estimates. A lack of improved OMB guidance for model documentation will continue to hamper adequate external oversight and validation of models used to generate credit subsidy estimates. We are making three recommendations to SBA and one to OMB. To further enhance the reliability of SBA's subsidy estimates, we recommend that the SBA Administrator take the following two actions:

determine how best to include in future subsidy models borrower-specific information, such as credit scores and loan-to-value ratios, to be collected in the new loan monitoring system; and

ensure that the model remains reasonable by establishing a process for periodically evaluating the model to correct any errors and revising it to reflect changes in the 7(a) business loan program or other factors that could affect the subsidy estimate.

To demonstrate and explain the rationale and basis for the 7(a) econometric model and all other models developed, we recommend that the SBA Administrator take the following action: prepare and retain adequate documentation of the model development process, including a detailed discussion of the alternative variables or combinations of variables that were considered, tested, and rejected, as well as the reasons for rejecting them. To facilitate (1) validation of models used to generate credit subsidy estimates, (2) external oversight, and (3) financial statement audits, we recommend that the Director, OMB, take the following action: revise OMB Circular A-11 to require that agencies document the development of their credit subsidy models, including the process followed for selecting modeling methodologies over alternatives and the variables tested and rejected, along with the basis for excluding them. We provided an initial draft and a revised draft, based on our review of additional model documentation, to both SBA and OMB for review and comment. While our initial draft was at the agencies for comment, we continued to pursue additional documentation that SBA had to further explain its 7(a) model development process, including which variables were selected or rejected, and why. When we eventually obtained access to the 800 pages of SBA material, we determined that it was not organized and included no road map to describe the variable-testing process or its results. We concluded that this information was of questionable or no usefulness to our assessment of SBA's modeling process. We addressed the weaknesses in SBA's documentation in the revised draft report and provided it to SBA and OMB for comment.
In commenting on the initial draft, SBA's Chief Financial Officer (CFO) generally agreed with our findings and the first two recommendations related to actions to further enhance the reliability of the model's subsidy estimates. OMB did not provide any comments on the initial draft report. We received comments on the revised draft from SBA's CFO, who generally disagreed with our findings and recommendations related to the lack of adequate documentation supporting the model's development process. We also received comments on the revised draft from the OMB Assistant Director for Budget and the Controller, who disagreed with our recommendation that OMB revise Circular A-11. Their written comments are reprinted in appendixes III and IV, respectively, and are summarized below. Both agencies provided technical comments that we have incorporated into the report as appropriate. In commenting on our final draft report, SBA stated that it had provided us with extensive documentation, briefings, and explanations about how the model was developed. We met with SBA officials and their contractor who constructed the model and discussed their methodology, but we were unable to corroborate this information with the documentation they subsequently provided. SBA's comment letter stated that it provided us with 800 pages of material that contained some information on variables that were considered and rejected. During our subsequent review of this material, we found that this documentation was a partial compilation of analyses conducted during the model development process with no explanation or discussion of what was learned from each analysis conducted. After reviewing all of this documentation, as discussed in the report, we concluded that it provided little additional information to enable us to understand and corroborate the process and criteria that SBA used to select and reject variables for its 7(a) model. Our conclusions regarding the lack of adequate documentation for the model's development process were consistent with those of both the independent contractor SBA hired to review the model in 2002 prior to its implementation and the independent public accounting firm that audited SBA's fiscal year 2003 financial statements. As part of its January 30, 2004, audit report, the independent public accounting firm identified in its internal control report nine specific deficiencies in the model's documentation. These deficiencies included, for example, a lack of technical references for the statistical method used for the performance of the model, the absence of mathematical specifications, the fact that important variables were not clearly identified, and the fact that units of measure for key variables were not specified. In addition, the audit report stated that the documentation that was provided was “self-contradictory” about the quality of the default and prepayment model and lacked a discussion of the assumptions and limitations of SBA's modeling approach. While SBA's CFO agreed with the independent accounting firm's findings regarding the lack of adequate documentation for the credit subsidy model, he disagreed with the similar weaknesses identified in our report. SBA disagreed that its lack of adequate documentation on the 7(a) model development process could impede our ability to reach a conclusion about SBA's loan accounts in connection with the audit of the consolidated financial statements of the federal government.
Instead, SBA believed mandating additional documentation would establish a new and unnecessary requirement. Our comment was in regard to our responsibility as the auditor of the consolidated financial statements of the federal government and does not establish a new or unnecessary requirement for SBA. For the consolidated financial statement audit, we evaluate the reasonableness of credit program estimates based on audit guidance in SAS No. 57. In auditing estimates, SAS No. 57 states that an auditor should consider, among other things, the process used by management to develop the estimate, including determining whether or not (1) relevant factors were used, (2) reasonable assumptions were developed, and (3) biases influenced the factors or assumptions. SBA’s lack of adequate documentation of the 7(a) model development process impaired our ability to make such an assessment. OMB disagreed with the recommendation that Circular A-11 should be revised and believed that the report did not demonstrate that revisions were needed. OMB officials commented that they worked closely with SBA during the model development process and believed that the documentation SBA provided to OMB was adequate for them to determine that the subsidy estimates and reestimates were reasonable. OMB also did not concur with our statement that a lack of improved OMB guidance hampered adequate external oversight. Unlike OMB, in this case, we and other external reviewers did not have the opportunity to work with SBA during the model development process and, as a result, relied on oral explanations and documentation provided by SBA staff and its contractor who developed the model. Further, we attempted to corroborate SBA’s statements with the documentation that SBA provided. However, as we reported, three independent external reviews of SBA’s 7(a) model were hampered by a lack of adequate documentation of SBA’s model development process. We reaffirm our conclusion that adequate documentation is needed for the SBA 7(a) model’s development and that independent external review and oversight will continue to be hampered without a requirement to provide adequate documentation about how econometric models are developed. OMB stated that Ernst and Young was able to independently validate SBA’s 7(a) model with the available documentation. According to OMB, this firm stated that the 7(a) model assumptions and methodology appeared to be reasonable and accurate. We obtained and reviewed the reports OMB cited and found that the firm was not hired to validate or review the same segments of the model that we reviewed. This series of reports was related to the cash flow module of the 7(a) model, as well as the model used to calculate reestimates, but did not review the econometric equations or the model’s development process. In its report, the firm explicitly stated that it was not reviewing the same parts of the model that we reviewed. We confirmed this information in conversations with the accounting firm’s engagement partner and concluded that this firm’s work was not relevant to the findings and conclusions presented in our report. OMB also commented that SAS No. 57 states that internal controls over accounting estimates may or may not be documented. While SAS No. 
57 does state that the process for preparing accounting estimates may not be documented, it also states that auditors should assess whether there are additional key factors or alternative assumptions that need to be included in the estimate and assess the factors that management used in developing the assumptions. Further, SAS No. 57 states that auditors should concentrate on key factors and assumptions that are subjective and susceptible to misstatement and bias. We believe this includes the selection and rejection of variables that can be included in the model. Without adequate documentation of the credit subsidy model development process, it is difficult for auditors to fulfill their responsibilities to assess these areas. OMB also commented that SBA fulfilled the management responsibilities described in SAS No. 57 regarding internal controls for accounting estimates. We disagree with this statement and point out that SAS No. 57 provides guidance for auditing accounting estimates as part of conducting financial statement audits rather than directing agency management's actions. Management's responsibilities for internal control are contained in our “Standards for Internal Control in the Federal Government,” which states, among other things, that “internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination.” Further, as previously stated, Cotton and Company also identified the lack of adequate model documentation as an internal control weakness. Moreover, SBA's CFO generally agreed with the findings in the independent public accountant's report, including the deficiencies in SBA's model documentation, and stated that the internal control report presented “fundamentals of good financial management and SBA is committed to accomplishing as many of these items as possible in the coming year.” OMB also stated that requiring agencies to prepare additional documentation of the variables tested and rejected would be unduly burdensome. We disagree with this statement and note that this documentation would only need to be prepared when a model is developed or when significant updates are implemented. Further, this requirement would be consistent with other segments of OMB Circular A-11 that require agencies to provide supporting documentation for their budget submissions. However, as we mentioned in the report, there is currently no explicit guidance for agencies to document the development of the models that are used to generate credit subsidy estimates. OMB also commented that we received sufficient information to test alternative variables to measure the reasonableness of the final SBA credit subsidy model. We note that our work demonstrated that using additional variables that were also reasonable changed the subsidy estimate. We believe that this work highlights the need for agencies to document their basis for rejecting variables or combinations of variables from their final credit subsidy models. By documenting this work, agencies will be able to demonstrate to independent reviewers that a bias from variable selection does not exist in the final model. Both agencies provided technical comments that we incorporated into the report as appropriate. The written comments of both agencies are reprinted in appendixes III and IV.
We are sending copies of this report to the Chair of the Senate Committee on Small Business and Entrepreneurship, other appropriate congressional committees, the Administrator of the Small Business Administration, and the Director of the Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or [email protected] or Katie Harris, Assistant Director, at (202) 512-8415 or [email protected]. Key contributors to this report are listed in appendix V. As agreed with your staff, we (1) assessed the reasonableness of the model's econometric equations and evaluated the model's estimated default, prepayment, and recovery rates based on the 7(a) program's recent historical loan experience; (2) identified additional steps SBA could take to further enhance the reliability of the subsidy estimate produced by the model; (3) reviewed SBA's process for developing the subsidy model; (4) evaluated the model's supporting documentation, including its discussion of what variables were tested and rejected; and (5) determined what steps SBA has taken to ensure the integrity of the data used in the model and whether these data are consistent with information in its databases. We did not validate SBA's model. To analyze the model, we obtained from SBA copies of the model as approved by the Office of Management and Budget (OMB), along with the loan-level data that were used to develop the subsidy estimates. We analyzed the econometric equations to determine whether they were reasonable based on the variables they included, the statistical techniques used, and the results obtained. For example, we determined whether the econometric equations included appropriate variables and whether the variables used in the equations were statistically significant. To evaluate the model's estimated default and recovery rates, we compared these rates with the recent historical loan experience of the 7(a) program provided by SBA. Using SBA's data, we also calculated what SBA would have estimated for default and recovery rates based on the estimation methodology it used prior to its fiscal year 2003 budget submission. (See app. II for a detailed discussion of our analysis of the reasonableness of the model's econometric equations.) To identify additional steps SBA could take to enhance the reliability of its model, we considered additional types of data that SBA might collect and consider including in its econometric equations. As part of this analysis, we reviewed the academic literature on default modeling and interviewed officials with several banks engaged in similar efforts. To determine SBA's process for developing the model, we met with SBA officials in the Chief Financial Office who were responsible for estimating the 7(a) program subsidy costs. We also met with OMB officials who were responsible for approving the model. Finally, we reviewed available documentation on the model's development provided by SBA and the report by the private consultant who reviewed the model. To evaluate the model's supporting documentation, including its discussion of what variables were tested and rejected, we obtained and analyzed available relevant documents and met with SBA officials and their contractor who developed the model.
We compared the information presented in SBA's model documentation with existing credit subsidy guidance, including OMB Circular A-11 and Federal Financial Accounting and Auditing Technical Release 6: Preparing Estimates for Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act Amendments to Technical Release 3: Preparing and Auditing Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act. We also assessed the impact the lack of documentation would have on SBA's financial statement audit by comparing the documentation with Statement on Auditing Standards Number 57, Auditing Accounting Estimates. SBA and its contractor told us that the 800 pages of raw testing information contained in an electronic file were not organized in any fashion and that there was no summary document or road map with greater detail than the model documentation provided to us that described the variable-testing process or the results of that process in an understandable fashion. In addition, SBA and the contractor told us that the variables reflected in the 800 pages were not recorded in English words, but rather in mnemonics, and that there was no crosswalk or key still in existence to decode the mnemonics. Thus, no documentation existed that would link the variable names used in the programming to a table of variable descriptions. We obtained and reviewed a copy of this documentation and confirmed the representations of SBA and its contractor. To determine what steps SBA took to ensure the integrity of the data used by the model, we met with SBA officials to gain a general understanding of the agency's data integrity efforts. We also assessed the number of errors that were resolved by the district offices each month by analyzing 4 months of fiscal year 2003 field office activity from the Form 1502 Guaranty Loan Reporting System. We further assessed whether the errors remaining at the end of the month would likely affect the credit subsidy estimate by analyzing the types of errors tracked by the system and determining which errors affected data used by the new model. We also assessed the magnitude of these errors by analyzing 6 months of fiscal year 2003 activity in the Guaranty Loan Reporting System. To determine whether the data in the new model were consistent with data in SBA's loan-level databases, we selected and tested a stratified random sample of 400 key data elements that could affect the credit subsidy estimate. Specifically, we randomly selected

100 default and 100 recovery transactions and compared the amounts and transaction dates between the loan system data and the loan-level data used for the credit subsidy estimate;

100 loans identified by the model as prepaid and reviewed the loan histories in SBA's database to determine whether all of these loans were paid off prior to their scheduled termination date; and

100 additional loans and compared their status, such as current, paid off, or default, to determine whether their status in the model agreed with SBA's loan-level databases.

This appendix provides more detail on the three econometric equations that the Small Business Administration (SBA) used to estimate the subsidy rate for its 7(a) loan guarantee program and the expanded equations that we developed. These equations are used to forecast defaults, prepayments, and recoveries. The first section of this appendix describes the variables that SBA used in the default and prepayment equations and presents SBA's estimated coefficients.
The second section explains how we created the variable that we used to represent the borrower's industry and presents the estimated coefficients from our expanded default and prepayment equations. The third section describes the equation that SBA used to forecast recoveries and presents the estimated coefficients from that equation. In its new model for estimating the subsidy rate for the 7(a) loan program, SBA uses multinomial logistic regression to estimate the likelihood of defaults and prepayments as functions of a variety of explanatory variables. Because multinomial regression is a simultaneous estimation process, the default and prepayment equations are identically specified (that is, the same explanatory variables are used in each equation). SBA conducts its analysis at the level of the individual loan, using loans that were disbursed from 1988 through 2001. For each loan, SBA's data set contains an observation for each quarter that the loan is active. For example, if a loan prepays at the end of the third year (counting the disbursement year as the first year), then it is active during 12 quarters and, therefore, there are 12 observations for that loan in the data set. For each observation, the dependent variable measures whether in that quarter the borrower defaults on the loan, prepays the loan, or keeps it active. As a result, the coefficients in the default or prepayment equation are estimates of the association of each explanatory variable with the likelihood of the loan defaulting or prepaying in that quarter. There are several categories of explanatory variables included in the default and prepayment equations. The first group consists of a set of dummy variables that indicate the age of the loan. These variables thus serve to reflect the fact that prepayment and default behavior change as a loan seasons. Specifically, there is a dummy variable for each of the first ten quarters of the life of a loan. From the eleventh quarter to the thirty-fourth quarter, there is a dummy variable for each two consecutive quarters. Finally, if a loan remains active past an age of thirty-four quarters, there is one more dummy variable. The second set of explanatory variables concerns loan characteristics. A set of dummy variables indicates the contractual term of the loan at origination. The categories are less than 5 years, 5 up to 10 years, 10 up to 15 years, and 15 years or greater; less than 5 years serves as the omitted category in the regression. Loan amount is another characteristic and is measured in millions of dollars. SBA also includes a dummy variable that shows whether a loan was delivered through the SBA Express Program. Also known as Subprogram 1027, this program allows lenders to originate a loan using their own loan documents instead of SBA documents and processing, but the loan guarantee is only up to 50 percent. By comparison, the typical SBA guarantee is almost 80 percent. Finally, there is a set of dummy variables for type of lender: Regular, Preferred, and Certified. In the regression, the regular type serves as the omitted category. The next set of explanatory variables provides information on the borrower. A set of dummy variables identifies ownership structure. The categories are sole proprietorship, corporation, or partnership; sole proprietorship is the omitted category in the regression. An additional dummy variable indicates whether the borrower is a new business, and a set of dummy variables indicates the U.S. Census Bureau region where the borrower is located. The final set of explanatory variables contains two measures of economic conditions. The first is the unemployment rate in the state where the borrower is based; the source for these data is the U.S. Bureau of Labor Statistics. The second is the quarterly percentage change in gross domestic product, which SBA obtained from the U.S. Bureau of Economic Analysis. Table 1 summarizes the explanatory variables.
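Stated explicitly, the estimator just described takes the standard multinomial logit form below. The notation is ours, not SBA's: x_it collects the explanatory variables for loan i in quarter t, remaining active is the base outcome, and D and P index default and prepayment.

```latex
% Standard multinomial logit with "active" as the base outcome; notation ours.
P(y_{it} = j) = \frac{e^{\beta_j' x_{it}}}
                     {1 + e^{\beta_D' x_{it}} + e^{\beta_P' x_{it}}},
\qquad j \in \{D, P\},
\qquad
P(y_{it} = \text{active}) = \frac{1}{1 + e^{\beta_D' x_{it}} + e^{\beta_P' x_{it}}}
```

Because both coefficient vectors appear in every outcome probability, the default and prepayment equations are estimated jointly, which is consistent with the report's note that the two equations are identically specified.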
The coefficients in the SBA equations indicate that the probability of both defaults and prepayments generally increases and then declines as a loan seasons. Defaults peak during the eighth quarter, while prepayments peak around quarters 27 and 28. Longer-term loans are less likely to default or prepay. By comparison, larger loans are more likely to default or prepay. Good economic conditions, as reflected by the coefficients on unemployment and the percentage change in gross domestic product, reduce the chances of default and increase the likelihood of prepayment. The positive coefficients on the variable for new business indicate that such firms are more likely to default and prepay. Corporations and partnerships are less likely to default and more likely to prepay than sole proprietors. Finally, loans granted under Subprogram 1027 are less likely to default and more likely to prepay. Table 2 presents the coefficients in SBA's default and prepayment equations as well as some summary statistics. Although we found that SBA's default and prepayment equations are reasonable, we evaluated the impact of including additional variables in those equations and found that equations containing some additional variables are also reasonable. In particular, we found that when measures of interest rates and the industry of the borrower are included, these factors appear to be significantly related to the likelihood of defaults and prepayments. Table 3 presents the descriptions of the additional variables. Table 4 presents the coefficients from three alternative specifications of the default and prepayment equations as well as, for comparison purposes, the coefficients from SBA's equations. The first pair of alternative equations includes an interest rate variable, the second pair includes a set of dummy variables that identify the borrower's industry, and the third pair includes both the interest rate variable and the industry-specific dummy variables. The interest rate variable that we use is the interest rate on 1-year Treasury bills. We selected that rate, in part, because forecasted values are available for it that are consistent with the forecasted values SBA uses for other economic indicators in projecting future defaults and prepayments. To create the industry-specific dummy variables, we used data from SBA that identified the borrower's industry category, using either the Standard Industrial Classification (SIC) codes or the North American Industry Classification System (NAICS) codes. NAICS is the Department of Commerce's current system for classifying businesses into industries; in 1997, the NAICS codes replaced the SIC codes that Commerce previously used. When possible, for loans that had NAICS codes but not SIC codes, we converted the NAICS code into the corresponding SIC code. We aggregated the SIC codes into broader categories defined by the first digit of the code. To reduce the number of dummy variables, we aggregated some small categories.
In particular, we combined mining with construction and combined the small number of firms classified in the public administration industry with firms in the service industry, using that category as the omitted category in our regressions. As a result, the coefficients on the industry-specific dummy variables should be interpreted as the difference in the likelihood of default and prepayment from the likelihood for the service category. Table 5 shows how loans in SBA's database are distributed among categories defined by single-digit SIC codes. The coefficients for the interest rate on 1-year Treasury bills are positive and highly significant for the default equations, as expected, and negative and highly significant for the prepayment equation. Most of the coefficients for the industry-specific dummy variables are also statistically significant. As can be seen in table 4, the coefficients for most of the other variables in the equations are not much different in the alternative specifications from their values in SBA's equations. SBA uses an ordinary least squares regression equation to estimate the relationship between the cumulative net recovery rate for a cohort of loans and the age of the loans in that cohort. This equation differs from the default and prepayment equations in that it contains no economic or programmatic variables. As a result, forecasted recoveries on new loans will follow the historical pattern of recoveries on previously disbursed loans and will not depend on forecasted economic conditions. In addition, the unit of analysis is the cohort of loans rather than the individual loan. The recovery equation uses ordinary least squares to regress the cumulative net recovery rate on a set of dummy variables for the age of the cohort. The cumulative net recovery rate is defined as cumulative net recoveries to date divided by cumulative defaults to date. Each dummy variable covers two quarters, ranging from quarters 1 and 2 to quarters 55 and 56. As expected for a cumulative dependent variable, the coefficients are generally increasing. In addition, except for the variable indicating the first two quarters, they are highly statistically significant. The adjusted R² is 0.9776, showing a good fit. Table 6 gives the names and descriptions for the variables in the recovery equation, while table 7 shows the coefficients for that equation.
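In our own notation (tables 6 and 7 define the underlying variables and coefficients), the dependent variable and regression just described can be written as:

```latex
% Our notation for the recovery regression described above. Written without a
% separate intercept; equivalently, one age bin could be omitted and an
% intercept included.
R_{c,t} = \frac{\text{cumulative net recoveries for cohort } c \text{ through quarter } t}
               {\text{cumulative defaults for cohort } c \text{ through quarter } t},
\qquad
R_{c,t} = \sum_{k=1}^{28} \gamma_k A_{k,ct} + \varepsilon_{c,t}
```

Here A_{k,ct} equals 1 when cohort c's age at quarter t falls in the kth two-quarter bin (quarters 1 and 2 through quarters 55 and 56), and 0 otherwise.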
The following are GAO's comments on the Small Business Administration's letter dated February 19, 2004. 1. The Highlights page was adjusted to reflect SBA's position. 2. We adjusted the report text to recognize SBA's position. See page 1. 3. We adjusted the text to reflect that SBA subsequently provided us access to this documentation and provided a description of the documentation as well as an assessment of its usefulness in evaluating the model development process. See page 26. 4. We adjusted the text of the report. See page 7. 5. We acknowledge that SBA briefed us on the variables that were selected and rejected but note that we could not corroborate this briefing with the supporting documentation that SBA provided. See the Agency Comments and Our Evaluation section of the report, pages 35-36. 6. We do not concur with SBA that this change should be made to the report because the suggested material is redundant with the information provided on pages 24 and 25. 7. We adjusted the report text to clarify our position. 8. We do not concur with SBA. See the Agency Comments and Our Evaluation section of the report on page 36. 9. We concur with SBA's assertion that we have not proven that the model had a bias. Our report states that we were unable to determine whether such a bias existed because of SBA's insufficient documentation. 10. We concur with SBA's definition of an independent person in the context of this report and point out that our team that reviewed the 7(a) model met SBA's definition of an independent person. However, any revision of the definition of an independent person would need to be made by the Federal Accounting Standards Advisory Board. 11. We do not concur with SBA's statement that an independent party ensured that the 7(a) model was free from bias arising from variable selection. As we discussed, neither BearingPoint nor Ernst & Young, both of which SBA asserted were independent reviewers who ensured the model was free of bias, assessed the variable selection process. BearingPoint reported that its review was severely limited by the lack of documentation and did not assess the econometric segment of the model. Ernst & Young reported that, at the request of SBA, it did not assess the econometric component of the model. Thus, neither of these firms could assess whether a bias existed in the variable selection process. We also do not concur with SBA's statement that its tests show there was no identifiable bias in the model. While SBA may have tested its final model for bias, the agency has not provided us with any supporting documentation of these analyses. Further, testing the final model would not identify this type of bias. Rather, an analysis of the variable selection process, and of whether it was consistently applied to all variables tested, would more likely reveal whether such a bias existed in the final model. We also do not concur with SBA's suggested change to the conclusions of our report regarding whether a possible bias existed in the final model. The bias described in our report would result from variable selection or rejection. SBA discusses a statistical bias, that is, whether over the historical period the chosen model systematically underpredicts or overpredicts the likelihood of defaults or prepayments. To provide reasonable assurance that a bias was not introduced into the subsidy rate estimate through the choice of particular equations from among the set of reasonable equations, adequate documentation of the basis for selecting and rejecting variables is an important internal control. We were unable to determine whether this type of bias existed because of the lack of documentation on the model development process. The following are GAO's comments on the Office of Management and Budget's letter dated February 18, 2004. 1. We do not concur with OMB and believe that, in light of the consistent difficulty experienced in three independent reviews of SBA's 7(a) model, our report makes a case for the need to enhance the guidance in Circular A-11 to require agencies to document the process they used to develop the model. See the Agency Comments and Our Evaluation section of the report, pages 36-37. 2. We do not concur with OMB. See the Agency Comments and Our Evaluation section of the report, page 37. 3. We do not concur with OMB. See the Agency Comments and Our Evaluation section of the report, pages 37-38. 4. We do not concur with OMB. See the Agency Comments and Our Evaluation section of the report, page 38. 5. 
While we concur that agencies need discretion in the level of documentation they maintain when dealing with inconsequential matters, we do not agree that such discretion should extend to clearly consequential activities such as the development of the 7(a) model. We reaffirm our position that OMB needs to enhance its guidance regarding the need for adequate documentation of the credit subsidy model development process. 6. We concur with OMB that the fundamental nature and purpose of Circular A-11 is not to provide guidance on internal controls as they relate to financial statement audits. However, the primary focus of this report is on credit subsidy estimates, which are prepared in accordance with Circular A-11. Also, the financial statement audit is an important validation of the credit subsidy estimates included in the budget. We reaffirm our conclusion and recommendation that enhanced guidance on credit subsidy model development would facilitate external reviews of the credit subsidy estimate, including those performed by OMB. Because of the relationship between the credit subsidy estimates prepared for the budget and those used in the financial statements, the enhanced guidance would benefit both the financial statement audit and budgetary review. 7. Report language was revised to address technical points about Technical Release 6. However, as we discussed, this guidance does not specifically require documentation of credit subsidy model development. In addition to those individuals named above, Jay Cherlow, Dan Blair, Edda Emmanueli-Perez, Mitch Rachlis, Marcia Carlsen, Beverly Ross, Susan Sawtelle, and Mark Stover made key contributions to this report.
The Small Business Administration (SBA) approved about $8.6 billion in loan guarantees through its 7(a) loan program in fiscal year 2003. SBA must estimate the subsidy cost of this program. Since fiscal year 2003, SBA has used econometric modeling to estimate the subsidy. This report reviews SBA's estimation methodology and equations, assesses the default and recovery rates the model produced, identifies ways to enhance the estimates' reliability, describes the process used to develop the model, and analyzes SBA's data. From an economics perspective, SBA's econometric equations were reasonable, and its model produced estimated default and recovery rates that were in line with historical experience. However, from an audit perspective, SBA's lack of documentation of the model development process precluded GAO, and others, from independently evaluating the model's development and determining whether SBA used a sound and consistently applied method to select and reject model variables. Taking into account economic reasoning and research, SBA's econometric equations for estimating defaults, prepayments, and recoveries were reasonable. SBA's equations used a limited set of variables; equations using other variables could also be reasonable but would produce different estimates. Because an estimate is an approximation, no single estimate can be considered the accurate one, and reasonable estimates can fall within a range of values. The model's estimated default and recovery rates were in line with recent historical experience. SBA could improve its estimation methodology by periodically checking for and correcting errors and should consider adding more borrower information, such as credit scores. Some errors in the model resulted in understating the estimated program costs. SBA used the expertise of other agencies and a contractor to develop its model and worked closely with the Office of Management and Budget (OMB), which must approve the methodology agencies use to estimate subsidies. OMB officially approved the model in the fall of 2002. SBA did not adequately document its model development process, including the alternative variables considered and rejected, in enough detail to enable external reviewers to assess the process that was used. Further, GAO and two other independent reviewers could not determine whether a bias had been introduced into the model through the systematic exclusion of variables intended to influence the subsidy rate in a particular direction. Adequate documentation, a key internal control, would enable SBA and other agencies to demonstrate the rationale and basis for key aspects of the model, which provides important cost information for budgets, financial statements, and congressional decision makers, and would facilitate SBA's annual financial statement audit. Current OMB and other guidance is either silent or unclear about the level of documentation necessary for credit subsidy model development. SBA had a process to help ensure integrity and consistency between the data used in the equations and the loan-level data in its databases. Although errors existed in SBA's data systems, the magnitude and nature of these errors were not likely to significantly affect the subsidy rate.
From 1944 until the 1980s, the United States used nuclear reactors to produce plutonium and other materials for nuclear weapons. Plutonium was extracted from the fuels used by these reactors by a chemical process known as reprocessing. As a result of these activities, after the shutdown of weapons production and of some reprocessing plants at the end of the Cold War, DOE retained an inventory of spent nuclear fuel that had not been reprocessed, as well as high-level waste—which is one of the byproducts of reprocessing. Weapons production and related defense activities—such as the reprocessing of the Navy’s spent nuclear fuel to produce new fuel, which also created high-level waste—are the source of about 87 percent of DOE’s inventory of spent nuclear fuel and almost its entire inventory of high-level waste. Because weapons production and reprocessing of the Navy’s spent nuclear fuel have ended, DOE’s inventories of this waste are largely fixed. DOE is also responsible for managing other nuclear waste from a variety of sources, including some active programs that continue to add to DOE’s inventory. For example, DOE is responsible for managing spent nuclear fuel from the Navy through the Naval Nuclear Propulsion Program, which is jointly operated by DOE and the Navy. The Navy uses nuclear-powered ships and submarines in carrying out its missions. The spent nuclear fuel removed from these vessels is the primary driver of increases in DOE’s inventory, but it totals only 1 percent of DOE’s spent nuclear fuel inventory. The remainder of DOE’s inventory of nuclear waste comes from various nondefense sources, including spent nuclear fuel from its own test and experimental reactors, reactors at U.S. universities, and other government research reactors; commercial reactor fuel acquired by DOE for research and development; and fuel from foreign research reactors. For example, DOE stores fuel debris from the Three Mile Island accident that occurred in 1979 at a commercial nuclear power plant. It also stores spent nuclear fuel from three commercial power demonstration projects, including from the first commercial-scale high-temperature gas-cooled reactor plant in the United States, at the Fort St. Vrain site. In addition, the United States operates a program to take custody of spent nuclear fuel from foreign research reactors, which supports a U.S. policy to prevent the proliferation of nuclear weapons; this program is scheduled for completion in 2019. DOE currently stores its inventories of nuclear waste at five DOE sites. In 1995, DOE decided to consolidate nearly all of its spent nuclear fuel from other sites at three primary locations—the Hanford Site, Idaho National Laboratory, and the Savannah River Site—for storage and preparation for permanent disposal. The exception to this consolidation decision is DOE’s Fort St. Vrain site, which stores less than 1 percent of DOE’s total inventory. In 1999, DOE decided to store its high-level waste where it was generated, at the same three primary sites. In addition, DOE manages a small amount of high-level waste that resulted from the relatively brief operation of the only commercial reprocessing plant ever run in the United States. This waste was generated between 1966 and 1972 from reprocessing spent nuclear fuel at a site near West Valley, New York, where DOE is now responsible for storing it. Some of the nuclear waste at these sites requires further processing and packaging before it can be safely stored over the long term or removed for final disposal. 
In the case of spent nuclear fuel, this generally means removing it from storage pools of water and packaging it in stainless steel canisters. The processing and packaging of high-level waste is vastly more complicated—a massive enterprise in which DOE is removing waste from storage tanks and transferring it to treatment facilities. For example, at the Savannah River Site, DOE is vitrifying high-level waste by mixing it with a glass-forming material, melting the mixture into glass, and pouring it into stainless-steel canisters to harden. Across all sites, DOE expects to eventually produce about 20,000 canisters of solidified high-level waste. Once the wastes are stabilized, removing them from the sites would require a destination where they could be stored or permanently disposed of and a decades-long shipping campaign to get them there. Appendix I describes how the sites are in different stages of preparing spent nuclear fuel and high-level waste for final disposal. In the meantime, DOE manages many types of storage facilities, as illustrated in figure 1, of widely varying ages and conditions. For example, DOE has generally been moving spent nuclear fuel from wet storage in pools of water, designed to cool the fuel and provide radiation protection, to dry storage. Dry storage has numerous configurations, including underground storage vaults, only some of which are covered by a building, and casks on an outdoor pad or a railroad car. Overall, these storage facilities vary from aging to almost new; for instance, they range from a 1950s building at the Idaho National Laboratory to a high-level waste canister building constructed in 2005 at the Savannah River Site. DOE operates these five sites under a legal framework that includes self- regulation, as well as regulation by federal agencies and states. In contrast to the commercial nuclear industry’s sites, which are regulated by NRC, DOE generally operates under its own regulations for nuclear safety at its sites. In addition, DOE’s treatment, storage, and disposal of radioactive and hazardous wastes are governed by a number of federal and state laws, including the Resource Conservation and Recovery Act of 1976 (RCRA), as amended, which regulates the management of hazardous waste from generation to disposal. The Federal Facility Compliance Act of 1992 amended RCRA to require federal agencies, including DOE, to develop waste treatment plans for their sites that contain mixed wastes—certain wastes with both radioactive and chemically hazardous materials. For example, high-level waste is sometimes considered a mixed waste because it contains highly corrosive, organic, or heavy metal components that may be regulated under RCRA. These plans are approved by states that the Environmental Protection Agency (EPA) has authorized to administer RCRA or by EPA in states that have not been so authorized. Activities carried out under these plans are often governed by compliance agreements between DOE, EPA, and the states (state agreements), which regulate and oversee the activities. State agreements establish the scope of work to be performed at given sites, as well as “milestones”—specific dates by which these activities should be achieved. The agreements may also impose monetary or other penalties for missing milestones. Milestones may cover actions to treat, store, and dispose of hazardous wastes located at the DOE sites. Agreements differ by state. Some cover virtually all cleanup activities at a site, while others cover just a portion. 
These activities may include soil and groundwater remediation, low-level radioactive waste disposition, and special nuclear material consolidation; in this report, we focus on state agreement cleanup activities involving spent nuclear fuel and high-level waste. States and DOE can negotiate to amend or modify the agreements, including extending or eliminating milestones. State agreements may be created in at least four ways. First, states may enter into Federal Facilities Agreements (also known as Tri-Party Agreements) with DOE and EPA, which implement the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) and RCRA, as well as state hazardous waste law requirements, to set cleanup schedules at sites. CERCLA, among other things, authorizes EPA to compel responsible parties to initiate cleanup activities at hazardous waste sites. Second, states may take legal action against DOE seeking review of its compliance with the National Environmental Policy Act, which can result in settlement agreements between the parties that may outline activities and milestones. Third, Congress may address the management of wastes at a specific site. Finally, federal government officials may enter into agreements with states concerning DOE-managed radioactive waste, which may include specific cleanup milestones. The five states with DOE sites storing nuclear waste have agreements with DOE, and in one case with the Navy, regarding how nuclear waste will be managed. However, only the agreements with Colorado and Idaho would be affected by a termination of the Yucca Mountain repository because only those agreements specify dates for removing the waste from the DOE sites. Each DOE site falls under at least one state agreement that specifies certain treatment, storage, or disposal activities for high-level waste, spent nuclear fuel, or both. The agreements with four sites deal with the safe storage and treatment of high-level waste. (DOE's site in Colorado does not store any high-level waste; it stores only spent nuclear fuel.) In addition, state agreements for some DOE sites focus on the storage of spent nuclear fuel or its removal from the states. Major state agreements at each site are as follows: Idaho National Laboratory. DOE and the Navy are party to a 1995 settlement agreement and consent decree (the Idaho Settlement Agreement), entered in the United States District Court for the District of Idaho, to settle a lawsuit brought by the state. The agreement commits DOE to prepare its high-level waste for shipment out of Idaho for disposal. The agreement also contains provisions for managing spent nuclear fuel. Specifically, it requires DOE and the Navy to move their spent nuclear fuel from storage in pools of water to dry storage—given state concerns that the water pools might leak and radioactively contaminate the underlying groundwater—and later to move the spent nuclear fuel out of Idaho. Fort St. Vrain Site. In 1996, the Governor of Colorado signed an agreement with the Assistant Secretary for Environmental Management at DOE, referred to as the "Agreement Between the Department of Energy and the State of Colorado Regarding Shipping Spent Fuel Out of Colorado." The agreement states that DOE is committed to shipping its spent nuclear fuel stored at Fort St. Vrain out of Colorado. Hanford Site. 
The Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement) of 1989, as amended, entered into by DOE, EPA, and the state of Washington's Department of Ecology, focuses on completing DOE's closure of tanks that store liquid waste and solidifying its high-level waste for safer storage. The agreement also requires DOE to develop a disposition plan for cesium and strontium capsules, which are managed as high-level waste, if vitrification is not planned. Savannah River Site. The 1993 Federal Facility Agreement for the Savannah River Site and the Savannah River Site Treatment Plan of 1995 between DOE and the South Carolina Department of Health and Environmental Control focus on completing DOE's closure of tanks that store liquid waste and solidifying its high-level waste for safer storage. West Valley Site. The West Valley Demonstration Project Act, enacted in 1980, directs the Secretary of Energy to enter into a cooperative agreement with New York and to carry out a radioactive waste management demonstration project at the Western New York Service Center in West Valley, New York. The project includes solidifying high-level waste, developing waste containers suitable for permanent disposal, and transporting the solidified waste to an appropriate federal repository for permanent disposal. A termination of the Yucca Mountain repository may prevent DOE and the Navy from meeting agreements with Colorado and Idaho that establish milestones for shipping the spent nuclear fuel out of those states. As shown in table 1, the other agreements do not set dates for removing spent nuclear fuel from DOE sites, and no state agreement sets a date for removing high-level waste. Under the 1995 Idaho Settlement Agreement, DOE and the Navy are required to remove the spent nuclear fuel stored at Idaho National Laboratory from the state by January 1, 2035. In addition, DOE's head of EM signed an agreement to remove the spent nuclear fuel stored at the Fort St. Vrain site from Colorado by the same date. When the agreements were signed, DOE had intended to remove the spent nuclear fuel from these sites and ship it to the Yucca Mountain repository for final disposition. Similarly, the Navy had planned to transport its spent nuclear fuel from Idaho to the Yucca Mountain repository starting after 2020. If the Yucca Mountain repository is terminated, DOE and the Navy would lose their planned shipping destination for their spent nuclear fuel, which could cause them to miss the 2035 removal date to which they have committed. DOE and the Navy may be faced with significant penalties for missing these removal milestones. For example, under the Idaho Settlement Agreement, the federal government may be liable to pay the state $60,000 for each day past January 1, 2035, that DOE and the Navy have not removed their spent nuclear fuel from the state. Under the Colorado state agreement, DOE may be liable to pay the state $15,000 for each day after January 1, 2035, that DOE fails to remove its spent nuclear fuel. Together, these penalties would total approximately $27.4 million per year, although both state agreements stipulate that any possible future payments of these penalties will be subject to the availability of appropriations specifically for that purpose. Under the Idaho Settlement Agreement, the state may also have the ability to suspend any further DOE or Navy shipments of spent nuclear fuel to DOE's Idaho site until the agreement's obligation for removal of spent nuclear fuel is met. 
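The combined annual figure follows directly from the two per-day amounts, as this short check shows:

```python
# Reproducing the combined annual penalty figure cited above.
idaho_per_day = 60_000      # Idaho Settlement Agreement, per day after 1/1/2035
colorado_per_day = 15_000   # Colorado agreement, per day after 1/1/2035

annual_total = (idaho_per_day + colorado_per_day) * 365
print(f"${annual_total:,} per year")  # $27,375,000, about $27.4 million
```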
According to Navy officials, this would be of much greater concern than the financial penalties. After removing spent nuclear fuel from its warships as part of the refueling process, the Navy transports it to the Idaho site for examination and storage. No other sites are available for these critical activities. A Navy official told us that developing the infrastructure for these activities at a new site outside of Idaho would be time consuming and costly, and other states might oppose such a facility within their boundaries if there were no disposal pathway for the spent nuclear fuel. If Idaho were to suspend the Navy’s shipments of spent nuclear fuel, the Navy would not be able to refuel its nuclear warships, which Navy officials said would raise national security concerns. In addition, suspension might effectively prevent the Navy from continuing to examine its spent nuclear fuel at the Idaho site after 2035. If DOE determines that it will not be able to meet the removal milestones in the Idaho and Colorado agreements, it is unclear when the department would approach these states or whether either state would be amenable to renegotiating the agreement milestones. For example, Idaho officials said they still expect DOE and the Navy to meet the milestones. They stated that the 25 years remaining to remove spent nuclear fuel from Idaho may not be enough time to establish an alternative repository, but they noted that the Idaho Settlement Agreement does not require the spent nuclear fuel to be sent to the Yucca Mountain repository, only that it be removed from Idaho. These officials also said Idaho might seek remedies in court if it becomes evident that DOE is not positioned to meet a future milestone. According to DOE and Navy officials, a termination of the Yucca Mountain repository would not generally affect their nuclear waste operations in the near term. However, it would likely extend on-site storage of nuclear waste, which would lead to increased storage costs for the federal government. In addition, DOE officials said they will need additional information on storage facilities to plan storage beyond the time set forth in the current site plans. According to EM officials, a termination of the Yucca Mountain repository is not expected to affect site operations in the near term because current DOE operations are primarily focused on treating high-level wastes and moving spent nuclear fuel from wet to dry storage—activities that do not depend on having a repository available. Operations at the primary DOE sites we reviewed—Hanford, Idaho, and Savannah River—are currently focused on treating high-level radioactive liquid tank waste or moving spent nuclear fuel from wet to dry storage. These efforts are intended to immobilize high-level waste and provide safer storage on site until disposal at a repository. Savannah River is vitrifying the site’s high-level waste by combining it with glass-forming chemicals to make a glass that is poured into stainless steel canisters and sealed by welding; Hanford is building a $12.3 billion complex to do the same. Savannah River and Hanford officials said they intend to continue these operations through completion, regardless of the status of the Yucca Mountain repository, because of EM’s mission to mitigate environmental risk and because the officials are trying to meet milestones in their state agreements for removing high-level waste from tanks. 
Idaho National Laboratory has treated much of its high-level waste with a different process, called calcination, which turns the waste into a dry granular powder. In a 2009 record of decision, DOE decided to take additional steps to put the calcine waste into a monolithic form within canisters for permanent disposal, but according to EM officials, this work has not yet been started. Regarding spent nuclear fuel, Idaho is in the process of moving all of it from wet to dry storage, and Hanford has generally completed the process. According to EM officials, there are no plans at this time for the Savannah River Site to move spent nuclear fuel from wet to dry storage. Furthermore, at a 2010 hearing, the head of the Naval Nuclear Propulsion Program stated that termination of the Yucca Mountain repository would have no near-term effect on its operations at Idaho. The Navy intends to continue moving its spent nuclear fuel out of wet storage and placing it into canisters that are ready for transport when an alternative to the Yucca Mountain repository is available. In the meantime, the Navy will store the canisters at the Idaho site, as it anticipated doing while waiting for the Yucca Mountain repository to open. Some officials, such as those from the Washington State Department of Ecology, raised concerns that a termination of the Yucca Mountain repository could affect current operations if a replacement repository is selected with different requirements for accepting waste. Waste acceptance criteria govern aspects such as the waste canister’s shape, size, and radioactive content. According to EM officials, however, continuing operations in accordance with the treatment and packaging requirements established for the Yucca Mountain repository license application likely does not raise any significant issues. They said that EM, in coordination with NRC and EPA, strives to develop waste forms and package designs that will likely be accepted at any geologic repository, and they expect that any new repository would be designed to safely hold the high-level waste and spent nuclear fuel that has already been packaged. While the sites can generally continue with their operations and plans without the opening of a repository, a termination of the Yucca Mountain repository may change some plans related to disposal. For example, if a repository is not available, sites can delay building shipping facilities, which would need to be in place about 5 years before a repository is available. Without a Yucca Mountain repository, DOE will likely have to extend storage of nuclear wastes at DOE sites, which will increase its storage costs—although it is difficult to predict by how much. According to a 2009 Congressional Research Service report, halting the development of the Yucca Mountain repository would almost certainly require that nuclear waste remain at on-site storage facilities longer than currently planned. This is because a new repository to replace the Yucca Mountain repository would be unlikely to open by 2020. Similarly, senior EM officials told us they understand that high-level waste and spent nuclear fuel may remain at DOE sites for a “considerable” period of time. On-site storage can be safe and secure for long periods, according to a National Research Council report, but it would require a continuing commitment of resources for the storage to be continuously monitored, maintained, and periodically rebuilt. 
For our analysis, we used DOE's own estimate that the Yucca Mountain repository would open in 2020. This 2008 estimate was made before DOE took steps to terminate the Yucca Mountain repository program. While we recognize that the 2020 date was not certain, we know of no better assumption for meaningfully assessing the impact of a termination of the Yucca Mountain repository. In a written comment to us, DOE officials stated that it is incorrect to conclude there will be a delay in moving the nuclear materials or disposing of them using an alternative strategy compared to pursuing the Yucca Mountain program. Specifically, they stated it is speculation to say a new strategy will take longer to implement than continuing with the Yucca Mountain program because there is no guarantee of when, if ever, the many significant steps for opening the Yucca Mountain repository would have occurred. Since the comment provides only a hypothetical bounding possibility—the Yucca Mountain repository might have never opened, even without DOE's current steps to terminate it—rather than a new estimate for when the repository might have opened, we note the DOE officials' position but do not analyze it further. Longer storage would increase costs at DOE sites because it would require additional years of storage beyond current plans, which assumed shipments to the Yucca Mountain repository starting in 2020. These storage costs generally fall into three categories: Annual and recurring storage costs: Annual costs include costs for operations, maintenance, surveillance, and security for the storage facilities. Recurring costs are generally maintenance or repair costs that are not annual, such as the anticipated cost of replacing a storage building's roof every 25 to 30 years. Increased storage capacity: Beyond storage already available or planned, the Hanford Site, the Savannah River Site, and the Naval Reactors Facility at the Idaho site would have to build additional storage if their canister inventories cannot be reduced by sending canisters to the Yucca Mountain repository. This capacity can be expensive. For example, an EM analysis estimated that Hanford would need three additional storage facilities to accommodate all of its waste canisters. These facilities would be built as needed, at an estimated cost of $100 million (2010 dollars) each. Replacement of storage facilities and containers: Existing storage systems must be replaced once they exceed their useful lives. DOE has not yet determined the design of these replacement storage systems, and these costs could be incurred well into the future. For example, in a 2002 analysis, DOE assumed that the storage facilities would undergo complete replacement after the first 100 years and every 100 years thereafter. EM estimates that it could need an additional $918 million (2010 dollars) to extend storage if the opening of a permanent repository were delayed from 2020 to 2040. About two-thirds of these costs would fall into the category of annual and recurring storage costs. For example, costs for storing spent nuclear fuel at the Hanford Site were estimated at $6 million per year for an additional 20 years. The remaining one-third of the projected additional costs fall into the category of increased storage capacity beyond what would be needed if the Yucca Mountain repository had opened in 2020. 
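As a rough illustration of how an estimate spanning these categories could be assembled, the sketch below combines per-year and one-time costs over an assumed 20-year delay. Only the figures cited above (Hanford's $6 million per year for spent nuclear fuel storage and $100 million per additional Hanford facility) come from the report; the remaining inputs are placeholders, not EM's actual numbers.

```python
# Illustrative only: combining the storage-cost categories over an assumed
# 20-year slip in repository availability (2020 to 2040).
DELAY_YEARS = 20

annual_and_recurring = {                  # dollars per year
    "hanford_snf_storage": 6_000_000,     # cited in the text
    "other_sites_combined": 24_000_000,   # placeholder, not an EM input
}
increased_capacity = {                    # one-time construction, 2010 dollars
    "hanford_new_facilities": 3 * 100_000_000,  # three facilities, per EM's analysis
}
# Replacement costs are omitted here because, as noted below, EM assumed a
# 20-year delay would not require replacing existing buildings or containers.
total = sum(annual_and_recurring.values()) * DELAY_YEARS + sum(increased_capacity.values())
print(f"illustrative total: ${total / 1e6:,.0f} million")  # compare EM's $918 million
```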
EM's estimate did not include any costs in the category of replacing storage facilities and containers because it assumed a delay of 20 years would not necessitate the replacement of any existing storage buildings or containers. If storage were extended well into the future, however, some buildings would need to be replaced. For instance, Savannah River Site officials said the high-level waste canister storage buildings at the site have a design life of 50 years but are expected to have a usable life of 100 years if properly maintained. According to the officials, if storage needs to be extended beyond the storage buildings' usable life, these buildings would have to be replaced at an estimated cost of about $75 million each, the cost when the last one was built in 2005. DOE may also have to replace or reinforce waste containers. Specifically, spent nuclear fuel canisters might need to be either repackaged or left in the original canister but placed into a larger one, called a canister overpack. As for the high-level waste canisters, which are not amenable to repackaging (repackaging would involve removing the high-level waste glass from the original stainless steel canisters), Savannah River officials stated that they could likely be stored safely on site for a long time, perhaps 1,000 years, without the canisters breaching from corrosion. Problems could arise earlier for transport to a repository, however. After an estimated 200 years, DOE could face problems safely retrieving and moving the canisters from the on-site storage vault to the permanent repository because of potential corrosion at the neck of the canister. Savannah River officials explained that a transporter lifts each canister by its neck to move it in or out of storage in subsurface vaults, as illustrated in figure 2. If a corroded neck breaks when lifted, DOE would have difficulty retrieving the canister. Breaking the neck of the canister could also contaminate the vault, which would require cleanup. Because of these concerns, according to a site official and an EM expert, DOE might decide to overpack the high-level waste canisters, perhaps as early as after 100 to 150 years of storage. Moreover, if DOE did overpack the canisters, it would also need to design and construct new storage buildings because the new, larger overpacks would not fit into the storage positions in the existing buildings at Savannah River. It is difficult to accurately estimate these increased on-site storage costs because of three key factors. First, how long the wastes will remain on site cannot be projected with certainty because it is unclear when an alternative to the Yucca Mountain repository will be available. Reflecting the degree of uncertainty, presenters at a March 2010 EM conference on managing spent nuclear fuel considered a wide variety of possible storage periods, from 40 to 300 years. Second, the actual configuration and cost of any future storage systems are not yet known. This is because DOE has not devised a plan for long-term storage and because DOE has yet to make certain decisions that could change the type of future storage and its costs, according to EM officials. For example, because DOE has not decided whether to process spent nuclear fuel through the Savannah River Site's H-Canyon facility, it does not know the final configuration of the waste storage system or the cost of storing it. 
Third, because DOE does not know how long current storage systems can be used safely, it does not know the appropriate timing for replacing them, EM officials said. They emphasized that the useful lives of existing storage systems are uncertain and will only be discovered over time through continuous surveillance to identify degradation. EM officials told us that DOE can extend storage of spent nuclear fuel and high-level waste on DOE sites for some time but will need additional information on storage facilities to plan storage beyond the time set forth in the current site plans. These officials said the current plans generally assume that the nuclear waste will be shipped to a repository by about 2050, and the sites’ facilities are designed to last approximately until then. A major exception is that Idaho National Laboratory had planned to use its spent nuclear fuel storage facilities only through 2035, a date chosen because of the Idaho Settlement Agreement’s milestone. One option for extending on-site storage would be to extend the lives of existing storage facilities when they reach the end of their design lives. EM officials said they do not know how long a storage facility may last because long-term storage at sites is unprecedented. In addition, they said they know of no studies that verify the estimates of facilities’ useful lives beyond their design lives. It is also unclear how long the canisters or the spent nuclear fuel can be stored without degradation, which would interfere with safe retrieval and transport to another location. Such degradation could necessitate repackaging or overpacking to meet NRC transportation requirements before sending the canisters to a disposal site. Although EM officials told us EM has not yet planned for extending the lives of storage buildings, an official at Idaho National Laboratory told us that studies could be designed to provide confidence that storage buildings will last for an additional 20 or 30 years. Specifically, these longevity studies could identify components of the storage facility that are at risk for failure and repairs that could extend storage. For example, a longevity study may conclude that Idaho National Laboratory needs to shore up a particular wall in a storage area for spent nuclear fuel in order to assure that the area will last for another 30 years. Such information would be useful to EM in budgeting for the maintenance and repairs that are needed to extend the lives of existing facilities or for their replacement at the end of their useful lives. Similarly, to assess how to manage aging facilities for the long term, EM officials told us about some internal proposals for research and development on spent nuclear fuel storage, including ways to monitor wet and dry storage for degradation. However, it is uncertain how much information this intended effort will ultimately provide, since EM officials said that EM has not budgeted any funds for this work. A second option would be to build new storage facilities for very long-term storage—such as beyond 120 years—that may exceed the useful lives of existing facilities. However, to plan for very long-term storage, DOE may need to conduct research to get information about its sites’ unique storage needs. EM officials said EM currently has no research plan for very long- term storage for the wastes at DOE sites. 
An NRC official stated that NRC and other groups are planning to research the technical basis for the very long-term storage of commercial spent nuclear fuel beyond 120 years. However, it is unclear whether this research will address all of DOE's waste storage needs, since EM officials said DOE storage systems generally differ from those used for commercial waste. NRC is not evaluating DOE spent nuclear fuel because it generally does not have authority over DOE, according to an NRC official. According to NRC officials, NRC also is not yet looking at long-term storage of spent nuclear fuel in the two NRC-licensed storage facilities at DOE's Idaho and Colorado sites. Because this spent nuclear fuel also differs from commercial spent nuclear fuel, it will require a unique analysis that NRC is not likely to undertake soon, NRC officials said. More information would also be needed for DOE and the Navy to decide between these two options. New facilities might increase the cost-effectiveness of storage over the long term and be better designed to monitor deterioration and address security issues. However, DOE and the Navy cannot determine the resulting benefit without knowing the costs and time periods involved for each of the two options. For example, EM officials said DOE would not want to invest in costly new storage facilities that could last hundreds of years, only to discover that a shorter period of storage was needed. Furthermore, DOE may need more information about state and local support for the two options. Based on our discussions and review of documents, some states and communities may oppose any sign that DOE is planning long-term storage at the sites. As New York officials told us, for instance, the local community might react negatively to a new storage facility at the West Valley site because it would be a visible sign that the nuclear waste is not moving. On the other hand, some states and communities may favor building robust storage facilities to help ensure safety. EM and Navy officials told us they will not make any mitigation plans until those plans can be informed by the Blue Ribbon Commission's recommendations, which are expected by January 29, 2012. EM officials told us that it is too early for EM to jettison its current plans because of the uncertainties about possible alternatives to the Yucca Mountain repository. In addition, according to EM management, EM will not make any plans for extended storage before the Blue Ribbon Commission has made its recommendations because it does not want to preclude any strategies or options the Blue Ribbon Commission might recommend. For some years after the commission's recommendations are available, however, DOE and the Navy could experience difficulties planning how to mitigate the impact of a termination because uncertainties about the alternative to the Yucca Mountain repository may take time to resolve. Establishing an alternative site for a repository, for example, would likely require new legislation, according to officials at DOE's Office of General Counsel. This might reopen lengthy and contentious political debates over repository siting. It took almost 4 years of congressional effort to pass the Nuclear Waste Policy Act of 1982, followed by about 5 years of additional effort before Congress narrowed the evaluation of possible repository sites to Yucca Mountain. 
In addition, because it is not clear how specific the Blue Ribbon Commission's recommendations will be, it may take DOE additional work and time to use these recommendations to develop a new nuclear waste management policy. For example, it may take time to reassess whether to use the same procedures in siting a repository for DOE and Navy materials as for commercial spent nuclear fuel. According to a 1982 Office of Technology Assessment report, this issue was a major obstacle to passing nuclear waste legislation in 1979 and 1980. With a termination of the Yucca Mountain repository, both DOE and the Navy recognize they will need to devise alternative strategies to meet state commitments for removing spent nuclear fuel from Colorado and Idaho, and both are waiting for the Blue Ribbon Commission's recommendations before planning a strategy. Navy officials said they expect that the Blue Ribbon Commission's recommendations will define a potential alternate path for defense waste that will allow the Navy to comply with the Idaho Settlement Agreement and to continue operations at DOE's Idaho site. EM officials believe it is too early to talk with states about renegotiating agreements and told us that they plan to wait until alternative plans to the Yucca Mountain repository can be made. In any event, they stated, DOE intends to remain in compliance with milestones and requirements in agreements with the states of Colorado and Idaho. A termination of Yucca Mountain, however, may threaten DOE's and the Navy's ability to meet state commitments. Specifically, some alternatives that the Blue Ribbon Commission might consider may not provide a solution soon enough—in the less than 25 years remaining before the 2035 milestones—or may not be applicable to DOE's and the Navy's spent nuclear fuel. Although the commission has not indicated what it plans to recommend, it has heard testimony on alternatives that have previously been discussed and that might allow for removal of nuclear waste from DOE sites. One of these alternatives is to establish one or more new permanent repositories to replace the Yucca Mountain repository. However, establishing another repository may not allow enough time to meet the 2035 milestones unless the process is more expeditious for a new repository than it was for Yucca Mountain. For the Yucca Mountain repository, this process was projected in 2008 to last at least 37 years—from the beginning of the siting process in 1983 to the earliest possible start of operations, in 2020. The commission is also considering changes to the way nuclear waste is stored prior to final disposal. One alternative that DOE previously studied for commercial spent nuclear fuel is storing it at a centralized site. For our November 2009 report on alternatives to the Yucca Mountain repository, an expert in centralized storage estimated that opening a centralized facility could take between 17 and 33 years from site selection until the facility began accepting waste. A third alternative, which DOE has also previously considered, is for the United States to reprocess spent nuclear fuel to create new fuel for reactors. However, current reprocessing technology may not be cost-effective and, if not managed properly, creates proliferation concerns because the resulting materials could be used in a nuclear weapon. Transitioning the nuclear industry to new technologies to address these concerns could take 50 to 100 years, according to a 2010 report from the Massachusetts Institute of Technology. 
Even then, this solution might apply mainly to commercial spent nuclear fuel, rather than the fuel stored at DOE sites, because it may be impractical or uneconomical to reprocess the relatively small quantities and many different types of spent nuclear fuels stored at DOE sites, according to DOE documents and Navy officials. For decades, the United States has been struggling with the issue of what to do with the nuclear waste from weapons production and several other sources. With the possible termination of the Yucca Mountain repository, it may be about to restart this potentially time-consuming and contentious process. In the short term, this is unlikely to affect nuclear waste operations for DOE or the Navy. However, long-term storage costs at sites are likely to increase since DOE would need to store waste for longer periods prior to permanent disposal. Furthermore, as a result of the potential termination, DOE and the Navy may fail to meet commitments they have made with Colorado and Idaho to remove spent nuclear fuel by 2035. The fate of the Yucca Mountain repository is still uncertain, and DOE’s Blue Ribbon Commission may not provide recommendations on a new direction for nuclear waste management until January 2012. Given this situation, DOE and the Navy cannot yet easily plan or wisely invest in long-term storage since they will not know how long they will have to store waste at DOE sites. Nevertheless, it seems likely that some extension of on-site storage will be needed, and additional information about storage systems will be needed to even start planning for extended storage. For example, it is not known how long the lives of existing facilities can be extended or what will happen to the waste or the storage containers during long-term on-site storage. EM officials told us that EM currently has no plan for developing information on extending the lives of existing facilities, but longevity studies could identify components of the storage facilities that are at risk for failure and repairs that could extend storage. Moreover, although NRC and other groups are planning to research the long-term storage of commercial spent nuclear fuel, DOE does not have comparable research planned for somewhat different storage systems at its sites. Thus, without taking some preliminary steps to assess the information necessary to plan for long-term storage, DOE and the Navy will not have the understanding needed to proceed with such planning when the future direction becomes clearer. The alternative is to wait until there is further clarity about national and departmental policy, which may take years after the Blue Ribbon Commission provides recommendations. To help prepare for longer storage of nuclear waste at DOE sites, we recommend the Secretary of Energy direct the Assistant Secretary for Environmental Management, and other DOE officials as appropriate, to take the following two actions: Assess the condition of existing nuclear waste storage facilities and the resources and information needed to extend the facilities’ useful lifetimes. Identify any gap between past and ongoing research into long-term nuclear waste storage and any additional actions needed to address DOE’s unique waste storage needs. We provided DOE and the Navy with a draft of this report for their review and comment. The Navy chose not to provide formal comments. DOE provided written comments on March 11, 2011, which are summarized below and reproduced in appendix II. 
DOE stated that it agreed with our recommendations but disagreed with two aspects of our report: that (1) there would likely be delay and increased costs due to DOE's decision to terminate a repository at Yucca Mountain and (2) DOE may not meet its commitments to the states of Idaho and Colorado. After reviewing DOE's comments, we believe that our findings are adequately supported and that any assumptions upon which those findings are based are appropriately acknowledged. We are encouraged that DOE agrees that it needs better information on the condition of existing nuclear waste storage facilities as well as research on very long-term storage to meet its unique needs. DOE recognizes that the waste may remain on its sites for a considerable period of time. This will likely require DOE to revise the target date in its current plans, which assume that a repository will be available in 2020. DOE disagreed with parts of the draft report that stated there would likely be a delay in removing waste from DOE sites, and increased costs, as a result of DOE's decision to terminate the proposed repository at Yucca Mountain. DOE stated that there was no "certain" date for opening the Yucca Mountain repository and that any opening was subject to contingencies beyond DOE's control. DOE characterized our finding of a likely delay as speculation. DOE also stated that the Blue Ribbon Commission could propose options that would lead to more rapid disposal of waste than the Yucca Mountain approach. We believe that using 2020 as an opening date for the Yucca Mountain repository was a reasonable assumption for analyzing the effects of a possible termination of the program. In 2008, DOE itself established this target date for opening the planned Yucca Mountain repository, before it took steps to terminate the program. DOE did not provide an alternative target, or any basis for one, in its comments, which would be necessary for conducting a meaningful analysis. We agree that the opening date for the Yucca Mountain repository was uncertain, and we have therefore made clear in the report that our analysis is based on DOE's own assumption of a 2020 opening. Regarding DOE's assertion that the Blue Ribbon Commission could propose options for more rapid disposal, this also provides no new basis for analysis. It is unclear how specific the commission's recommendations will be, whether DOE will choose to implement them, or how quickly they could be implemented. Key alternatives to Yucca Mountain that we reviewed—centralized storage, reprocessing, and a new repository—could take decades to implement. Therefore, the Yucca Mountain repository could have opened many years after 2020 and still possibly have been available sooner than these alternatives. Such uncertainties about both the availability of the Yucca Mountain repository and any alternative led us to report a "likely" lengthening of the duration of on-site storage. DOE's comments provide no basis for revising our finding. Second, DOE objected to the suggestion that DOE may not meet its commitments to the states of Idaho and Colorado. DOE stated in its comments that it intends to meet its commitments to remove spent nuclear fuel from those states by 2035 and that there is no factual basis to support a finding that the commitments will not be met. However, we disagree with DOE's representation of our findings and supporting facts. Although our report does conclude that DOE may not meet its commitments, it does not state that DOE "will not" meet them. 
Instead, we highlight some challenges to meeting these commitments if the Yucca Mountain repository program were terminated. Without the Yucca Mountain repository, DOE currently has no planned shipping destination for its spent nuclear fuel, and it is not clear when a new destination will be available. We also reported that some alternatives that the Blue Ribbon Commission might consider may not provide a solution soon enough—in the less than 25 years remaining before the 2035 milestones—or may not be applicable to DOE’s spent nuclear fuel. We are unable to say more because, as we reported, DOE has yet to announce a new plan for meeting its commitments. Its likelihood of meeting them will be clearer after DOE specifies how it plans to establish a new destination and ship its spent nuclear fuel there by 2035. DOE and the Navy also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Energy and Defense, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The following is GAO’s comment on the Department of Energy's letter dated March 11, 2011. We acknowledged in our recommendations that they may need to be directed to other DOE officials, as appropriate. In addition to the contact named above, the following staff members made key contributions to this report: Janet Frisch, Assistant Director; Arkelga Braxton; Kevin Bray; Penney Harwell-Caramia; Scott Fletcher; Eugene Gray; Terry Hanford; Jonathan Kelly; Anne Rhodes-Kline; Mehrzad Nadji; Ben Shouse; and Vasiliki Theodoropoulos.
The Department of Energy's (DOE) Office of Environmental Management (EM) is responsible for storing and managing a total of about 13,000 metric tons of nuclear waste--spent nuclear fuel and high-level waste--at five DOE sites in Colorado, Idaho, New York, South Carolina, and Washington. Also, a joint DOE-Navy program stores spent nuclear fuel from warships at DOE's Idaho site. DOE and the Navy intended to permanently dispose of this nuclear waste at a repository planned for Yucca Mountain in Nevada. However, that plan is now in question because of actions taken to terminate the site. This report assesses (1) agreements DOE and the Navy have with states at the five sites and the effects a termination of the Yucca Mountain repository would have on their ability to fulfill these agreements; (2) the effects a termination would have on DOE's and the Navy's operations and costs for storing the waste; and (3) DOE's and the Navy's plans to mitigate these potential effects. GAO reviewed state agreements and DOE plans, visited waste facilities, and interviewed federal and state officials. Five states have agreements with DOE, and in one case with the Navy, regarding the storage, treatment, or disposal of nuclear waste stored at DOE sites. Only agreements with Colorado and Idaho include deadlines, or milestones, for removing waste from sites that may be threatened by a termination of the Yucca Mountain repository program. Under the agreements, DOE and the Navy are expected to remove their spent nuclear fuel from Idaho, and DOE is to remove its fuel from Colorado, by January 1, 2035. If a repository is not available to accept the waste, however, DOE and the Navy could miss these milestones. As a result, the government could face significant penalties--$60,000 for each day the waste remains in Idaho and $15,000 for each day the waste remains in Colorado--after January 1, 2035. These penalties could total about $27.4 million annually. Navy officials told GAO, however, their greater concern is that Idaho might suspend Navy shipments of spent nuclear fuel to the state until the Navy meets its agreement to remove spent nuclear fuel, a suspension that would interfere with the Navy's ability to refuel its nuclear warships. Terminating the Yucca Mountain repository would not affect DOE's or the Navy's nuclear waste operations on DOE sites in the near term, according to DOE and Navy officials. But it would likely extend on-site storage and increase storage costs, which could be substantial. For example, an EM analysis estimates that EM could need an additional $918 million to extend storage, assuming a 20-year delay in a repository's opening. Since it is not known when an alternative to Yucca Mountain will be available, it is difficult to estimate the total additional storage costs stemming from terminating the repository. Although EM officials told GAO that DOE can extend storage of nuclear waste on DOE sites for some time, additional information is needed to plan for longer storage. For instance, DOE does not know how long the lives of existing storage facilities can be extended beyond estimates in current site plans. In addition, although research is being planned for long-term storage of commercial spent nuclear fuel beyond 120 years, DOE has no plan for comparable research focusing on its unique long-term waste storage needs. DOE and the Navy have not yet developed plans to mitigate the potential effects of longer storage resulting from a termination of the Yucca Mountain repository. 
EM and Navy officials said they are waiting for recommendations from a Blue Ribbon Commission that DOE created in 2010 to clarify future nuclear waste management alternatives. Even after the commission's recommendations are available, however, DOE could face difficulties in planning how to mitigate the impact of a termination of the repository. For example, because it is not clear how specific the commission's recommendations will be, it may take time to develop the recommendations into a new nuclear waste management policy. Further, some recommendations may not lead to a solution soon enough to meet existing waste removal milestones. DOE and the Navy said it was too early to change existing plans since no final disposition path for the waste has been determined. GAO recommends that DOE (1) assess existing nuclear waste storage facilities and the resources and information needed to extend their useful lifetimes and (2) identify any additional research needed to address DOE's unique needs for long-term waste storage. DOE agreed with the recommendations, but objected to some of GAO's findings, which GAO continues to believe are sound.
The federal government has historically established minimum eligibility requirements for Medicaid and CHIP and provided states with considerable flexibility in expanding eligibility to individuals in households with higher incomes. PPACA made numerous changes to existing federal Medicaid and CHIP eligibility requirements and specified eligibility criteria for new types of assistance, such as the premium tax credit. PPACA also provided for a continued focus on certain CHIPRA initiatives; specified additional policies to facilitate eligible children’s enrollment in Medicaid, CHIP, and the premium tax credit; and included provisions to facilitate children’s access to private health insurance. Federal and state implementation of PPACA enrollment and eligibility provisions is under way. Eligibility for Medicaid and CHIP is limited to U.S. citizens and certain legally residing immigrants and is generally based on household income in relation to the FPL. For Medicaid, the federal government requires that states cover children with household incomes at or below specific eligibility levels, which range from 100 through 133 percent of FPL depending on the age of the child. States have flexibility to increase eligibility levels beyond the federally required levels for children of specific ages. For example, several states have Medicaid eligibility levels of 185 percent of FPL for infants, and a more limited number of states also have eligibility levels higher than the federal requirement for children older than age 1. Because Medicaid eligibility levels vary by children’s age, some members of a given family may qualify for Medicaid, while others do not. With CHIP programs, states cover children whose household incomes are too high for Medicaid eligibility; most states’ CHIP eligibility levels are between 200 and 300 percent of FPL. States use different methods for counting household income; for example, some states disregard portions of certain types of income, such as earned income, and states have varying standards regarding which household members to include when determining family size. With regard to changes to children’s eligibility for Medicaid and CHIP and eligibility specifications for the new premium tax credit, which are to be fully effective in 2014, PPACA included the following provisions. PPACA expanded Medicaid eligibility to children and adults under age 65 with household incomes at or below 133 percent of FPL. As a result, minimum eligibility levels for Medicaid will generally be the same for all family members. Some children with household incomes higher than 133 percent of FPL will continue to be eligible for Medicaid in states that have established higher eligibility levels for children. These states are not allowed to lower their Medicaid eligibility levels for children until fiscal year 2020. PPACA required a uniform method of counting household income, based on a household’s modified adjusted gross income (MAGI) to determine eligibility for Medicaid, CHIP, and the premium tax credit. As a result, household income for Medicaid and CHIP, as well as for the premium tax credit, will be determined consistently in all states. PPACA defined eligibility criteria for the new premium tax credit, which will apply in all states. Similar to Medicaid and CHIP, eligibility for the premium tax credit will be limited to U.S. citizens and legally residing immigrants. Eligibility will also be limited to individuals with household incomes between 100 and 400 percent of FPL. 
In addition, to be eligible for the premium tax credit, an individual cannot have access to public insurance such as Medicaid or CHIP or to affordable employer-sponsored health insurance that provides a minimum value. A child’s eligibility for Medicaid, CHIP, and the premium tax credit can change over time under PPACA as his or her household income fluctuates. For example, a child who begins the year eligible for the premium tax credit may become eligible for Medicaid or CHIP if household income declines during the year. Conversely, depending on the state, a child who begins the year eligible for Medicaid or CHIP may lose eligibility for these programs if household income increases. PPACA also contained provisions to facilitate eligible children’s enrollment in Medicaid, CHIP, and private health insurance subsidized by premium tax credits. For example, PPACA extended funding for CHIPRA outreach and enrollment grants through fiscal year 2015, prohibited states from requiring in-person interviews for enrollment beginning in 2014, provided for income to be verified through a federally managed hub of data electronically accessible to states, and specified a coordinated enrollment process, whereby with one federally defined uniform application, states will assess families for eligibility for Medicaid, CHIP, or the premium tax credit. PPACA also made funding available to states to plan and implement exchanges, which will provide eligible individuals and families—including those eligible for premium tax credits—the ability to compare, select, and enroll in participating private health insurance plans with standardized benefit and cost-sharing packages. Under PPACA, exchanges must be established in every state by January 1, 2014, either by the state itself or by the Secretary of HHS. Although not the focus of this report, PPACA also contained provisions to facilitate children’s access to private health insurance, apart from the provision of the premium tax credit. For example, as of September 2010, PPACA prohibited health plans and issuers from limiting or denying coverage for children under age 19 because of preexisting health conditions. (See table 1.) Implementing PPACA’s changes to Medicaid and CHIP eligibility determination and enrollment policies and preparing for implementation of the premium tax credit and other provisions of PPACA will require significant state and federal efforts. In August 2011, CMS and IRS separately issued three proposed rules to implement key PPACA provisions related to eligibility and enrollment for Medicaid, CHIP, and the premium tax credit; the CMS rules were finalized in March 2012. According to CMS, more detailed guidance, such as the specific information to be collected in the uniform application or the nature of the data available from the federal hub, will be distributed at a later date. The IRS proposed rule specified how to calculate household MAGI for determining premium tax credit eligibility, and the CMS rules adopted these methods for determining Medicaid and CHIP eligibility, with certain exceptions. IRS finalized its proposed rule in May 2012 with minimal change to these methods. The IRS proposed rule also described the standard for determining whether an individual has access to affordable employer-sponsored insurance for purposes of determining eligibility for the premium tax credit. 
Under the proposed affordability standard, employer-sponsored insurance is considered affordable if the cost of a self-only plan—meaning a plan that only covers the employee—does not exceed 9.5 percent of household income. Under the proposed standard, if one family member has access to affordable self-only employer-sponsored insurance, all other family members who are eligible to enroll in the employee’s plan are also considered to have access to affordable insurance and are therefore ineligible for the premium tax credit. In this manner, the proposed rule applied the same standard to all family members eligible for the employee’s plan, even if the cost of enrolling the family as a whole exceeds the 9.5 percent threshold. In the preamble to its proposed rule, IRS stated that the PPACA statute specifies using the self-only insurance affordability standard for employees as well as for spouses and dependents of an employee, citing a report issued by the Joint Committee on Taxation that similarly interpreted the law. Some who commented on the proposed rule suggested that it would be more consistent with congressional intent to interpret the statute to require the use of the cost to an employee of insuring all eligible family members in determining access to affordable employer-sponsored insurance. In its final premium tax credit rule, IRS confirmed that the proposed self-only insurance affordability standard would apply to employees, but it deferred a decision on the affordability standard for other eligible family members, such as children, to future rule making. Therefore, because this report focuses on children, the relevant affordability standard remains a proposed standard, and is referred to as such for the remainder of the report. Over three-quarters of uninsured children in January 2009 would be eligible for Medicaid, CHIP, or the premium tax credit under 2014 PPACA eligibility rules, according to our estimates. Applying final CMS and proposed IRS rules for 2014 program eligibility to 2009 SIPP data, we estimate that on the basis of household income and other eligibility criteria, such as citizenship, nearly 68 percent of the approximately 7 million children who were uninsured in January 2009 would be eligible for Medicaid or CHIP—about 48 percent for Medicaid and about 20 percent for CHIP. In addition, 7.5 percent of the uninsured children would be eligible for the premium tax credit. Nearly 13 percent of the uninsured children were noncitizens for whom we did not estimate eligibility because of limitations in the data. We estimate that approximately 12 percent of uninsured children would be ineligible for Medicaid, CHIP, or the premium tax credit. Specifically, 5.5 percent would be ineligible because they were in families with a household income that was too high—at greater than 400 percent of FPL. The remaining 6.6 percent would be ineligible because, though their families were considered low-income in that they met the household income requirements for the premium tax credit, they were considered to have access to affordable employer-sponsored insurance based on IRS’s proposed affordability standard. In particular, these children had at least one parent with employer-sponsored insurance that had an estimated cost below 9.5 percent of household income for a self-only plan. (See fig.
1.) These children would not be automatically eligible for the premium tax credit if the affordability standard were instead based on a family plan; their eligibility would depend on the cost of the family plan to which they had access. See appendix I for more information about our estimates. Under certain scenarios, the proposed affordability standard could affect significantly more children than the approximately 460,000 uninsured children we estimated above. Many children eligible for CHIP have a parent with employer-sponsored insurance. Under PPACA, CHIP is not funded beyond 2015, and, even if federal funding is extended, states may opt to reduce eligibility levels for CHIP or eliminate CHIP programs altogether beginning in fiscal year 2020. Without CHIP-funded Medicaid expansion or separate CHIP programs, we estimate that an additional 1.9 million children who would otherwise be eligible for CHIP would be considered to have access to affordable insurance under this proposed standard and would be ineligible for the premium tax credit. (See fig. 2.) In commenting on IRS’s proposed rule on eligibility for the premium tax credit, some states and other organizations noted that IRS’s proposed interpretation of access to affordable employer-sponsored insurance—defining affordability on the basis of the cost of a self-only plan, and not on the cost of a family plan—could result in some children remaining uninsured. They explained that although a self-only plan for the employee may cost less than the 9.5 percent threshold, a family plan that would also insure the employee’s eligible family members could exceed it. As a result, some employees would not be able to afford the higher premiums to insure their family members, who therefore could remain uninsured. We did not estimate the cost associated with defining the affordability standard based on the cost of a family plan. The cost of such a change would depend on multiple factors, many of which remain uncertain, such as the availability of CHIP funding beyond 2015, the extent to which eligible families avail themselves of the premium tax credit, employer decisions, and the extent to which additional enrollees could affect the aggregate cost of premiums. The Congressional Budget Office has commented on the high degree of uncertainty inherent in projecting the future actions of employees and employers under PPACA as well as other factors that may affect federal costs, such as the number of individuals and families who will have household income in specific eligibility ranges in future years. We did not examine how many of the children estimated to be ineligible for the premium tax credit because of access to affordable employer-sponsored insurance would become eligible if the affordability standard were instead based on the cost of a family plan; the cost of family plans available to employees who chose not to purchase them was not available in the data we analyzed. However, separate data on the cost of family plans among employees who purchased a family plan suggest that some of these uninsured children, particularly those in families facing higher-than-average premium contributions, could become eligible for the premium tax credit if the affordability standard were based on the cost of a family plan.
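To make the two readings of the standard concrete, the following is a minimal sketch in Python; the function names, the structure, and the example figures are ours for illustration and are not drawn from the IRS rules.

```python
AFFORDABILITY_THRESHOLD = 0.095  # the 9.5 percent threshold in the proposed rule

def affordable_self_only(self_only_premium: float, household_income: float) -> bool:
    """Proposed standard: affordability for the employee and for every family
    member eligible to enroll is judged on the employee-only premium."""
    return self_only_premium <= AFFORDABILITY_THRESHOLD * household_income

def affordable_family(family_premium: float, household_income: float) -> bool:
    """Alternative urged by commenters: affordability for family members is
    judged on the cost of covering the whole family."""
    return family_premium <= AFFORDABILITY_THRESHOLD * household_income

# Hypothetical family: $55,000 income, $900 self-only premium, $7,700 family premium.
income, self_only, family = 55_000.0, 900.0, 7_700.0
print(affordable_self_only(self_only, income))  # True: children ineligible for the credit
print(affordable_family(family, income))        # False: children could qualify instead
```

Under the proposed rule, only the first test applies to family members eligible to enroll in the employee’s plan, which is why a family can be deemed to have affordable coverage even when the family premium far exceeds the threshold.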
For example, in a 2011 survey, the Kaiser Family Foundation and Health Research & Education Trust found that on average, employees contributed $4,129 annually for a family plan, or 28 percent of the total cost to the employer of an annual family premium, which averaged $15,073. For a family of four with household income equivalent to 250 percent of the FPL, $4,129 represents about 7 percent of household income. However, the percentage of the annual premium paid by employees ranged widely around this average, and 15 percent of employees with family plans paid more than 50 percent of the annual premium. For a family of four with household income equivalent to 250 percent of the FPL, paying 51 percent of the average annual premium (or $7,687) would represent just over 13 percent of household income, exceeding the 9.5 percent threshold. Whether families ultimately choose to purchase insurance for children will depend on many factors, including individual decisions regarding what they can afford for health insurance. Applying final CMS and proposed IRS 2014 PPACA eligibility rules to children in 2009, we estimate that nationally, 9 percent of children eligible for Medicaid, CHIP, or the premium tax credit experienced a change in household income within 6 months that would affect their eligibility for a specific form of assistance, and 14 percent of these children experienced at least one such change within 1 year. (See table 2.) In addition, some children experienced multiple income changes within these time periods that would affect their eligibility for assistance more frequently. We estimate that, nationally, 2 percent of eligible children experienced changes in household income that would affect eligibility two or more times within 6 months, and 6 percent experienced two or more such changes within 1 year. The effect of continuous eligibility policies for Medicaid and CHIP on the frequency of eligibility changes becomes apparent when we consider children in states with such policies and in states without them separately. Eligibility changes are higher than the national average in states without continuous eligibility policies in either their Medicaid or CHIP programs, and lower than the national average in states with them. In states with continuous eligibility policies for Medicaid and CHIP, eligibility changes under PPACA would be limited to children who begin the year eligible for the premium tax credit but experience a decrease in household income that would result in eligibility for Medicaid or CHIP instead. Therefore, the percentage of children experiencing changes in eligibility, and at risk of experiencing disruptions in coverage, is lower than the national average among children in the 23 states that have adopted continuous eligibility in both their Medicaid and CHIP programs and greater than the national average among children in the 18 states that do not have continuous eligibility in either program. Specifically, we estimate that about 3 percent of eligible children in states with continuous eligibility for Medicaid and CHIP experienced a change in household income that would affect eligibility under PPACA within 1 year. In contrast, we estimate that about 19 percent of eligible children in states without continuous eligibility experienced a change in household income that would affect program eligibility under PPACA at least once within 6 months, and about 30 percent experienced such a change within 1 year. (See table 3.)
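As a rough check of the Kaiser survey arithmetic above, assume a four-person poverty threshold of roughly $23,000 for 2011 (our illustrative assumption), so that 250 percent of FPL is about $57,500:

$\frac{\$4{,}129}{\$57{,}500} \approx 7.2\%, \qquad 0.51 \times \$15{,}073 \approx \$7{,}687, \qquad \frac{\$7{,}687}{\$57{,}500} \approx 13.4\%,$

consistent with the approximately 7 percent and just-over-13-percent figures cited, with the latter well above the 9.5 percent threshold.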
Changes in eligibility caused by income fluctuations could deter children’s enrollment in relevant programs if the process for changing enrollment is burdensome for families, and could compound other eligibility complexities, such as variation in eligibility within households. Eligibility for specific types of assistance can vary within households because low- to moderate-income adults with household incomes greater than 133 percent of FPL will typically be ineligible for any assistance or will be eligible for the premium tax credit rather than Medicaid or CHIP, while children in some of these households—particularly in states with higher income eligibility levels for Medicaid and CHIP—will be eligible instead for Medicaid or CHIP. We estimate that based on 2009 data, 21 percent of children eligible for Medicaid, CHIP, or the premium tax credit under PPACA would have different eligibility from their parents as of the beginning of the year. However, because of income fluctuations that occurred over the course of the year, we estimate that an additional 9 percent of eligible children would encounter this situation. CMS has provided states with incentives and guidance to implement current initiatives to improve enrollment policies and has made progress assisting states in implementing PPACA requirements aimed at further simplifying Medicaid and CHIP enrollment. State officials reported ongoing challenges with regard to enrolling eligible children, including the need for timely guidance to implement PPACA provisions, concerns about enrolling family members who are not eligible for the same program, and state budget constraints. Through an array of financial incentives and technical assistance, CMS has worked with states to enroll and retain eligible children in Medicaid and CHIP and to set up state exchanges under PPACA. Many of these efforts were initiated with funds appropriated under CHIPRA and continue under PPACA. For example, CHIPRA appropriated $100 million for fiscal years 2009 through 2013 in outreach grants and related efforts to improve the enrollment and retention of underserved populations in Medicaid and CHIP, and by the end of fiscal year 2011, CMS had awarded $80 million in such grants. CMS awarded the first round of outreach grants in fiscal year 2009 to 69 applicants in 43 states, which included state agencies and community-based and other nonprofit groups, and the second round in fiscal year 2011 to 39 applicants in 23 states. In interviews with officials from selected states, the officials noted that the outreach grants had helped their agencies reach eligible children. For example, Oregon Medicaid officials said that the CHIPRA outreach grant their agency received was crucial to reaching the state’s Hispanic population. The grant sought to support outreach by safety net providers, public health departments, and school-based health centers. Since fiscal year 2009, CMS has also awarded performance bonuses annually to states that implemented at least five of the eight enrollment initiatives outlined in CHIPRA and met specific enrollment goals, which are based on the state’s current Medicaid enrollment and population growth. (See table 4.) The number of states receiving these bonuses has more than doubled over the 3 years that bonuses have been awarded, increasing from 10 states in fiscal year 2009 to 23 states in fiscal year 2011. In 2011, the amount of the performance bonuses ranged from approximately $1.3 million for Idaho to over $28 million for Maryland. (See app.
II for a summary of the states that received these performance bonuses and the amounts of the awards.) In addition, among the 23 states that received a performance bonus in 2011, 16 received an enhanced bonus for exceeding their enrollment target by more than 10 percent. CMS will continue to award these performance bonuses annually through fiscal year 2013. A key goal of PPACA was to increase Americans’ access to affordable health insurance. PPACA expanded eligibility for existing federal health programs and private health insurance, offered a new premium tax credit to offset the cost of private health insurance for some low- to moderate-income families whose incomes are too high to qualify for Medicaid or CHIP, and provided means for streamlining enrollment. Although our estimates are based on 2009 data, they illustrate the potential impact of PPACA, when fully implemented in 2014, on children’s access to affordable health insurance, and highlight the importance of many of the policies introduced in CHIPRA and continued in PPACA. For example, our estimates suggest that about 68 percent of children who were uninsured in 2009 would be eligible for Medicaid or CHIP under PPACA, underscoring the continued importance of outreach and simplified enrollment policies to ensure that eligible children are enrolled in the appropriate program. Similarly, significantly higher estimates of changes in eligibility within a year among children in states without continuous eligibility policies compared to states with such policies underscore the importance of a continued emphasis on such policies to minimize changes in eligibility. In addition, a small but significant number of uninsured children from low- to moderate-income families whose incomes are too high to qualify for Medicaid or CHIP would be ineligible for the premium tax credit under IRS’s proposed definition of access to affordable employer-sponsored insurance, which is based on the cost of a self-only plan available to the employee. Yet the cost of insuring other eligible family members could be higher and potentially unaffordable for some families. One implication of this proposal is that some families in which one member has an offer of self-only, employer-sponsored health insurance could be less likely to obtain family insurance than if no employer insurance were offered, because of their ineligibility for the premium tax credit. We recognize that in finalizing the affordability standard for an employee’s eligible family members, IRS must weigh many complex factors, such as costs to the federal government and effects on employers and families, some of which are difficult to predict, as well as the scope of its authority. However, under the proposed standard, an offer of affordable employer-sponsored health insurance to one family member could impede other family members’ access to affordable insurance—an outcome that would not further the broader goals of PPACA. In the Department of the Treasury’s future rule making, we recommend that the Secretary of the Treasury, in consultation with the Commissioner of Internal Revenue, consider the impact of the proposed standard for determining affordability of employer-sponsored insurance on children and other family members who are eligible to enroll, and whether it would be consistent with the goals of PPACA to adopt an alternative approach that would consider the cost of insuring eligible family members, or as necessary, seek clarification from Congress regarding its intent with respect to this standard.
We provided a draft of this report for comment to HHS and the Department of the Treasury. Neither HHS nor the Department of the Treasury provided general comments on the report or its recommendation. Department of the Treasury officials provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees, the Secretary of Health and Human Services, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our first two objectives were to assess the extent to which uninsured children would be eligible for Medicaid, the State Children’s Health Insurance Program (CHIP), or the premium tax credit available under the Patient Protection and Affordable Care Act (PPACA) and the extent to which they would experience a change in eligibility among these forms of assistance because of changes in household income during a year. We identified the Survey of Income and Program Participation (SIPP), a nationally representative, longitudinal survey conducted by the U.S. Census Bureau, as a useful data set for our purposes because it provides detailed monthly information over a multiyear period about specific types of income, family relationships, and health insurance status of individuals and households representing the civilian, noninstitutionalized population of the United States. We analyzed data from the most recently available SIPP, which began in 2008 and surveyed the occupants of approximately 42,000 households. Our analysis of SIPP data is subject to limitations. The analysis uses 2009 data to illustrate the extent to which uninsured children would be eligible for Medicaid, CHIP, or the premium tax credit had proposed and final 2014 PPACA eligibility rules been in effect at that time. To the extent that patterns in household income, insurance status, or other eligibility criteria differ in 2014, eligibility in 2014 will also differ. In addition, the estimates are based on a sample of the population and may differ from estimates that would be obtained if the full population had been surveyed using the same methods, and the estimates are based on self-reported information that may contain errors because of factors such as differing interpretation of survey questions, inability or unwillingness of survey participants to provide correct information, or data processing errors.
The Census Bureau reported that quality control and edit procedures were used to reduce such errors. Studies of SIPP income data have shown that the SIPP captures less income compared to other surveys, particularly for higher-income survey participants. Although our analysis focuses on lower-income survey participants, to the extent that the SIPP data underrepresent income in this population as well, our estimates would indicate that more children meet income requirements for Medicaid or CHIP versus the premium tax credit, and for the premium tax credit versus being ineligible for any type of assistance, than other surveys might suggest. Our analysis variables approximate but do not always fully capture key PPACA eligibility criteria, such as household income or citizenship status, as described further below. To determine the reliability of SIPP data, we reviewed related documentation and conducted electronic testing for missing data, outliers, and apparent errors. For example, we tested whether persons who reported being uninsured in January 2009 had reported having health insurance in the prior and following month. We also compared our results to estimates based on data from another Census Bureau survey, the American Community Survey, and to other studies that addressed related research questions. We determined that the SIPP data were sufficiently reliable for the purposes of our engagement. A child’s eligibility for Medicaid, CHIP, or the premium tax credit under PPACA is based in part on having household income below specified limits relative to the federal poverty level (FPL). The Centers for Medicare & Medicaid Services (CMS) and the Internal Revenue Service (IRS) have specified methods for determining a child’s household income under PPACA in final and proposed eligibility rules, and differences exist in how household income is determined for Medicaid and CHIP versus the premium tax credit. A child’s eligibility for these programs is also based on citizenship or legal residence, and, for the premium tax credit, on whether the child is considered to have access to other affordable insurance. From the available SIPP data, we developed variables for our analysis based on these eligibility rules. Household composition. We created two household composition variables for children on the basis of final CMS and proposed IRS rules for determining household composition for the premium tax credit and for Medicaid or CHIP. The premium tax credit household composition variable defined households as composed of a taxpayer and spouse, if applicable, and tax dependents. Tax dependents were defined as follows: Children under age 19 (or ages 19 through 23 who were full-time students) whose taxable income (together with a spouse’s income, if applicable) was not more than half of household income. Other family members who (together with a spouse, if applicable) earned less than the IRS threshold and whose total income was not more than half of household income. Taxpayers were those who did not meet the above definition of a tax dependent. This definition of a tax dependent excluded those with significant income, but did not capture tax rules about the amount of financial support a taxpayer must provide for children or other dependents in order to claim them as tax dependents. For example, most children under age 19 were defined as tax dependents.
Households of tax-dependent children included the child, the child’s taxpayer parents or guardians, and any other tax dependents of the taxpayers, such as the child’s siblings. When children lived with two unmarried parents, the parent with the higher income was designated as the taxpayer parent. This household composition variable did not account for children who may be claimed as tax dependents by a noncustodial parent or for spouses who choose to file taxes separately. The Medicaid household composition variable was the same as the premium tax credit household composition variable, with certain exceptions. When a tax-dependent child did not live with a taxpayer parent, lived with two parents who were not married to one another, or had household income below tax filing thresholds, the child’s household for purposes of determining Medicaid and CHIP eligibility was composed of the child, the child’s parents, siblings under age 19 (or ages 19 and 20 who were full-time students), and any children of the child. In addition, pregnant women were counted as two household members when determining Medicaid and CHIP eligibility. Pregnancy status is not directly available from the SIPP data; we estimated that women were pregnant in a given month if they had a new infant during one of the following 8 months. Household income. We constructed four household income variables for children based on rules for counting income for Medicaid and CHIP and for the premium tax credit under PPACA. A tax dependent’s income was not included in any household income variable if it was less than the amount that would necessitate filing a tax return. To approximate modified adjusted gross income (MAGI) household income under PPACA, a child’s premium tax credit household income was defined as the sum of all income (less means-tested assistance income; child support or foster care payments; veterans and workers compensation or sickness or accident insurance payments; and gifts from relatives or friends) self-reported by individuals included in the premium tax credit household composition variable defined above, during calendar year 2009. A child’s baseline Medicaid household income was the sum of the same income types in the Medicaid household composition variable defined above, during specific months of 2009. A child’s adjusted Medicaid household income was equal to the baseline Medicaid household income variable, less income deductions applied in specific states in their Medicaid eligibility determination processes, including deductions of certain amounts of earned income and child care expenses. A child’s adjusted CHIP household income was equal to the baseline Medicaid household income variable, less income deductions applied in specific states in their CHIP eligibility determination processes, including deductions of certain amounts of earned income and child care expenses. FPL. We constructed four percentage of FPL variables based on the four household income variables and two household composition variables defined above. A child’s baseline Medicaid percentage of FPL was the baseline Medicaid household income variable divided by the 2009 poverty threshold applicable to the child’s state and family size contained in the Medicaid household composition variable. A child’s adjusted Medicaid percentage of FPL was the adjusted Medicaid household income variable divided by the 2009 poverty threshold applicable to the child’s state and family size contained in the Medicaid household composition variable.
A child’s adjusted CHIP percentage of FPL was the adjusted CHIP household income variable divided by the 2009 poverty threshold applicable to the child’s state and family size contained in the Medicaid household composition variable. A child’s premium tax credit percentage of FPL was the premium tax credit household income variable divided by the 2009 poverty threshold applicable to the child’s state and household size contained in the premium tax credit household composition variable. Insurance status. Employer-sponsored insurance was defined as insurance obtained through an individual’s or a family member’s employer, former employer, union, or the military. Individuals were not categorized as having employer-sponsored insurance if they also had Medicaid or CHIP coverage. We used a procedure that the Census Bureau has adopted for the American Community Survey to address under-reporting of Medicaid coverage. Specifically, respondents were recategorized as having Medicaid if they were one of the following: a child under age 19 and the unmarried child of a parent with public assistance, a citizen parent with public assistance, or a citizen parent married to a citizen with public assistance; a foster child; or a Supplemental Security Income recipient who met one of the following conditions: (1) did not have children or (2) had children but was not working. Individuals were defined as uninsured if they were not categorized as having employer-sponsored or other private insurance, Medicaid, CHIP, or other public insurance. Access to affordable employer-sponsored insurance. Children who had, as part of their household composition, a taxpayer parent with employer-sponsored insurance, and children who were themselves taxpayers with employer-sponsored insurance, were defined as having met the proposed standard for access to affordable employer-sponsored insurance if the average annual employee contribution for a self-only plan, $921, was less than or equal to 9.5 percent of premium tax credit household income. This definition of access to affordable employer-sponsored insurance did not take into account the requirement that employer-sponsored insurance must provide a minimum value in order to be considered affordable, and it assumed that children were eligible to enroll in a parent’s employer-sponsored insurance. Citizenship or legal residence. Citizenship status is available in SIPP data, but the legal status of noncitizens is not directly available from SIPP data. We defined noncitizens as legally residing if they or a parent reported receiving public insurance, such as Medicaid, or other public assistance, which requires documentation of citizenship or legal residence. The remaining noncitizens were defined as potentially ineligible noncitizens. Based on the variables defined above, we categorized children as eligible or ineligible for Medicaid, CHIP, and the premium tax credit under proposed and final 2014 PPACA eligibility rules. We defined children as eligible for Medicaid under PPACA if they were citizens or legally residing noncitizens whose baseline Medicaid percentage of FPL was less than or equal to 138 percent, or who had an adjusted Medicaid percentage of FPL that was less than or equal to the 2012 state-specific income eligibility level for their age group. Foster children and Supplemental Security Income recipients were also defined as Medicaid eligible.
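The Medicaid categorization just described reduces to a short decision rule. Below is a minimal sketch of our reading of it; the field names, and the ordering of the foster-child and SSI checks relative to the citizenship test, are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Child:
    citizen_or_legal_resident: bool
    baseline_medicaid_fpl_pct: float   # baseline Medicaid household income as a percentage of FPL
    adjusted_medicaid_fpl_pct: float   # after state-specific income deductions
    state_limit_for_age_group: float   # 2012 state-specific eligibility level for the child's age group
    foster_child: bool = False
    ssi_recipient: bool = False

def medicaid_eligible(c: Child) -> bool:
    # Foster children and SSI recipients were categorized as Medicaid eligible outright.
    if c.foster_child or c.ssi_recipient:
        return True
    if not c.citizen_or_legal_resident:
        return False
    # Otherwise: at or below 138 percent of FPL on the baseline measure, or at or
    # below the state-specific level for the child's age group on the adjusted measure.
    return (c.baseline_medicaid_fpl_pct <= 138.0
            or c.adjusted_medicaid_fpl_pct <= c.state_limit_for_age_group)

# Hypothetical example: a citizen infant at 150 percent of FPL baseline (145 percent
# adjusted) in a state with a 185 percent eligibility level for infants.
print(medicaid_eligible(Child(True, 150.0, 145.0, 185.0)))  # True
```

The CHIP and premium tax credit categorizations described next would chain onto this rule, each applying only to children not already captured by the preceding program.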
We defined children as eligible for CHIP under PPACA if they were citizens or legally residing noncitizens not estimated to be eligible for Medicaid, with an adjusted CHIP percentage of FPL that was less than or equal to the applicable 2012 CHIP state-specific income eligibility level. CHIP included both CHIP-funded Medicaid expansion programs and separate CHIP programs. Children with employer-sponsored or other private insurance were defined as ineligible for separate CHIP programs. We defined children as eligible for the premium tax credit under PPACA if they were citizens or legally residing noncitizens not estimated to be eligible for Medicaid or CHIP, with a premium tax credit percentage of FPL between 100 and 400 percent and without access to affordable employer-sponsored insurance. Our analyses considered three groups of children: uninsured children ages 0 through 18, CHIP-eligible children ages 0 through 18 who were uninsured or publicly insured, and all children ages 0 through 18 eligible for Medicaid, CHIP, or the premium tax credit. We limited our analysis to children who participated in the SIPP for all of calendar year 2009. Among uninsured children, we used January 2009 SIPP data to estimate the percentage who would be eligible for Medicaid, CHIP, and the premium tax credit based on the above definitions of 2014 PPACA eligibility rules, as well as the percentage who would be ineligible. Among uninsured or publicly insured children estimated to be eligible for CHIP, we used January 2009 SIPP data to estimate the percentage who would be eligible for the premium tax credit based on the above definitions of 2014 PPACA eligibility rules, as well as the percentage who would be ineligible, if CHIP were not available. Among all children estimated to be eligible for Medicaid, CHIP, or the premium tax credit in January 2009, we used calendar year 2009 SIPP data to estimate the percentage who would have experienced one or two changes in eligibility for specific types of assistance, including becoming ineligible for any type of assistance, under 2014 PPACA eligibility rules within 6 months and a year. We incorporated state-specific continuous eligibility policies; for children living in states that according to CMS had a continuous eligibility policy in place as of 2012, we did not count them as changing eligibility for the relevant program even if their income eligibility changed. Among all children estimated to be eligible for Medicaid, CHIP, or the premium tax credit in January 2009, we also used calendar year 2009 SIPP data to estimate the percentage who would be eligible for Medicaid and CHIP under 2014 PPACA eligibility rules with a Medicaid percentage of FPL higher than 138 percent in January 2009 and during 2009 as a whole, in order to examine the percentage of children who could be eligible for a different program than their parents. For all estimated percentages, we used the SIPP 2009 calendar year sampling weight and calculated a lower and upper bound at the 95 percent confidence level using replicate weights that took into account the complex survey design. The Children’s Health Insurance Program Reauthorization Act of 2009 (CHIPRA) and PPACA included a number of initiatives and provisions under which states may obtain federal funding to assist in enrolling eligible children, and most states have taken advantage of at least one of these. For example, CHIPRA provided incentives to states to undertake eight enrollment initiatives.
Beginning in fiscal year 2009, CMS awarded performance bonuses to states that implemented at least five of the eight enrollment initiatives and also achieved specific enrollment goals. (See fig. 3.) PPACA authorized the provision of planning grants and establishment grants to assist states with the implementation of the American Health Benefit Exchanges (referred to as exchanges)—marketplaces where eligible families and individuals can purchase private health insurance. Recognizing that states will need to upgrade their Medicaid information technology systems to comply with PPACA, CMS has provided states with the opportunity to claim an enhanced Federal Medical Assistance Percentage (FMAP)—the federal share of Medicaid expenditures—through fiscal year 2015 for the costs associated with certain systems improvements, such as updates to their claims processing and enrollment systems. Specifically, instead of the 50 percent FMAP that has historically been available for most Medicaid administrative expenses, qualified states can obtain a 90 percent FMAP for the costs of implementing new information systems and a 75 percent FMAP for the costs of administering these new systems. In addition to the contact named above, Susan T. Anthony, Assistant Director; Susan Barnidge; Emily Beller; Sandra George; Eagan Kemp; and Roseanne Price made key contributions to this report.
PPACA sought to increase access to affordable health insurance, and major provisions, such as a tax credit to offset the cost of private insurance premiums, will become effective in 2014. GAO estimated the extent to which (1) uninsured children would be eligible for Medicaid, CHIP, or the premium tax credit under PPACA, and (2) children would experience a change in eligibility among Medicaid, CHIP, and the premium tax credit under PPACA because of income changes. GAO also assessed CMS steps thus far to help states enroll children and related state challenges. GAO applied proposed and final 2014 PPACA eligibility rules to nationally representative 2009 data from the U.S. Census Bureau and interviewed officials from CMS and IRS, two federal agencies responsible for implementing relevant PPACA provisions, and six states that received federal funds for enrollment efforts. GAO estimates that under the 2010 Patient Protection and Affordable Care Act (PPACA), about three-quarters of approximately 7 million children who were uninsured in January 2009 would be eligible for Medicaid, the State Children’s Health Insurance Program (CHIP), or the new premium tax credit. The remaining children had family incomes too high to be eligible, were noncitizens, or would be ineligible for the premium tax credit because they would be considered to have access to affordable employer-sponsored insurance per the Internal Revenue Service’s (IRS) proposed affordability standard, in which IRS interpreted PPACA as defining affordability for an employee’s eligible family members based on the cost of an employee-only plan. Some commenters raised concerns that IRS’s interpretation was inconsistent with PPACA’s goal of increasing access to affordable health insurance as it does not consider the higher cost of family insurance and could result in some children remaining uninsured. Under PPACA, CHIP is not funded beyond 2015, and states may opt to reduce CHIP eligibility or eliminate programs in fiscal year 2020. Without CHIP, more children could become uninsured. In May 2012, IRS finalized its rule but deferred finalizing the proposed affordability standard. GAO estimates that about 14 percent of children in January 2009 who met 2014 PPACA eligibility criteria for these programs experienced a change in household income that would affect eligibility within 1 year. Changes in eligibility among children in states without policies allowing them to remain eligible for Medicaid and CHIP for a full year were estimated to be higher than in states with such policies. Frequent eligibility changes could deter enrollment if the process for changing enrollment is burdensome. The Centers for Medicare & Medicaid Services (CMS) has provided states with financial incentives and technical guidance to improve enrollment and to implement PPACA provisions. States reported challenges to enrolling eligible children, including the need for guidance to implement certain provisions—which CMS indicated was forthcoming—and state budget constraints. GAO recommends that in future rule making, the Secretary of the Treasury, in consultation with the Commissioner of Internal Revenue, consider the impact of the proposed standard for determining affordability of employer-sponsored insurance on eligible family members, and whether it would be consistent with PPACA to adopt an approach that would consider the cost of insuring eligible family members, or as necessary, seek clarification from Congress regarding its intent with respect to this standard. 
HHS and Treasury were given a draft of this report for review, but neither provided formal comments. Treasury provided technical comments, which GAO incorporated as appropriate.
Established as a department in 1913, Labor carries out its mission by administering and enforcing a variety of federal labor laws guaranteeing workers’ rights to a workplace free from safety and health hazards, a minimum hourly wage and overtime pay, family and medical leave, freedom from employment discrimination, and unemployment insurance. Labor also protects workers’ pension rights; provides job training programs; helps workers find jobs; works to strengthen free collective bargaining; and keeps track of changes in employment, prices, and other national economic measures. About three-fourths of Labor’s almost $35 billion budget is composed of mandatory spending on income maintenance programs, such as the unemployment insurance program. Table 1 shows Labor’s fiscal year 1998 appropriations (in millions) and authorized staff years for its major components, including unemployment insurance and other income maintenance expenses, the Occupational Safety and Health Administration, the Mine Safety and Health Administration, the Pension and Welfare Benefits Administration, and the Office of the Inspector General. Labor’s diverse functions are carried out by different offices in a decentralized organizational structure. Labor has 24 component offices or units and more than 1,000 field offices to support its various functional responsibilities (see fig. 1). However, its many program activities fall into two major categories: enhancing workers’ skills through job training and ensuring worker protection. A third category relates to developing economic statistics, such as the Consumer Price Index (CPI) and unemployment data, which are used by business, labor, and government in formulating fiscal and monetary policy and in making cost-of-living adjustments. The Government Performance and Results Act of 1993 (the Results Act) is intended to provide more reliable information for executive branch and congressional decision-making and to improve program performance. It requires that agencies, in consultation with the Congress and after soliciting the views of other stakeholders, clearly define their missions and articulate comprehensive mission statements that define their basic purposes. It also requires that they establish long-term strategic goals and link annual performance goals to them. Agencies must then measure their performance against the goals they have set and report publicly on how well they are doing. In addition to monitoring ongoing performance, agencies are expected to evaluate their programs and to use the results from these evaluations to improve the programs. The Results Act requires virtually every executive agency to develop a strategic plan covering a period of at least 5 years from the fiscal year in which it is submitted and to submit the plan to the Congress and the Office of Management and Budget (OMB). OMB provided guidance on the preparation and submission of strategic plans as a new part of its Circular No. A-11—the basic instructions for preparing the president’s budget—to underscore the essential link between the Results Act and the budget process. The strategic plans are to include six elements: (1) a mission statement, (2) long-term goals and objectives, (3) approaches or strategies to achieve the goals and objectives, (4) a discussion of the relationship between long-term goals and annual performance goals, (5) key external factors affecting goals and objectives, and (6) evaluations used to establish goals and objectives and a schedule for future evaluations.
OMB required agencies to submit major parts of their draft strategic plans during the spring of 1997. The completed strategic plan was due to OMB and the Congress by September 30, 1997. The act requires agencies to submit annual performance plans tied to the agencies’ budget request to reinforce the connections between the long-term strategic goals outlined in the strategic plans and the day-to-day activities of program managers and staff. Labor is expected to submit its first annual performance plan, which covers fiscal year 1999, this week along with its budget request. Labor’s decentralized structure makes it both more important and more difficult to ensure a system of accountability as envisioned in the Results Act. Labor’s September 30, 1997, strategic plan reflects Labor’s decentralized approach and the difficulty it presents for establishing departmentwide goals and monitoring their attainment. Labor has traditionally operated as a set of individual components, each working largely independently with limited central direction and control. This decentralized organizational structure may allow Labor more flexibility to meet a variety of needs and focus resources in the field. However, it also makes adopting the better management practices envisioned by the Results Act more challenging. That is, articulating a comprehensive departmentwide mission statement, which is linked to results-oriented goals, objectives, and performance measures, is difficult because of the historical lack of central planning and the existing decentralized organizational structure. Labor chose to present individual plans for 15 of its 24 component offices along with a strategic plan overview. This option was not inappropriate—it was specifically allowed by OMB. While OMB Circular A-11 strongly encourages agencies to submit a single, agencywide strategic plan, it states that an agency with disparate functions, such as Labor, may prepare several strategic plans for its major components or programs. Circular A-11 further provides that when an agency does prepare multiple strategic plans for component units, these should not be merely packaged together and submitted as a single strategic plan because the size and detail of such a compilation would reduce the plan’s usefulness. Moreover, the agency is to prepare an agencywide strategic overview that will link individual plans by giving an overall statement of the agency’s mission and goals. Labor’s overview contains six departmentwide goals. Five of these are results-oriented, and the sixth describes the process that will support the achievement of the other goals: lifelong learning and skill development; promoting welfare to work; enhancing pension and health benefits security; safe, healthy, and equal opportunity workplaces; helping working Americans balance work and family; and maintaining a departmental strategic management process. The strategic plan Labor submitted to OMB and to the Congress on September 30, 1997, addressed many of the concerns we raised in our review of the draft plan submitted to OMB and provided to the Congress for consultation 4 months earlier, and it incorporated many improvements that made it more responsive to the Results Act. Labor’s revised strategic overview and all but one of the 15 component unit plans include all six elements required by the act. Further, the overview’s mission statement provides a more complete description of Labor’s basic purpose. 
Moreover, strategies to achieve goals and external factors that could affect the achievement of goals are discussed alongside individual goals, which facilitates the understanding of how particular strategies and external factors are linked to each goal. The overview also attempts to address Labor’s traditionally decentralized management approach, which has posed numerous management challenges for Labor in the past. For example, the sixth departmentwide goal, maintaining a departmental strategic management process, was added to the formally submitted plan. This may be an indication of a renewed emphasis by Labor on developing a more strategic approach to departmental management, an improvement that we have recommended in the past. Other indications of this renewed approach to departmentwide leadership are evident in the similar organizational style of each of the component plans and the clear links between the strategic overview and the plans. For example, in the revised overview, the strategic goals of each of the units are highlighted under the appropriate departmentwide goal. Similarly, in the plans for each of the component units, the unit strategic goals are categorized according to the departmentwide goal to which they correspond. However, the 15 agency goals listed under departmental goal 4—safe, healthy, and equal opportunity workplaces—are organization-specific rather than reflective of goals necessary to achieve the overall mission regardless of where the responsibility is placed organizationally. For example, there is no single stated goal of reducing workplace fatalities, injuries, and illnesses. Instead, four separate goals reflect that intended result in different kinds of workplaces where the Occupational Safety and Health Administration (OSHA), the Mine Safety and Health Administration (MSHA), the Employment and Training Administration (ETA), or the Office of the Assistant Secretary for Administration and Management (OASAM) has responsibility. A fifth goal reflects the responsibility of yet another component unit—the Employment Standards Administration (ESA)—to “minimize the human, social, and financial costs of work-related injuries” by encouraging the prompt return to work after injury in federal workplaces. Establishing goals that reflect organizational units is useful for traditional accountability purposes, such as monitoring resources, processes, and outputs, but less useful for results-oriented planning. A mission-focused rather than organizationally focused planning process would improve Labor’s ability to examine its operations to find a less costly, more effective means of meeting its mission. In past work, we have traced the management problems of many federal agencies to obsolete organizational structures that are inadequate for modern demands. For example, our work has shown that the effectiveness of federal program areas as diverse as employment assistance and training, rural development, early childhood development, and food safety has been plagued by fragmented or overlapping efforts. A frequently cited example of overlap and ineffectiveness is the federal food safety system, which took shape under as many as 35 laws and was administered by 12 different agencies, yet had not effectively protected the public from major foodborne illnesses. As federal agencies become more outcome-oriented, they sometimes find that outmoded organizations must be changed to better meet customer needs and address the interests of stakeholders.
The overview, however, does not discuss how program evaluations were used to develop the plan, nor does it specify how future evaluations will help assess Labor's success in achieving its stated goals. Instead, the overview discusses how evaluations in the regulatory agencies have lagged behind those in the employment and training area. In that respect, it is even more important that Labor provide schedules or timelines for future evaluations, identify the evaluations that will be done, and highlight how future program evaluations will be used to improve performance. Along those lines, we reported earlier that the experiences of OSHA as a pilot project could provide insight into how evaluations can be managed. OSHA has been involved in a number of activities geared toward making the management improvements intended by the Results Act. We believe that although not a requirement of the strategic planning process, it would be helpful for Labor to build on the experiences gained from the OSHA pilot project—identifying lessons learned and whether best practices or other lessons could be applied departmentwide or in units with similar functions.

A focus on results, as envisioned by the Results Act, implies that federal programs that contribute to the same or similar results should be closely coordinated to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. In our review of the strategic plan, we noted that Labor should improve the management of crosscutting program efforts by ensuring that those programs are appropriately coordinated to avoid duplication, fragmentation, and overlap. For example, while Labor's plan refers to a few other agencies with responsibilities in job training programs and notes that Labor plans to work with them, the plan contains no discussion of what specific coordination mechanism Labor will use to realize efficiencies and possible strategies to consolidate or coordinate job training programs to achieve a more effective job training system.

Realizing the benefits of strategic planning will require that Labor have effective information management systems. To date, however, we have found a lack of the reliable and consistent information needed to monitor performance of individual programs and to disseminate information for use by others. Labor must also meet the challenge that faces all government agencies of ensuring information security, getting ready for the year 2000, and ensuring that it has an adequate systems architecture. Responsibility for leading the Department's information management activities, which would include addressing these specific and general issues, rests with Labor's chief information officer. Labor appointed a chief information officer in August 1996. In 1996, OMB raised a question regarding this individual also serving as the Assistant Secretary for Administration and Management, since the Clinger-Cohen Act requires that information resources management be the primary function of the chief information officer. Because it is unclear whether one individual can fulfill the responsibilities required by both positions, OMB has asked Labor to evaluate its approach and report to OMB by the end of fiscal year 1998.

Performance measurement, one of the Results Act's most important features, will require that Labor address a lack of reliable management information across the Department. Under the act, executive branch agencies are required to develop performance plans that use performance measurement to reinforce the connection between the long-term strategic goals outlined in their strategic plans and the day-to-day activities of their managers and staff.
The annual performance plans are to include performance goals for an agency's program activities as listed in the budget, a summary of the necessary resources to conduct these activities, the performance indicators that will be used to measure performance, and a discussion of how the performance data will be validated and verified. Successful performance measurement requires that agencies recognize that they must balance their ideal performance measurement systems against real-world considerations, such as the cost and effort involved in gathering and analyzing data, while ensuring that the data they do collect are sufficiently complete, accurate, and consistent to be useful in decisionmaking.

Although we have not yet reviewed Labor's performance plan for fiscal year 1999, our past reviews of individual programs throughout the agency have found critical program performance information to be lacking, unreliable, or inconsistent. Examples can be found in ETA and OSHA. In administering the H-2A agricultural guestworker program, for example, ETA lacks the management information that would allow it to monitor its performance in meeting the program's statutory and regulatory deadlines. Without information on the extent and cause of missed time periods, ETA cannot ensure that agricultural employers have workers when they are needed.

OSHA provides an example of the questionable reliability of some of Labor's data. As we reported in December 1996, OSHA, in its Integrated Management Information System (IMIS), does not always appropriately characterize or fully capture information on settlement agreements it has reached with employers, nor does it always change inspection data in a timely manner to reflect the terms of a settlement agreement. As a result, information regarding the number or type of violations and penalty amounts associated with a particular inspection can be distorted or inaccurate because it may not include reductions in penalties that occur as part of the settlement process. In addition, the depiction within its database of the relationship between a fatality or injury and the violations detected can be misleading. Not only do unreliable data limit effective management of OSHA's programs; they can also affect the private sector because, unlike some other government-maintained databases, OSHA's IMIS database is publicly accessible. Academia relies on its accuracy in conducting policy research, while some private sector employers use its data in their commercial activities. For example, a database information service company based in Maplewood, New Jersey, offers standard reports and customized searches of Labor's data to assist both public and private sector organizations with screening companies before contracting with them for products or services.

In our work on Job Corps—administered by ETA—we also found that reported information did not provide an accurate picture of program activities and results. Our survey of employers who were reported as hiring Job Corps participants showed that about 15 percent of the job placements in our sample were potentially invalid: A number of employers reported that they had not hired students whom Labor had reported placed with their businesses, and other employers of Job Corps participants identified by Labor could not be found. Obtaining consistent data is further complicated because many of the programs are administered by state and local agencies with federal funding and oversight, such as ETA's Job Training Partnership Act (JTPA) programs.
For example, as we reported in September 1996, we found a lack of consistency among Labor and other agencies administering employment-focused programs for the disabled. Those that collected data on program outcomes—such as data on whether participants got jobs and kept them, what wages they received, and whether they received employee benefits such as health insurance—used different definitions for key data. They also had different eligibility criteria, paperwork requirements, software, and confidentiality rules that limited comparisons of program performance. The need for consistent data is particularly significant given the challenges Labor faces in meeting the goals of workforce development within the context of an uncoordinated system of multiple employment training programs operated by numerous departments and agencies. For fiscal year 1995, we identified 163 federal employment training programs, with a total budget of $20.4 billion, operated by a total of 15 federal departments and agencies; Labor had responsibility for 37 of these programs. Although many of these programs had similar goals and overlapping missions, they often had inconsistent measures for program success—where there were measures at all. As a result, we do not know whether individual programs are effective or whether the federal government’s efforts to improve skills, employment, and wages of workers are successful. In carrying out its mission, Labor produces some information for use outside the Department by both government and private sector entities. Examples include the prevailing wage rates applicable under certain statutes and statistical data in the field of labor economics, such as the CPI. This information—like the performance management information Labor uses—can be affected by weaknesses in Labor’s information management systems. ESA, for example, sets prevailing wage rates under the Davis-Bacon Act for construction job classifications in some 3,000 individual counties or groups of counties and for four different types of construction. Employers on federal construction projects must pay workers wages at or above these rates. Wage rate determinations are based on voluntarily submitted wage and benefit data from employers and third parties, such as unions or trade groups, on construction projects. In May 1996, we reported that Labor’s wage determination process contained weaknesses that could permit the use of fraudulent or inaccurate data in the setting of prevailing wage rates. If these weaknesses allow the use of erroneous data, the result may be in either of two directions. If the wage rate is set too low, construction workers may be paid less than the amount to which they are entitled; if the rate is too high, the government may pay excessive construction costs. Labor has begun to address these process weaknesses. Its long-term strategy involves an initiative funded at about $4 million in its fiscal year 1997 budget to develop, evaluate, and implement alternative reliable wage determination methodologies that would provide accurate and timely wage determinations at reasonable cost. We recommended some additional steps, however, that would, in the short-term, improve the verification of wage data submitted by employers. The House Appropriations Committee subsequently directed Labor to ensure that an appropriate portion of the funds appropriated for the program in fiscal year 1997 is used to implement those recommendations and requested that we review the success of those efforts. 
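The wage determination arithmetic at issue can be illustrated with a short sketch in Python. The rule shown (a single rate prevails if a strict majority of reported workers receive it; otherwise a worker-weighted average is used) is the commonly described convention of the period rather than a formula spelled out in this statement, and both the rule's exact form and the data are assumptions for illustration only:

```python
from collections import Counter

def prevailing_wage(reports):
    """Simplified prevailing-wage rule: if a strict majority of the
    reported workers in a classification receive the same hourly rate,
    that rate prevails; otherwise, fall back to the worker-weighted
    average of all reported rates. Illustrative only."""
    total = sum(count for _, count in reports)
    workers_at_rate = Counter()
    for rate, count in reports:
        workers_at_rate[rate] += count
    top_rate, top_count = workers_at_rate.most_common(1)[0]
    if top_count * 2 > total:  # strict majority of reported workers
        return top_rate
    return sum(rate * count for rate, count in reports) / total

# Hypothetical employer submissions for one county and one job
# classification: (hourly rate, number of workers reported).
submissions = [(18.50, 40), (18.50, 25), (22.00, 30), (16.00, 10)]
print(f"Prevailing rate: ${prevailing_wage(submissions):.2f}")  # $18.50
```

As the sketch suggests, a relatively small number of fraudulent or inaccurate submissions could tip the majority test and move the published rate in either direction, which is why verification of the submitted data matters.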
We expect to begin this study in early 1998.

Another example involves the Bureau of Labor Statistics (BLS) and the CPI, which would represent consumer expenditures more accurately if its market basket expenditure weights were updated more frequently. Because BLS has updated these weights only every 10 years or so, we recommended more frequent updating of the market basket expenditure weights to make the CPI more timely in its representation of consumer expenditures.

Information management is the subject of two new areas we have added this year to our list of areas at high risk of fraud, waste, abuse, or mismanagement: information security and the year 2000 problem, both of which apply to Labor as well as to all other government agencies. Information security generally involves an agency's ability to adequately protect the information it collects from unauthorized access. Ensuring information security is an ongoing challenge for Labor, especially given the sensitivity of some of the employee information being collected. Ensuring confidentiality is also essential to the quality of the information collected, given the voluntary nature of many of the surveys that Labor administers, such as the wage reports used to set Davis-Bacon prevailing wage rates. The second area involves the need for computer systems to be changed to accommodate dates beyond the year 1999. This year 2000 problem stems from the common practice of abbreviating years by their last two digits. Thus, miscalculations in all kinds of activities, such as benefit payments, could occur because a computer system would interpret 00 as 1900 instead of 2000. Labor, along with other agencies that use dates to process information, is faced with the challenge of developing strategies to deal with this potential problem area in the near future. We have been asked to look at a number of efforts in individual Labor units to assess their progress toward making their computer systems capable of accommodating 21st century dates.

Labor's challenge to obtain complete, reliable, and consistent information throughout the Department is formidable. However, while solutions to complex information management and technology problems are not simple, they do exist. For example, as computer-based information systems have become larger and more complex over the past 10 years, the importance of, and reliance on, what is called a "systems architecture" has correspondingly increased. Simply put, an architecture is the blueprint that guides and constrains the development and evolution of a collection of related systems. This blueprint first describes operations in logical terms, such as defining the organization's functions, providing high-level descriptions of its information systems and their interrelationships, and specifying how and where information flows. Second, it describes operations in technical terms, such as specifying hardware, software, data communications, security, and performance characteristics. The Congress has recognized the importance of such an architecture in improving the effectiveness and efficiency of federal information systems. The Clinger-Cohen Act of 1996 requires, among other provisions, that department-level chief information officers develop, maintain, and facilitate the implementation of integrated systems architectures. A sound systems architecture would help ensure that data being collected and maintained within an organization are structured and stored in a manner that makes them accessible, understandable, and useful throughout the organization.
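The arithmetic behind the two-digit date problem described above is simple to demonstrate. The following sketch is purely illustrative and is not drawn from any Labor system; it shows both the failure mode and the "windowing" workaround that many remediation efforts of the era used:

```python
from datetime import date

def years_of_service(two_digit_hire_year, today=date(2000, 3, 1)):
    """Naive logic in a system that stored only two-digit years and
    expanded them all as 19xx -- the root of the year 2000 problem."""
    hire_year = 1900 + two_digit_hire_year  # '00' becomes 1900, not 2000
    return today.year - hire_year

print(years_of_service(98))  # 2   -- correct for a 1998 hire
print(years_of_service(0))   # 100 -- a January 2000 hire is credited
                             #        with a century of service

def windowed_year(two_digit_year, pivot=50):
    """A common remediation ('windowing'): two-digit years below the
    pivot are read as 20xx, the rest as 19xx."""
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

print(windowed_year(0), windowed_year(98))  # 2000 1998
```

A benefit payment computed from the naive function above would be based on 100 years of service rather than a few weeks, which is the kind of miscalculation the high-risk designation anticipates.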
Labor’s programs touch the lives of nearly every American because of the Department’s responsibilities for employment training, job placement, and income security for workers when they are unemployed, as well as workplace conditions. Labor’s mission is an urgent one. Each day or week or year of unemployment or underemployment is one too many for individuals and their families. Every instance of a worker’s being injured on the job or not paid legal wages is one that should not occur. Every employer frustrated in attempts to find competent workers or to understand and comply with complex or unclear regulations contributes to productivity losses our country can ill afford. And every dollar wasted in carrying out the Department’s mission is one we cannot afford to waste. Labor currently has a budget of $34.6 billion and about 16,700 staff to carry out its program activities. Over the years, our work on the effectiveness of these programs has called for more efficient use of these resources, and we have recommended that Labor improve its strategic planning process. The current federal effort to improve strategic planning seeks to shift the focus of government decision-making and accountability away from a preoccupation with activities—such as awarding grants and conducting inspections—to a focus on the results of those activities such as real gains in employability, safety, or program quality. Labor’s strategic planning efforts are still very much a work in progress. Like other agencies, Labor must focus more on the results of its activities and on obtaining the information it needs for a more focused, results-oriented management decision-making process. The Results Act provides a statutory framework needed to manage for results, and Labor has begun to improve its management practices in ways that are consistent with that legislation. The benefits of the Results Act can be particularly important for a decentralized department such as Labor. However, such an organizational structure provides challenges in meeting the legislation’s objectives. Today’s information systems offer the government unprecedented opportunities to deliver high-quality services, tailored to the public’s changing needs, more effectively, faster, and at lower cost. Moreover, better systems can enhance the quality and accessibility of important knowledge and information, both for the public and for federal managers. It is increasingly important that Labor take advantage of these opportunities and address its information management weaknesses as it implements the Results Act if the benefits envisioned are to be fully realized. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee may have. H-2A Agricultural Guestworker Program: Changes Could Improve Services to Employers and Better Protect Workers (GAO/HEHS-98-20, Dec. 31, 1997). Job Corps: Participant Selection and Performance Measurement Need to Be Improved (GAO/T-HEHS-98-37, Oct. 23, 1997). The Results Act: Observations on Department of Labor’s June 1997 Draft Strategic Plan (GAO/HEHS-97-172R, July 11, 1997). Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, Feb. 1997). OSHA’s Inspection Database (GAO/HEHS-97-43R, Dec. 30, 1996). Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, Sept. 
Education and Labor: Information on the Departments' Field Offices (GAO/HEHS-96-178, Sept. 16, 1996).

People With Disabilities: Federal Programs Could Work Together More Efficiently to Promote Employment (GAO/HEHS-96-126, Sept. 3, 1996).

Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).

Davis-Bacon Act: Process Changes Could Raise Confidence That Wage Rates Are Based on Accurate Data (GAO/HEHS-96-130, May 31, 1996).

Multiple Employment Training Programs: Major Overhaul Needed to Reduce Costs, Streamline the Bureaucracy, and Improve Results (GAO/T-HEHS-95-53, Jan. 10, 1995).

Multiple Employment Training Programs: Basic Program Data Often Missing (GAO/T-HEHS-94-239, Sept. 28, 1994).

Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994).
GAO discussed the: (1) Department of Labor's progress in strategic planning as envisioned by the Government Performance and Results Act of 1993; and (2) challenge Labor faces in ensuring the effective information management necessary for Labor to fully realize the benefits of that planning. GAO noted that: (1) Labor's decentralized management structure makes adopting the better management practices envisioned by the Results Act--that is, articulating a comprehensive departmentwide mission statement linked to results-oriented goals, objectives, and performance measures--more challenging; (2) Labor's September 30, 1997, strategic plan reflected its decentralized approach and the difficulty it presents for establishing departmentwide goals and monitoring their attainment; (3) Labor chose to present individual plans for 15 of its 24 component offices along with a strategic plan overview; (4) the overview contained five departmentwide goals that are generally results-oriented and a departmentwide management goal; (5) however, GAO is concerned that the lack of a departmentwide perspective in the development of Labor's strategic plan makes it organizationally driven rather than focused on mission; (6) several of the goals of the component units responsible for ensuring safe and healthful workplaces are similar yet listed separately for each unit; (7) a more mission-focused approach would improve Labor's ability to identify ways in which its operations might be improved to minimize potential duplication and promote efficiencies; (8) in order to measure performance--the next step required under the Results Act--Labor will need information that is sufficiently complete, reliable, and consistent to be useful in decisionmaking; (9) GAO's work has raised questions about how well Labor is meeting this management challenge; (10) GAO has found data to be missing, unreliable, or inconsistent in agencies throughout the Department; (11) Labor, as well as all other federal agencies, must also address two information management issues GAO has described this year as high risk because of vulnerabilities to waste, fraud, abuse, and mismanagement; (12) the first, information security, involves the agency's ability to protect information from unauthorized access; (13) the second requires Labor to rapidly change its computer systems to accommodate dates in the 21st century; and (14) while Labor has appointed a chief information officer, as required under the Clinger-Cohen Act of 1996, to oversee these and other information management issues, questions remain as to whether or not other duties required of the individual appointed will allow her to devote the attention necessary to ensure success in this critical management area.
DHS’s mission is to lead the unified national effort to secure America by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation. DHS is also responsible for ensuring that the nation’s borders are safe and secure, that they welcome lawful immigrants and visitors, and that they promote the free flow of commerce. Created in 2002, DHS assumed control of about 209,000 civilian and military positions from 22 agencies and offices that specialize in one or more aspects of homeland security. The purpose behind the merger was to improve coordination, communication, and information sharing among these multiple federal agencies. Figure 1 shows DHS’s organizational structure and table 1 identifies DHS’s principal organizations and describes their missions. Within the department’s Management Directorate, headed by the Under Secretary for Management, is the Office of the Chief Information Officer (CIO). The CIO’s responsibilities include setting departmental IT policies, processes, and standards, and ensuring that IT acquisitions comply with DHS IT management processes, technical requirements, and approved enterprise architecture, among other things. Additionally, the CIO chairs DHS’s Chief Information Officer Council (CIO Council), which is responsible for ensuring the development of IT resource management policies, processes, best practices, performance measures, and decision criteria for managing the delivery of IT services and investments, while controlling costs and mitigating risks. DHS spends billions of dollars each year on IT investments to perform both mission-critical and support functions that frequently must be coordinated among components, as well as among external entities. Of the $5.6 billion that DHS plans to spend on 363 IT-related investments in fiscal year 2012, $4.4 billion is planned for the 83 the agency considers to be a major investment; namely, costly, complex, and/or mission critical. Of these 83 major IT investments, 68 are under development and have planned fiscal year 2012 costs of approximately $4 billion. Examples of major investments under development that are being undertaken by DHS and its components include: U. S. Customs and Border Protection—Automated Commercial Environment/International Trade Data System will incrementally replace existing cargo processing technology systems with a single system for land, air, rail, and sea cargo and serve as the central data collection system for federal agencies needing access to international trade data in a secure, paper-free, web-enabled environment. Immigration and Customs Enforcement —TECS Modernization is to replace the legacy mainframe system developed by the U.S. Customs Service in the 1980s to support its inspections and investigations. Following the creation of DHS, those activities were assigned to CBP and ICE, respectively. CBP and ICE are now working to modernize their respective portions of the system in a coordinated effort with separate funding and schedules. ICE’s portion of the investment will include modernizing the investigative case management and related support modules of the legacy system. National Protection and Programs Directorate—National Cybersecurity Protection System, also referred to as EINSTEIN, is an integrated system that includes intrusion detection, analytics, intrusion prevention, and information sharing capabilities that are used to defend the federal executive branch civilian agencies’ IT infrastructure from cyber threats. 
It consists of the hardware, software, supporting processes, training, and services that are being developed and acquired to support DHS's mission requirements.

The success of major IT investments is judged by, among other things, the extent to which they deliver promised system capabilities and mission benefits on time and within cost. Consequently, our best practices research and extensive experience at federal agencies, as well as OMB guidance, stress the importance of federal IT investments meeting cost and schedule milestones. GAO's Information Technology Investment Management guidance highlights the need to regularly determine each IT project's progress toward cost and schedule milestones using established criteria and calls for corrective efforts when milestones are not being met. The guidance also calls for such corrective efforts to be defined and documented.

OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage their approved projects. In December 2010, OMB issued its 25 Point Implementation Plan to Reform Federal Information Technology Management, a plan to change IT management throughout the federal government by strengthening the role of investment review boards to enable them to more adequately manage agency IT portfolios, redefining the role of agency CIOs and the Federal CIO Council to focus on portfolio management, and implementing face-to-face reviews to identify IT investments that are experiencing performance problems and to select them for a TechStat session—a review of selected IT investments between OMB and agency leadership that is led by the Federal CIO.

In addition, OMB provides agencies with tools to measure how effectively investments are meeting established cost and schedule parameters. Specifically, OMB requires federal agencies to provide information on their IT investments as a part of their yearly budget submissions, and to do so using an exhibit 53, in which they list all of their IT investments and their associated costs, and an exhibit 300, also called the Capital Asset Plan and Business Case, which includes an investment's cost and schedule commitments. Further, in June 2009, OMB deployed the IT Dashboard, a website that displays near real-time information on, among other things, the cost and schedule performance of all of an agency's major IT investments. The IT Dashboard provides, among other things, a cost and schedule performance rating for each major IT investment's subsidiary project. These ratings are based on the extent to which the project is meeting its cost and schedule commitments. For example, projects experiencing a 10 percent or greater cost and/or schedule variance are considered to be at an elevated risk of not delivering promised capabilities on time and within budget, and, as such, require management attention.

We have previously reported on the cost and schedule challenges associated with major DHS IT investments, such as those with CBP's Secure Border Network (SBInet) and NPPD's United States Visitor and Immigrant Status Indicator Technology (US-VISIT). For example, in 2007 we reported that the Secure Border Network had experienced significant cost and schedule shortfalls due, in part, to the project not having fully defined activities.
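The basic variance test behind these Dashboard ratings can be sketched in a few lines. The following Python fragment is illustrative only; the Dashboard's actual rating formula is more detailed, and the function names and figures here are assumptions:

```python
def variance_pct(planned, actual):
    """Percentage by which an actual figure exceeds its planned value."""
    return (actual - planned) / planned * 100.0

def needs_management_attention(cost_var_pct, sched_var_pct, threshold=10.0):
    """Apply the 10 percent test described above: a project whose cost
    or schedule variance meets or exceeds the threshold is considered
    at elevated risk and requires management attention."""
    return cost_var_pct >= threshold or sched_var_pct >= threshold

# Hypothetical project: $1.85 million spent against a $1.60 million
# plan, and 13 months elapsed against a 12-month schedule.
cost_var = variance_pct(1.60, 1.85)   # ~15.6 percent over cost
sched_var = variance_pct(12, 13)      # ~8.3 percent behind schedule
print(needs_management_attention(cost_var, sched_var))  # True
```

Note that a single variance at or above the threshold is enough to flag the project, even when the other dimension is within bounds.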
In addition, in May 2010, we reported that continued delays to the investment were likely because, among other things, it had not developed a reliable integrated master schedule and the schedule did not adequately capture all necessary activities. In these reports, we made recommendations to address these program weaknesses and help keep the investment on schedule and within cost. With regard to the US-VISIT investment, we noted in a November 2009 report that officials had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver the Comprehensive Exit project to more than 300 ports of entry on schedule and within cost. Accordingly, we recommended that DHS strengthen management of the project by ensuring that it develop and maintain integrated scheduling plans in accordance with applicable key practices; DHS concurred with the recommendations.

Further, in 2011, as part of our high-risk series, we reported that, because of acquisition weaknesses, major investments, such as the recently canceled SBInet, continued to be challenged in meeting capability, benefit, cost, and schedule expectations. Based on our prior work, we identified and provided to DHS key actions and outcomes critical to addressing this and other challenges. Most recently, we reported in July 2012 that DHS was making progress in developing and implementing a new IT governance process. We found that DHS had developed a new governance framework and that the associated policies and procedures were generally consistent with recent OMB guidance and with best practices for managing projects and portfolios identified in GAO's Information Technology Investment Management framework; however, the agency had not yet finalized most policies and procedures and was not fully using best practices for the implementation. Accordingly, we made recommendations to DHS to, among other things, strengthen its new governance process and related IT management capabilities; the agency agreed to implement the recommendations.

GAO, High-Risk Series: An Update, GAO-11-278 (Washington, D.C.: February 2011).

GAO, Information Technology: DHS Needs to Further Define and Implement Its New Governance Process, GAO-12-818 (Washington, D.C.: July 2012).

As discussed previously, our best practices research and experience at federal agencies as well as OMB guidance stress the importance of investments meeting their cost and schedule commitments. OMB requires agencies to report to the IT Dashboard information on the cost and schedule performance of all their major IT investments. Our analysis of the cost and schedule performance for DHS's 68 major IT investments shows that approximately two-thirds of these investments and their subsidiary projects were meeting cost and schedule commitments; the remaining one-third had at least one subsidiary project that was not meeting its commitments. Specifically, out of the 68 major investments under development, 47 were meeting their cost and schedule commitments. (See app. II for a listing of the 47 investments and subsidiary projects that are meeting their commitments.) The remaining 21 investments had one or more subsidiary projects that were not meeting cost and/or schedule commitments; the total planned cost for all projects in development for the 21 investments is approximately $1 billion. Table 2 lists the investments experiencing cost and/or schedule shortfalls, and the total planned project cost for each investment.
A list of the investments and their subsidiary projects experiencing cost and/or schedule shortfalls is included in appendix III. Of the 21 investments with a shortfall, 5 had one or more subsidiary projects with a cost shortfall, 18 had one or more projects with a schedule shortfall, and 2 had a project with both a cost and schedule shortfall. These shortfalls potentially impact the total cost of investments and can delay the implementation of key systems. For example:

TSA's Federal Air Marshal Service Mission Scheduling and Notification System: a project to modernize the core scheduling software component of the system, which, among other things, determines the allocation of federal air marshals to flights and coordinates and communicates mission assignments, was delayed.

NPPD's Critical Infrastructure Technology and Architecture investment: a project to develop an information-sharing application to be used by federal, state, and local stakeholders to increase their capability to combat terrorist use of improvised explosive devices had cost overruns of approximately 16 percent ($296,000).

CBP's Northern Border, Remote Video Surveillance System investment: a project to incorporate IT security improvements to the remote video surveillance systems in Buffalo, New York, and Detroit, Michigan, was delayed by approximately 2 months.

FEMA's Disaster Assistance Improvement Plan: a subsidiary project—site usability enhancements—that included enhancements to the DisasterAssistance.gov website to improve usability by making it easier and more intuitive for users to apply for and find information about disaster assistance from federal, state, local, tribal, and private nonprofit organizations was delayed.

The primary causes of the shortfalls in cost and schedule associated with DHS's 21 major IT investments were (in descending order of frequency): inaccurate preliminary cost and schedule estimates, technical issues in the development phase, changes in agency priorities, lack of understanding of user requirements, and dependencies on other investments that had schedule shortfalls. A summary of these causes by investment and the associated component is shown in table 3 and is followed by (1) our analysis of these causes by category and (2) discussion of our past work on the department's major investments and related IT management processes where we identified some of these same causes and made recommendations to strengthen management in these areas. Specifically, our analysis of these causes by category showed:

Inaccurate preliminary cost and schedule estimates: Inaccurate cost and schedule estimates in eight investments resulted in significant cost and schedule increases. For example, preliminary schedule estimates for a project under CBP's Non-Intrusive Inspection Systems Program investment—which supports the detection and prevention of contraband from entering the country—were inaccurate due to underestimating the time needed to complete a key task. Specifically, project officials did not accurately estimate how long it would take to complete an environmental assessment because they did not consider all requirements in their initial planning, thus resulting in a schedule delay of approximately 2 months.
The NPPD investment called Critical Infrastructure Technology and Architecture had a project—integral to developing an information sharing application to be used by federal, state, and local stakeholders to increase their capability to combat terrorist use of improvised explosive devices—where actual costs for completing critical tasks were about 16 percent over the cost estimated at project initiation. According to investment officials, this was due in part to project staff developing the cost estimates very quickly and not fully validating them before proceeding with the project. TSA's Hazmat Threat Assessment Program (which performs a threat assessment on commercial truck drivers who transport hazardous materials to determine the threat status to transportation security) had a schedule shortfall with a project because, in part, the time needed to modify a contract was not accurately estimated, which led to a schedule delay of nearly 3 months.

Technical issues in the development phase: Technical issues in the development phase caused cost or schedule slippages in six investments. Examples include:

Changes made to one part of ICE's Detention and Removal Operations Modernization investment, which is designed to upgrade IT capabilities to support efficient detention and removal of non-U.S. citizens, created a cascading effect, leading to changes to other parts of the system, and contributed to delays of more than a month.

Issues in establishing a testing and development environment that matched the production environment delayed testing in several projects under FEMA's Disaster Assistance Improvement Plan investment (which is to ease the burden on disaster survivors by providing them with a mechanism to access and apply for disaster assistance).

Technical complications during deployment caused the schedule to slip by 79 days on a project under CBP's Land Border Integration investment, which assists with the processing of inbound and outbound travel at border patrol checkpoints nationwide. Specifically, the handheld devices used for scanning license plates used a wireless spectrum that had interference problems at certain sites, and resolving this issue took more time than had been planned for.

Changes in agency priorities: Four investments experienced cost and schedule slippages due to changing priorities at the agency level. In particular, the schedules were delayed for two NPPD US-VISIT investments: the Arrival and Departure Information System, which collects arrival and departure information on non-U.S. citizens traveling to the United States as well as current immigration status updates for each traveler, and the Automated Biometric Identification System, a fingerprint repository and biometric-matching system. Delays were due to a management decision to focus on accelerating the development of other investments or projects, which took resources (i.e., personnel) away from the investment. Consequently, the Arrival and Departure Information System's fiscal year 2011 maintenance release project was delayed approximately 3 months, and the Automated Biometric Identification System's fiscal year 2011 product support project was delayed by approximately 7 months. Further, a critical subsidiary project to deliver predictive analytical capabilities under USCG's Business Intelligence investment, which is designed to reduce organizational uncertainty and risk in decision making, had a schedule delay of approximately 3 months due to changing priorities.
Project officials said that Coast Guard management directed resources to other projects with a higher priority, thus limiting the ability to work on the predictive analytics capability project.

Lack of understanding of user requirements: Three investments had slippages resulting from misunderstood or inadequately developed user requirements and expectations. A project under USCIS's Claims 4 investment, which is a processing system for the adjudication of naturalization applications, was delayed by 2 weeks because inadequate user requirements led to a design flaw that required additional time to address. Customer priorities and expectations for ICE's Detention and Removal Operations Modernization investment changed over time, which contributed to schedule delays of more than a month. The schedule for a CBP TECS Modernization investment project, which supports the screening of travelers entering the United States, was delayed due to users requesting that the application in development interface with a separate system. The project was delayed by 3 months while program officials developed new requirements.

Dependencies on other investments that had schedule shortfalls: Investments also encountered schedule slippages when interdependent investments encountered delays. For example, USSS's Information Integration and Technology Transformation investment, which is to provide advanced security measures to electronically send, receive, and track access to USSS's unclassified and classified information, was delayed approximately 6 months because another component's project on which it depended was delayed. In addition, costs for a project under FEMA's Disaster Assistance Improvement Plan investment rose approximately 27 percent ($210,000) due, in part, to the delayed deployment of another investment.

Other causes of cost and schedule slippages that were cited by department officials included delays in receiving funding and gaps in leadership due to key management turnover. Specifically, the schedule for three projects under TSA's Federal Air Marshal Service Mission Scheduling and Notification investment was delayed due to delays in receiving full funding. DHS had provided the investment with partial funding, and thus investment officials produced an investment plan based on that funding level; when full funding was subsequently restored, the plan had to be updated, which resulted in delays. The costs for a key subsidiary project of NPPD's Infrastructure Security Compliance, Chemical Security Assessment Tool, which is to provide for the electronic submission of chemical facility data and controlled use of such data, rose approximately 20 percent ($719,000) due, in part, to multiple director-level program changes, which led to corresponding changes in the investment's vision and direction.

In our past work on DHS's investments and related IT management processes, we have identified some of these same causes and made recommendations to strengthen management in these areas. For example, with regard to cost estimating, we reported that forming a reliable estimate of costs provides a sound basis for measuring against actual cost performance and that the lack of such a basis contributes to variances. To help agencies establish such a capability, we issued a guide in March 2009 that was based on the practices of leading organizations. In a July 2012 report examining how well DHS is implementing these practices, we reported that the department had weaknesses in cost estimating.
Accordingly, we made recommendations to DHS to strengthen its cost estimating capabilities, and the department has plans and efforts under way to implement our recommendations.

GAO, Department of Homeland Security: Assessments of Selected Complex Acquisitions, GAO-10-588SP (Washington, D.C.: June 2010).

A variety of best practices exist to guide the successful acquisition of IT investments, including how to develop and document corrective actions for projects experiencing cost and schedule shortfalls. In particular, GAO's Information Technology Investment Management framework calls for agencies to develop and document corrective efforts for underperforming projects. It also states that agencies are to ensure that, as projects develop and costs rise, the project continues to meet mission needs at the expected levels of cost and risk; if projects are not meeting expectations or if problems have arisen, agencies are to quickly take steps to address the deficiencies. In addition, DHS policy requires corrective actions when cost or schedule variances exceed 8 percent.

DHS developed and documented corrective efforts for 12 of the 21 major investments with a shortfall, but the remaining 9 did not have documented corrective efforts. Table 4 depicts the investments with shortfalls and whether corrective efforts had been developed and documented. Actions for the 12 included:

CBP: Automated Commercial Environment/International Trade Data System had schedule shortfalls due to an inadequate testing and development environment; these were resolved by leveraging CBP's disaster recovery site to perform the testing.

CBP: Land Border Integration investment schedule delays due to technical complications were resolved through the use of risk management processes (e.g., identification, assessment, tracking, and mitigation of risks) identified in the investment's July 2011 risk management plan, which addressed the cause of the investment shortfalls.

CBP: Non-Intrusive Inspection Systems Program, which had schedule shortfalls due to inaccurate estimates, tracked the project status and risks via project health status reports and other mitigation strategies.

CBP: Remote Video Surveillance System investment officials developed and documented an investment rebaseline to address the investment's schedule shortfall, which was due to an inaccurate initial project schedule estimate.

CBP: The TECS Modernization investment had schedule shortfalls because system development was delayed by questions about whether planned enhancements duplicated functions performed by another agency system; program officials briefed key management on the differences between the system functions, and development was allowed to continue.

FEMA: Disaster Assistance Improvement Plan cost and schedule shortfalls were due to, among other things, dependencies on other investments. DHS developed a remediation plan for each shortfall to limit the negative impact.

ICE: Detention and Removal Operations Modernization investment schedule shortfalls were due in part to a lack of understanding of user requirements; to address these issues, investment officials worked with key stakeholders to engage users to more thoroughly identify user requirements.
NPPD: Critical Infrastructure Technology and Architecture investment's cost shortfalls, which were due to inaccurate initial cost estimates, were resolved by investment officials through several corrective efforts, including completing the project's life cycle cost estimate.

NPPD: Infrastructure Security Compliance, Chemical Security Assessment Tool investment schedule was delayed due to multiple changes in leadership and in the investment's direction. Project officials developed and documented an investment rebaseline, which was approved in February 2012. It was intended to, among other things, develop a more accurate schedule.

TSA: The Hazmat Threat Assessment Program investment schedule was delayed because the time needed to modify a contract had not been accurately estimated. In response, investment officials documented an investment rebaseline, which was approved in March 2012.

TSA: The Security Technology Integrated Program investment had schedule shortfalls from inaccurate estimates of the time needed to revise a contract. To resolve these issues, officials developed and documented an initiative to improve methods used to identify and track risks and resolve the schedule shortfalls. This effort is intended to help the investment avoid additional schedule changes.

USSS: The Information Integration and Technology Transformation investment had one project with schedule shortfalls due to dependencies on another component's investment that had schedule slippages; issues were addressed by following the mitigation actions detailed in the investment's risk management plan.

With regard to the remaining nine investments, officials for three were unable to provide us with documentation, even though they stated that they had developed some corrective efforts, and the other six did not engage in corrective efforts to address shortfalls. Of the three investments without documentation, officials from TSA's Federal Air Marshal Service Mission Scheduling and Notification System investment, for example, reported that they had addressed the project's schedule shortfall—which was due, in part, to a support contractor not having adequate staffing—by performing the work within the agency instead of relying on the contractor. Further, according to TSA officials, the cost and schedule shortfalls on the Air Cargo Security investment, which were due to technical complications and dependencies on other investments, were addressed by establishing a new cost and schedule baseline. Nonetheless, this lack of documentation is inconsistent with the direction of DHS's guidance and related best practices, and it shows a lack of process discipline and attention to key details, which raises concern about the thoroughness of corrective efforts.

Of the six investments without any corrective efforts, officials from these investments (namely, the Office of the Chief Information Officer's Human Resources IT investment, NPPD's US-VISIT Automated Biometric Identification System and Arrival and Departure Information System investments, USCG's Business Intelligence investment, NPPD's National Cybersecurity Protection System, and USCIS's Claims 4 investment) stated that they did not develop and document corrective efforts because they believed DHS's guidance does not call for it in their circumstances.
Specifically, the officials said that although DHS's guidance calls for corrective actions to be developed and documented when an investment or its projects experiences a life cycle cost or schedule variance of 8 percent or greater, the variances on their project activities thus far were not large enough to constitute such a life cycle variance. The impact of this is that multiple projects can continue to experience shortfalls—which increases the risk that investments will experience serious life cycle cost and schedule variances—without having to develop and document corrective actions and thus alert top management about potential problems and associated risks. This is inconsistent with the direction of OMB, which requires agencies to report (via the IT Dashboard) on the cost and schedule performance of their projects and considers those projects with a 10 percent or greater variance to be at an increased level of risk of not being able to deliver promised capabilities on time and within budget, and thus requiring special attention from management. It is also inconsistent with our best practices research and experience at federal agencies, which stresses that agencies report to management when projects are not meeting expectations or when problems arise and quickly develop and document corrective efforts to address the problems. Further, our research and work at agencies has shown that waiting to act until significant life cycle variances occur can be risky and costly, as life cycle schedules typically span multiyear periods, allowing underperforming projects to vary from their cost and schedule goals for an extended amount of time without any requirement for corrective efforts. Consequently, until these guidance shortcomings are addressed and each underperforming project has defined and documented corrective actions, the major investments these projects support will remain at increased risk of cost and schedule shortfalls.

Most of the projects comprising DHS's 68 major IT investments are meeting their cost and schedule commitments, but 21 major investments—integral to DHS's mission and costing approximately $1 billion—have projects that are experiencing significant cost and schedule shortfalls. These shortfalls place these investments at increased risk of not delivering promised capabilities on time and within budget, which, in turn, poses a risk to DHS's ability to fully meet its mission of securing the homeland. DHS guidance does not require projects experiencing significant cost and schedule shortfalls to develop and document corrective efforts until they cause a life cycle cost and schedule variance. This increases risk and is contrary to effective IT investment practices. Given that DHS is currently establishing and implementing new IT governance processes, the department is positioned to address the guidance shortfalls.

We recommend that the Secretary of Homeland Security direct the appropriate officials to:

Establish guidance that provides for developing corrective efforts for major IT investment projects that are experiencing cost and schedule shortfalls of 10 percent or greater, similar to those identified in this report.

Ensure that major IT investment projects with shortfalls of 10 percent or greater have defined and documented corrective efforts.
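The life cycle threshold issue underlying these recommendations can be made concrete with simple arithmetic. The figures in the following sketch are hypothetical:

```python
# Hypothetical figures: a $200 million life cycle estimate containing
# one $20 million project that overruns its own budget by 15 percent.
life_cycle_estimate = 200.0  # millions of dollars
project_budget = 20.0
project_overrun_pct = 15.0

overrun_dollars = project_budget * project_overrun_pct / 100            # 3.0
life_cycle_variance_pct = overrun_dollars / life_cycle_estimate * 100   # 1.5

# The project trips OMB's 10 percent project-level risk threshold...
print(project_overrun_pct >= 10.0)       # True
# ...but contributes only 1.5 percent to the life cycle variance, well
# under an 8 percent life-cycle trigger, so no corrective action would
# be required under a rule keyed to life cycle variance alone.
print(life_cycle_variance_pct >= 8.0)    # False
```

As the arithmetic shows, a sizable project-level slip can be diluted almost to invisibility when the trigger is measured against a multiyear, investment-wide baseline, which is the gap the recommendations are intended to close.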
In its written comments, signed by the Director for the Departmental GAO-OIG Liaison Office and reprinted in appendix IV, DHS concurred with our recommendations and estimated that it would implement the first recommendation by September 30, 2013, and the second one immediately. It also commented that the department was pleased that the report positively acknowledged that DHS (1) is meeting cost and schedule commitments for most of its major IT investments and (2) has plans and efforts under way to improve cost estimating capabilities and implement a center of excellence for requirements engineering. The department also provided technical comments, which we have incorporated where appropriate.

We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

The objectives of our review were to determine the (1) extent to which Department of Homeland Security (DHS) IT investments are meeting their cost and schedule commitments, (2) primary causes of any commitment shortfalls, and (3) adequacy of DHS's efforts to address these shortfalls and causes. To address our first objective, we analyzed how each of DHS's 68 major investments under development was performing against its cost and schedule commitments, as reported by the agency to the Office of Management and Budget (OMB) for inclusion on OMB's federal IT Dashboard. More specifically, we analyzed the extent to which each of these investments had met or exceeded, as of March 2012, cost and schedule commitments established when the investment was initiated. In doing this, we identified investments that had a project exceeding its cost or schedule commitments by 10 percent or more. We focused on these investments and their subsidiary projects because OMB considers them to be at an increased level of risk of not being able to deliver promised capabilities on time and within budget, and thus requiring special attention from management.

To assess the reliability of the IT Dashboard data we analyzed, we corroborated the data by interviewing investment and other DHS officials to determine whether the information on the dashboard was consistent with that reported by DHS. In addition, we followed up on the status of implementation of previous GAO recommendations to improve the quality of information on OMB's federal IT Dashboard. Specifically, we analyzed plans and related documentation describing efforts by DHS to increase the scrutiny and quality of data submitted to the IT Dashboard. As part of this, we also interviewed department officials, including those from the Office of the Chief Information Officer who are responsible for reviewing and submitting DHS's investment cost and schedule data to the federal IT Dashboard. The documentation and interviews provided us with a level of assurance that the data we used for this engagement were, in fact, reliable.
For our second objective, we used a structured interview instrument to survey the DHS and component officials responsible for the investments experiencing cost and schedule shortfalls in order to identify the causes of the shortfalls. As part of surveying these officials, we analyzed project and related documentation to corroborate the causes reported to us via the survey. We then analyzed these causes for commonalities, grouped them accordingly, and tallied the frequency of each cause by investment. In addition, we compared the causes to our prior reports on major DHS investments and related IT management processes to identify the extent to which we had made recommendations to address the causes associated with the department's investment cost and schedule shortfalls.

To address our third objective, we initially identified and reviewed relevant criteria on developing and documenting corrective actions to address investment shortfalls. Specifically, these criteria included DHS's Acquisition Directive 102 (AD-102), DHS's Capital Planning and Investment Control Guide, and GAO's Information Technology Investment Management guide. We then used a structured interview instrument to survey DHS and component officials responsible for those investments experiencing shortfalls; we used the survey to identify whether any corrective actions had been developed and documented to address investment shortfalls. We also reviewed investment planning and execution documentation (e.g., project plans, project status reports, program meeting minutes, and acquisition program baselines) to corroborate information provided by the officials during the survey process. We then compared these corrective efforts to the criteria to identify any gaps; in those cases where gaps existed, we reviewed documentation and interviewed agency officials to assess the reasons for the gaps and any negative impacts.

GAO, Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, GAO-04-394G (Washington, D.C.: Mar. 1, 2004).

We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.

Table 5 lists the DHS major IT investments that were meeting their cost and schedule commitments. Table 6 lists the DHS major IT investments with cost and/or schedule shortfalls, their costs and subsidiary projects, whether or not they had a cost and/or schedule shortfall, and their planned costs.

In addition to the contact named above, the following staff also made key contributions to this report: Gary Mountjoy (assistant director), Scott Borre, Camille Chaires, and Nancy Glover.
DHS has responsibility for the development and management of the IT systems for the 22 federal agencies and offices under its jurisdiction. Of its 363 IT investments, 68 are in development and are classified by DHS as "major" investments that require special management attention because of their mission importance. Given the size and significance of these investments, GAO was asked to determine the (1) extent to which DHS IT investments are meeting their cost and schedule commitments, (2) primary causes of any commitment shortfalls, and (3) adequacy of DHS's efforts to address these shortfalls and their associated causes. To address these objectives, GAO analyzed recent cost and schedule performance for DHS's major IT investments, as reported to OMB. To identify the primary cause(s) of any shortfalls and whether any corrective efforts were being taken to address them, GAO analyzed project plans and related documentation, interviewed responsible DHS officials, and compared the corrective efforts to applicable criteria to assess their adequacy.

Approximately two-thirds of the Department of Homeland Security's (DHS) major information technology (IT) investments are meeting their cost and schedule commitments (i.e., goals). Specifically, of 68 major IT investments in development, 47 were meeting cost and schedule commitments. The remaining 21—which total about $1 billion in spending—had one or more subsidiary projects that were not meeting cost and/or schedule commitments (i.e., they exceeded their goals by at least 10 percent, which is the level at which the Office of Management and Budget (OMB) considers projects to be at increased risk of not being able to deliver planned capabilities on time and within budget).

The primary causes for the cost and schedule shortfalls were (in descending order of frequency): inaccurate preliminary cost and schedule estimates, technology issues in the development phase, changes in agency priorities, lack of understanding of user requirements, and dependencies on other investments that had schedule shortfalls. Eight investments had inaccurate cost and schedule estimates. For example, DHS's Critical Infrastructure Technology investment had a project where actual costs were about 16 percent over the estimated cost, due in part to project staff not fully validating cost estimates before proceeding with the project. In addition, six investments had technical issues in the development phase that caused cost or schedule slippages. For example, DHS's Land Border Integration investment had problems with wireless interference at certain sites during deployment of handheld devices used for scanning license plates, which caused a project to be about 2.5 months late. In past work on DHS investments, GAO has identified some of the causes of DHS's shortfalls and made recommendations to strengthen management in these areas (e.g., cost estimating, requirements), and DHS has initiated efforts to implement the recommendations.

DHS often did not adequately address shortfalls and their causes. GAO's investment management framework calls for agencies to develop and document corrective efforts to address underperforming investments. DHS policy requires documented corrective efforts when investments experience cost or schedule variances. Although 12 of the 21 investments with shortfalls had defined and documented corrective efforts, the remaining 9 did not.
Officials responsible for 3 of the 9 investments said they had taken corrective actions but were unable to provide plans or any other related documentation showing that such actions had been taken. Officials for the other 6 investments cited criteria in DHS's policy that excluded their investments from the requirement to document corrective efforts. This practice is inconsistent with the direction of OMB guidance and related best practices, which stress developing and documenting corrective efforts to address problems in such circumstances. Until DHS addresses its guidance shortcomings and ensures that each of these underperforming investments has defined and documented corrective efforts, these investments are at risk of continued cost and schedule shortfalls.

GAO is recommending that the Secretary of Homeland Security direct the appropriate officials to address the guidance shortcomings and develop corrective actions for all major IT investment projects having cost and schedule shortfalls. In commenting on a draft of this report, DHS concurred with GAO's recommendations.
NEDCTP's mission is to deter and detect the introduction of explosive devices into U.S. transportation systems. As of February 2016, NEDCTP had deployed 787 of the 997 canine teams for which it has funding available in fiscal year 2016 across the nation's transportation systems. There are four types of LEO canine teams (aviation, mass transit, maritime, and multimodal) and two types of TSI canine teams (multimodal and PSC). Table 1 shows the number of canine teams by type for which funding is available, describes their roles and responsibilities, and lists the costs per team to TSA.

TSA's start-up costs for LEO teams include the costs of training the canine and handler and providing the handler's agency a stipend. The annual costs to TSA for LEO teams reflect the amount of the stipend. TSA's start-up and annual costs for TSI canine teams are greater than those for LEO teams because TSI handlers are TSA employees, and therefore the costs include the handlers' pay and benefits, service vehicles, and cell phones, among other things. PSC teams come at an increased cost to TSA compared with other TSI teams because of the additional 2 weeks of training and the costs associated with providing decoys (i.e., persons pretending to be passengers who walk around the airport with explosive training aids).

In fiscal year 2016, approximately $121.7 million of amounts appropriated to TSA was available for its canine program. For fiscal year 2017, TSA is requesting approximately $131.4 million, a $9.7 million increase over the prior fiscal year. According to a TSA official, the increase is for projected pay increases and 16 additional positions to support canine training and operations, among other things. Figure 1 shows LEO, TSI, and PSC teams performing searches in different environments.

Conventional canines undergo 15 weeks of explosives detection training, and PSCs 25 weeks, before being paired with a handler at TSA's Canine Training Center (CTC), located at Lackland Air Force Base. Conventional canine handlers attend a 10-week training course, and PSC handlers attend a 12-week training course; the 2 additional weeks are used to train PSC teams in actual work environments. Canines are paired with a LEO or TSI handler during the handler's training course. After canine teams complete this training and obtain initial certification, they acclimate to their home operating environment for a 30-day period. Upon completion of the acclimation period, CTC conducts a 3-day operational transitional assessment to ensure canine teams are not experiencing any performance challenges in their home operating environment.

After initial certification, canine teams are evaluated on an annual basis to maintain certification. During conventional explosives detection evaluations, canine teams must demonstrate their ability to detect all the explosive training aids the canines were trained to detect in five search areas (e.g., aircraft). The five search areas are randomly selected from all the possible types of search areas but, according to CTC, include the areas most relevant to the type of canine team. For example, teams assigned to airports will be evaluated in areas such as aircraft and cargo. Canine teams must find a certain percentage of the explosive training aids to pass their annual conventional evaluation. In addition, a specified number of nonproductive responses—when a canine responds to a location where no explosives odor is present—are allowed.
After passing the conventional evaluation, PSC teams are required to undergo an additional annual evaluation that includes detecting explosives on a person or being carried by a person. PSC teams are tested in different locations within the sterile areas and passenger screening checkpoints of an airport. A certain number of persons with explosive training aids must be detected, and a specified number of nonproductive responses are allowed, for PSC certification.

TSA has taken steps to enhance NEDCTP since we issued our 2013 report. For example, TSA has used data, such as the results of covert tests, to assess the proficiency and utilization of its canine teams. However, further opportunities exist for TSA to assess its program related to the use and cost of PSC teams.

In January 2013, we reported that TSA collected and used key canine program data in its Canine Website System (CWS), a central management database, but that it could better analyze these data to identify program trends. For example, we found that TSA did not analyze training minute data over time (from month to month) and therefore was unable to determine trends related to canine teams' compliance with the requirement to train 240 minutes each month. Similarly, TSA collected monthly data on the amount of cargo TSI teams screened in accordance with the agency's requirement, but had not analyzed these data over time to determine if, for example, changes were needed in the screening requirement or the number of teams deployed. Table 2 highlights some of the key data elements included in CWS at the time of our prior review.

In January 2013, we recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. These analyses could include, but not be limited to, analyzing and documenting trends in proficiency training minutes, canine utilization, results of short notice assessments (covert tests) and final canine responses, and performance differences between LEO and TSI canine teams, as well as an assessment of the optimum location and number of canine teams that should be deployed to secure the U.S. transportation system.

TSA concurred with our recommendation, and in June 2014 we reported on some of the steps it had taken to implement the recommendation. Specifically, TSA monitored canine teams' training minutes over time by producing annual reports. For example, TSA analyzed canine teams' compliance with the training requirement throughout fiscal year 2013 to identify teams repeatedly not in compliance with the monthly requirement. Field Canine Coordinators subsequently completed comprehensive assessment reviews for their canine teams, which involved reporting on the teams that did not meet the requirement. TSA also reinstated short notice assessments in July 2013, after having suspended them in May 2012. We reported that in the event a team fails a short notice assessment, the Field Canine Coordinator completes a report that includes an analysis of the team's training records to identify an explanation for the failure. According to TSA officials, in March 2014, NEDCTP stood up a new office, known as the Performance Measurement Section, to perform analyses of canine team data. Those actions, among others, addressed the intent of our recommendation by positioning TSA to identify program trends to better target resources and activities based on what is working well and what may need corrective action.
Therefore, we closed the recommendation as implemented in August 2014. Since we closed the recommendation, according to TSA officials, the agency has continued to take steps to enhance its canine program. For example, TSA eliminated the monthly 240-minute training requirement and instead requires canine teams to train on all explosives training aids they must be able to detect, in all search areas (e.g., aircraft), every 45 days. In April 2015, TSA also eliminated canine teams' requirement to screen a certain volume of air cargo. Instead, TSA requires TSI-led canine teams to spend at least 40 percent of their time on utilization activities, such as patrolling airport terminals and screening air cargo. Canine teams can spend the rest of their time on administrative activities, such as taking their canine to the veterinarian. Handlers record their daily activities in a web-based system, which allows TSA to assess how the canine teams are being used; a simplified sketch of such a roll-up appears at the end of this section. According to TSA, utilization time increased by 5 percent in fiscal year 2015 after the requirement changed.

In February 2016, TSA officials told us that starting in fiscal year 2016, TSA increased the number of short notice assessments required from two to five per year for each state and local law enforcement agency that participates in NEDCTP. According to a TSA official, the number was increased because TSA believes such assessments are helpful in determining the proficiency of canine teams. Furthermore, CTC placed 34 Regional Canine Training Instructors in the field to review canine teams' training records and assist them in resolving any performance challenges, such as challenges in detecting a particular explosive aid.

We also reported in January 2013 that TSA's 2012 Strategic Framework called for the deployment of PSC teams based on risk; however, airport stakeholder concerns about the appropriateness of TSA's protocols for resolving PSC team responses resulted in these teams not being deployed to the highest-risk airports or utilized for passenger screening. We recommended that TSA coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports, and ensure that deployed PSC teams are utilized as intended, consistent with the agency's statutory authority to provide for the screening of passengers and their property. TSA concurred with our recommendation, and in June 2014, we reported that the PSC teams for which TSA had funding, and which had not already been deployed to a specific airport at the time of our 2013 report, had since been deployed or allocated to the highest-risk airports. We also reported that, according to TSA officials, all but one of the airports where PSC teams had been deployed had agreed to allow TSA to conduct screening of individuals using PSC teams at passenger screening checkpoint queues. According to TSA, the agency was successful in deploying PSC teams to airports where they were previously declined by aviation stakeholders for various reasons. For example, TSA officials explained that stakeholders have realized that PSCs are an effective means of detecting explosives odor, and no checkpoints have closed because of a nonproductive response. In January 2015, we closed the recommendation as implemented after TSA deployed all remaining PSC teams (those which had previously been allocated) to the highest-risk airports and all PSC teams were being utilized for passenger screening.
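As a rough illustration of how daily activity logs could be rolled up against the 40 percent utilization requirement described above, the following is a minimal sketch. The log format, activity labels, and entries are hypothetical assumptions, not TSA's actual web-based system.

```python
# Illustrative sketch: checking canine teams' recorded time against the
# 40 percent utilization requirement described above. Log format and
# activity labels are hypothetical, not TSA's actual system.

from collections import defaultdict

UTILIZATION = {"patrol", "cargo_screening", "checkpoint_screening"}

def utilization_rates(log_entries):
    """log_entries: iterable of (team_id, activity, minutes) tuples.
    Returns each team's share of recorded time spent on utilization."""
    util = defaultdict(float)
    total = defaultdict(float)
    for team, activity, minutes in log_entries:
        total[team] += minutes
        if activity in UTILIZATION:
            util[team] += minutes
    return {team: util[team] / total[team] for team in total}

def below_requirement(log_entries, floor=0.40):
    """Teams whose utilization share falls below the required floor."""
    return {t: r for t, r in utilization_rates(log_entries).items() if r < floor}

# Hypothetical entries: team K9-1 meets the 40 percent floor, K9-2 does not.
log = [("K9-1", "patrol", 180), ("K9-1", "veterinarian", 120),
       ("K9-2", "cargo_screening", 60), ("K9-2", "admin", 240)]
print(below_requirement(log))  # {'K9-2': 0.2}
```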
Since we closed the recommendation, TSA has continued to allocate and deploy additional PSC teams for which it has received funding to the highest-risk airports, based on its assessment of the risks to particular airports. In addition, from November 2015 to January 2016, TSA relocated PSC teams from 7 lower-risk airports to higher-risk airports. As a result, TSA has PSC teams deployed at nearly all category X airports, which are generally higher-risk airports. According to TSA officials, all category X airports will have PSC teams by the end of calendar year 2016.

In our January 2013 report, we found that TSA began deploying PSC teams in April 2011, prior to determining the teams' operational effectiveness, and had not completed an assessment to determine where within the airport PSC teams would be most effectively utilized. In June 2012, the DHS Science and Technology Directorate (S&T) and TSA began conducting effectiveness assessments to help demonstrate the effectiveness of PSC teams, but the assessment did not cover all areas of the airport (i.e., the sterile area, passenger screening checkpoint, and public side of the airport). During the June 2012 assessment of PSC teams' effectiveness, TSA conducted one of the search exercises used for the assessment with three conventional canine teams. Although this exercise was not intended to be included as part of DHS S&T and TSA's formal assessment of PSC effectiveness, its results suggested, and TSA officials and DHS S&T's Canine Explosives Detection Project Manager agreed, that a systematic assessment with both PSCs and conventional canines could provide TSA with information to determine whether PSCs provide an enhanced security benefit compared with conventional LEO aviation canine teams that have already been deployed to airport terminals.

As a result, we recommended that TSA expand and complete testing, in conjunction with DHS S&T, to assess the effectiveness of PSCs and conventional canines in all airport areas deemed appropriate prior to making additional PSC deployments to help (1) determine whether PSCs are effective at screening passengers, and whether resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams within airports. TSA concurred, and we testified in June 2014 that, through its PSC Focused Training and Assessment Initiative—a two-cycle assessment to establish airport-specific optimal working areas, assess team performance, and train teams on best practices—TSA had determined that PSC teams are effective and should be deployed at the passenger checkpoint queue. Furthermore, in February 2014, TSA launched a third PSC assessment cycle to increase the amount of time canines can work and enhance their ability to detect explosives placed in areas more challenging to detect. Since our June 2014 testimony, TSA has continued to carry out the third assessment cycle; according to TSA officials, as of February 2016, 68 PSC teams have undergone the assessment. Additionally, TSA officials told us they began a fourth assessment cycle in January 2016 to test PSC teams and all other canine teams on threats identified through intelligence.
Although TSA has taken steps to determine whether PSC teams are effective and where in the airport environment to optimally deploy such teams, TSA has not compared the effectiveness of PSCs and conventional canines in order to determine whether the greater cost of training canines in the passenger screening method is warranted. In June 2014, we reported that TSA did not plan to include conventional canine teams in PSC assessments because conventional canines have not been through the process used with PSCs to assess their temperament and behavior when working in proximity to people. We acknowledged TSA's position that half of deployed conventional canines are of a breed not accepted for use in the PSC program, but noted that other conventional canines are suitable breeds and have been paired with LEO aviation handlers working in proximity to people, since they patrol airport terminals, including ticket counters and curbside areas. In December 2014, TSA reported that it did not intend to include conventional canine teams in PSC assessments and cited concerns about the liability of operating conventional canines in an unfamiliar passenger screening environment. In January 2015, we closed the recommendation as not implemented, reiterating that conventional canines paired with LEO handlers work in close proximity to people since, like PSCs, they also patrol airport terminals.

Consistent with our recommendation, we continue to believe that opportunities exist for TSA to conduct an assessment to determine whether conventional canines are as effective as PSCs at detecting explosives odor on passengers when working in specific areas, such as the passenger checkpoint queue. If such an assessment were to indicate that conventional canines are as effective at detecting explosives odor on passengers as PSCs, then limiting proficiency training requirements of PSCs to those that currently apply to conventional canine teams could save TSA costs associated with maintaining PSC teams. Also, as we reported in January 2013, TSA was considering providing some PSCs to LEOs to work on the public side of the airport. Should TSA determine that the additional investment for PSCs is warranted, it could reduce the agency's program costs if it deployed PSCs with LEO handlers rather than TSI handlers. Specifically, TSA could save approximately $100,000 per team each year, as a PSC team led by a LEO handler would cost TSA about $54,000 annually (the amount of the stipend), compared with about $154,000, the annual cost per TSI-led PSC team (see table 1).

Chairman Johnson, Ranking Member Carper, and Members of the committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time.

For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Ferencik (Assistant Director), Chuck Bausell, Lisa Canini, Michele Fejfar, Eric Hauswirth, Susan Hsu, Richard Hung, Brendan Kretzschmar, Thomas Lombardi, and Ben Nelson. Key contributors for the previous work that this testimony is based on are listed in those products.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
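As a rough check on the potential savings figure cited in the statement above, a minimal sketch of the arithmetic follows. The per-team annual costs are the figures from the statement; the fleet size in the second call is a hypothetical input, not a TSA figure.

```python
# Illustrative arithmetic for the per-team savings estimate cited above.
# Per-team annual costs are from the statement; the 25-team fleet size
# below is a hypothetical example, not a TSA figure.

TSI_LED_PSC_ANNUAL = 154_000   # approximate annual cost per TSI-led PSC team
LEO_LED_PSC_ANNUAL = 54_000    # approximate annual stipend per LEO-led team

def annual_savings(teams):
    """Estimated yearly savings if `teams` PSC teams were LEO-led."""
    return teams * (TSI_LED_PSC_ANNUAL - LEO_LED_PSC_ANNUAL)

print(annual_savings(1))    # 100000 -> about $100,000 per team, as cited
print(annual_savings(25))   # 2500000 -> hypothetical 25-team example
```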
TSA has implemented a multilayered system composed of people, processes, and technology to protect the nation's transportation systems. One of TSA's security layers is composed of nearly 800 deployed explosives detection canine teams—a canine paired with a handler. These teams include PSC teams trained to detect explosives on passengers and conventional canines trained to detect explosives in objects, such as cargo. In January 2013, GAO issued a report on TSA's explosives detection canine program. This testimony addresses the steps TSA has taken since 2013 to enhance its canine program and further opportunities to assess the program. This statement is based on GAO's January 2013 report, a June 2014 testimony, and selected updates conducted in February 2016 on canine training and operations. The products cited in this statement provide detailed information on GAO's scope and methodology. For the selected updates, GAO reviewed the president's fiscal year 2017 budget request for TSA and interviewed TSA officials on changes made to NEDCTP since June 2014, the last time GAO reported on the program.

The Transportation Security Administration (TSA) has taken steps to enhance its National Explosives Detection Canine Team Program (NEDCTP) since GAO's 2013 report, but further opportunities exist for TSA to assess its canine program and potentially reduce costs.

TSA Uses Data to Assess Canine Team Proficiency and Utilization: In January 2013, GAO reported that TSA needed to take actions to analyze NEDCTP data and ensure canine teams are effectively utilized. GAO recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. TSA concurred, and in June 2014, GAO reported that the agency had taken actions that address the recommendation. GAO subsequently closed the recommendation as implemented in August 2014. Since then, according to TSA officials, the agency has continued to enhance its canine program. For example, TSA reported that it requires canine teams to train on all explosives training aids they must be able to detect—any explosive used to test and train a canine—in all search areas (e.g., aircraft), every 45 days.

TSA Has Deployed PSC Teams to the Highest-Risk Airports: GAO found in January 2013 that passenger screening canine (PSC) teams were not being deployed to the highest-risk airports as called for in TSA's 2012 Strategic Framework or utilized for passenger screening. GAO recommended that TSA coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports and ensure that deployed teams were utilized as intended. TSA concurred, and in June 2014, GAO reported that PSC teams had been deployed or allocated to the highest-risk airports. In January 2015, GAO closed the recommendation as implemented after TSA deployed all remaining PSC teams to the highest-risk airports and all teams were being utilized for passenger screening.

Opportunities May Exist for TSA to Reduce Canine Program Costs: GAO reported in 2013 that TSA began deploying PSC teams prior to determining their operational effectiveness and identifying where within the airport these teams would be most effectively utilized. GAO recommended that TSA take actions to comprehensively assess the effectiveness of PSCs. TSA concurred and has taken steps to determine the effectiveness of PSC teams and where in the airport to optimally deploy such teams.
However, TSA did not compare the effectiveness of PSCs and conventional canines in detecting explosives odor on passengers to determine if the greater cost of training PSCs is warranted. In December 2014, TSA reported that it did not intend to do this assessment because of the liability of using conventional canines to screen persons when they had not been trained to do so. GAO closed the recommendation as not implemented, stating that conventional canines currently work in close proximity to people as they patrol airport terminals, including ticket counters and curbside areas. GAO continues to believe that opportunities may exist for TSA to reduce costs if conventional canines are found to be as effective at detecting explosives odor on passengers as PSCs.

GAO is making no new recommendations in this statement.
In our earlier work on studies comparing federal and non-federal pay, we noted how the composition of the federal workforce has changed over the past 30 years, with the need for clerical and blue collar roles diminishing and the need for professional, administrative, and technical roles increasing. Today's federal jobs require more advanced skills at higher grade levels than federal jobs in years past. As a result, a key management challenge facing the federal government in an era of fiscal austerity is balancing the size and composition of the federal workforce so it is able to deliver the high quality services that taxpayers demand, within the budgetary realities of what the nation can afford.

As we have previously stated, inadequate planning prior to personnel reductions jeopardizes the ability of agencies to carry out their missions. For example, in the wake of extensive federal downsizing in the 1990s—done largely without adequate planning or sufficient consideration of the strategic consequences—agencies faced challenges deploying the right skills when and where they were needed. More recently, this management challenge has been exacerbated by the fact that today's federal workforce consists of a large number of employees who are eligible for retirement. Various factors affect when individuals actually retire. Some amount of retirement and other forms of attrition can be beneficial because it creates opportunities to bring fresh skills on board and allows organizations to restructure themselves in order to better meet program goals and fiscal realities. But if turnover is not strategically managed and monitored, gaps can develop in an organization's institutional knowledge and leadership as experienced employees leave. We have previously reported that the needs and missions of individual agencies should determine their approach to workforce planning (GAO-09-632T).

Our prior work has shown that strategic human capital management has been a pervasive challenge facing the federal government and has led to government-wide and agency-specific skills gaps. Our February 2011 update to our high risk list noted that federal strategic human capital management was a high risk area because current and emerging mission critical skills gaps were undermining agencies' abilities to meet their vital missions. To help close these skills gaps, we reported that actions were needed in three broad areas: planning, to identify the causes of, and solutions for, skills gaps and to identify the steps to implement those solutions; implementation, to put in place corrective actions to narrow skills gaps through talent management and other strategies; and measurement and evaluation, to assess the performance of initiatives to close skills gaps.

Since our February 2011 update, OPM, individual agencies, and Congress have taken a number of steps to close mission critical skills gaps, but as we noted in our 2013 High Risk update, additional actions were needed, as our work found that skills gaps were continuing in such areas as cybersecurity, acquisition management, and aviation safety, among others. These actions included reviewing the extent to which new capabilities were needed in order to give OPM and other agencies greater visibility over government-wide skills gaps so that agencies could take a more coordinated approach to remediating them. OPM agreed that these were important areas for consideration.
Since our 2011 High Risk update, OPM's efforts to address mission critical skills gaps have included establishing the Chief Human Capital Officers Council Working Group to identify and mitigate critical skills gaps for both government-wide and agency-specific occupations and competencies. Moreover, the Working Group's efforts were designated a cross-agency priority goal within the administration's fiscal year 2013 federal budget. In addition, OPM is partnering with the Chief Human Capital Officers Council to create a government-wide Human Resources Information Technology strategy that can provide greater visibility to OPM and agencies regarding current and emerging skills gaps.

From 2004 to 2012, the non-postal civilian workforce grew from 1.88 million to 2.13 million, an increase of 14 percent, or 258,882 individuals. Most of the total increase (94 percent) was from 2007 through 2012. The number of permanent career executive branch employees grew by 256,718, from about 1.7 million in 2004 to 1.96 million in 2012 (an increase of 15 percent). Of the 24 CFO Act agencies, 13 had more permanent career employees in 2012 than they did in 2004, 10 had fewer, and one agency was unchanged. Three agencies (DOD, DHS, and VA) accounted for 94 percent of the growth between 2004 and 2012; these three agencies employed 62 percent of all executive branch permanent career employees in 2012.

A number of factors contributed to the overall growth of the civilian workforce. At DOD, according to agency officials, converting certain positions from military to civilian, as well as the growth of the agency's acquisition and cybersecurity workforce, contributed to this overall increase. At VA, according to agency officials, approximately 80 percent of employees hired from 2004 through 2012 were hired by the Veterans Health Administration (VHA), primarily to meet increased demand for medical and health-related services for military veterans. At DHS, the increase in civilian permanent career employment was due to increased staffing to secure the nation's borders.

Employees in professional or administrative positions account for most of the overall increase in federal civilian employment. For example, the number of employees working in professional positions increased by 97,328 (from 394,981 in 2004 to 492,309 in 2012). This growth accounts for nearly 38 percent of the 256,718 total government-wide increase in permanent career employees during this period. In comparison, employees in administrative positions increased by 153,914 (from 582,509 in 2004 to 736,423 in 2012). This growth accounts for 60 percent of the total government-wide increase during this period. Technical, clerical, blue collar, and other white collar positions accounted for the remaining 2 percent of those full-time permanent positions added from 2004 to 2012.

The retirement rate of federal civilian employees rose from 3.2 percent in 2004 to a high of 3.6 percent in 2007 when, according to data from the National Bureau of Economic Research, the recession began. During the recession, the total attrition rate dropped to a low of 2.5 percent in 2009 before rebounding to pre-recession levels in 2011 and 2012. Beginning at the end of 2007, retirement rates declined to 3.3 percent in 2008, 2.5 percent in 2009, and 2.7 percent in 2010, before increasing again to 3.5 percent in 2012.
With respect to retirement eligibility, of the 1.96 million permanent career employees on board as of September 2012, nearly 270,000 (14 percent) were eligible to retire. By September 2017, nearly 600,000 (31 percent) of on-board staff will be eligible to retire. Not all agencies will be equally affected. By 2017, 20 of the 24 CFO Act agencies will have a higher percentage of staff eligible to retire than the current overall average of 31 percent. About 21 percent of DHS staff on board as of September 2012 will be eligible to retire in 2017, while over 42 percent will be eligible to retire at both the Department of Housing and Urban Development (HUD) and the Small Business Administration (SBA). Certain occupations—such as air traffic controllers and those involved in program management—will also have particularly high retirement eligibility rates by 2017.

With respect to pay and benefits as measured by each full-time equivalent (FTE) position, total government-wide compensation grew by an average of 1.2 percent per year from 2004 to 2012 ($106,097 to $116,828—about a 10 percent overall increase). Much of this growth was driven by the increased cost of personnel benefits, which rose at a rate of 1.9 percent per year (a 16.3 percent increase overall). According to OMB, the government's contribution to the Federal Employee Health Benefits (FEHB) program rose, on average, 5.2 percent from 2004 to 2011 and 4.7 percent from 2011 to 2012. One study showed that employer contributions for premiums for family insurance coverage nationwide grew by about 58 percent from 2004 through 2012, for an average annual increase of around 5 percent. In terms of employee pay per FTE, spending rose at an average annual rate of 1 percent per year (a 7.9 percent increase overall).

While government-wide spending on pay and benefits rose slightly, some agencies had significant increases in their spending on compensation per FTE. For example, the Department of State's spending on pay and benefits per FTE increased by 4.5 percent per year, on average, from 2004 through 2012. In total, government-wide spending on pay and benefits increased by $51 billion, from $193.2 billion to $244.3 billion (an average annual increase of 3 percent and an overall increase of 26.4 percent) from 2004 to 2012.

Spending on pay and benefits as a proportion of the federal discretionary budget remained relatively constant (at about 14 percent) from 2004 to 2010, with slight increases in 2011 and 2012. Specifically, the proportion spent on pay increased by 0.6 percent and the proportion spent on benefits increased by 0.5 percent from 2004 to 2012. According to OMB, a portion of this increase can be attributed to an increase in the growth in federal civilian employment at certain agencies, locality pay adjustments, across-the-board pay increases, and (as previously stated) increases in the government's share of FEHB program premiums. Government-wide, while the proportion of the discretionary budget spent on compensation remained constant, certain agencies had increases from 2004 to 2012. Three agencies—DOD, DHS, and VA—accounted for 77 percent of the total government-wide increase in compensation from 2004 to 2012, largely due to increased hiring. DOD increased its spending on compensation by $19.9 billion (about 39 percent of the total increase), VA increased its spending by $10.5 billion (about 21 percent of the total increase), and DHS increased its spending by $8.8 billion (about 17 percent of the total increase).
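The growth rates above are consistent with compound annual growth over the eight years from 2004 to 2012. A minimal sketch of that arithmetic, using only the per-FTE and total spending figures cited in this report:

```python
# Illustrative check of the average annual growth rates cited above,
# treating them as compound annual growth rates over 2004-2012 (8 years).

def cagr(start, end, years):
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

comp_2004, comp_2012 = 106_097, 116_828    # compensation per FTE
print(f"{cagr(comp_2004, comp_2012, 8):.1%}")    # 1.2% -> matches the report
print(f"{comp_2012 / comp_2004 - 1:.1%}")        # 10.1% -> 'about a 10 percent overall increase'

total_2004, total_2012 = 193.2e9, 244.3e9  # total pay and benefits spending
print(f"{total_2012 / total_2004 - 1:.1%}")      # 26.4% -> overall increase cited
print(f"{cagr(total_2004, total_2012, 8):.1%}")  # 3.0% -> 'average annual increase of 3 percent'
```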
With respect to occupational categories, 48 percent of the overall increase in spending on pay from 2004 to 2012 was due to more employees working in professional or administrative positions, which often require specialized knowledge and advanced skills and degrees, and thus, higher pay. Specifically, the percentage of those employees grew from 56 percent of the federal civilian workforce in 2004 to 62 percent in 2012. Even if there had been no change in pay for the occupations, the changing composition of the federal workforce alone would have caused average federal pay to increase from $70,775 in 2004 to $73,229 in 2012, as opposed to the actual 2012 average of $75,947. Appendix I provides more detail on each of our objectives and related findings.

While the size of the civilian federal workforce grew moderately during the period of our study, most of this growth was concentrated in a few large agencies and reflects some of our nation's pressing priorities. The cost of compensating the civilian workforce has remained relatively constant as a percentage of the discretionary budget during the past decade; however, nearly half of the increased pay and benefits costs can be attributed to a shift toward more employees serving in professional and administrative capacities, in jobs that require specialized knowledge and higher levels of education. Although employment levels have grown, large numbers of retirement-eligible employees may be cause for concern among agencies, decision-makers, and other stakeholders, because they could produce mission critical skills gaps if turnover is not strategically managed and monitored. Replacing retiring workers, both in terms of training and hiring costs and in terms of the largely unquantifiable costs of losing experienced, high-level employees, could be problematic given the era of flat or declining budgets that the government is experiencing. At the same time, retirement-eligible employees present an opportunity for agencies to align their workforces with current and future mission needs. Indeed, as the federal government faces an array of current and future challenges, agencies will be confronted with going beyond simply replacing retiring individuals by engaging in broad, integrated planning and management efforts that will bolster their ability to meet both current and evolving mission requirements.

Combined, these challenges underscore the importance of strategic workforce planning and early preparation to help ensure agencies maintain their capacity to carry out their vital functions. Thus, as we have reported in our prior work, agencies should (1) take such key steps as determining the critical skills and competencies that will be needed to achieve current and future programmatic results; (2) develop appropriate talent management strategies to address any gaps in the number, deployment, and alignment of skills; and (3) monitor and evaluate their progress toward their human capital goals. In short, understanding the dynamics of the federal workforce and the drivers of agencies' compensation costs will help guide decision-making on workforce composition and budgeting.

We provided a draft of this report to the Director of OMB and the Director of OPM for their review and comment. In addition, we provided sections of this report to DOD, DHS, and VA. GAO received technical comments on a draft of this report from OMB, OPM, DOD, DHS, and VA, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to OMB, OPM, DOD, DHS, VA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To analyze workforce and turnover trends, we used OPM's Enterprise Human Resources Integration Statistical Data Mart (EHRI-SDM), which contains personnel action and on-board data for most federal civilian employees. We analyzed agency-level EHRI data for the 24 Chief Financial Officers (CFO) Act agencies, which represent the major departments (such as the Department of Defense) and most of the executive branch workforce. We analyzed EHRI data starting with fiscal year 2004 because personnel data for DHS (which was formed in 2003 with a mix of new hires and transfers from other agencies) had stabilized by 2004. We selected 2012 as the endpoint because it was the most recent complete fiscal year of data available during most of our review. We analyzed on-board trends for most of the executive branch workforce, including temporary and term-limited employees. However, we focused on career permanent employees in our analysis of separation trends, retirement eligibility, and changes in occupational categories and education levels because these employees comprise most of the federal workforce and become eligible to retire with a pension, for which temporary and term-limited employees are ineligible. To calculate the number of federal civilian employees, we included all on-board staff, regardless of their pay status. In addition, we excluded foreign service workers at the State Department because those employees were not included in OPM data for the years after 2004.

We examined on-board, attrition, and retirement eligibility trends by agency, occupation, and education level. Occupational categories include Professional, Administrative, Technical, Clerical, Blue Collar, and Other white-collar (PATCO) groupings and are defined by the educational requirements of the occupation and the subject matter and level of difficulty or responsibility of the work assigned. Occupations within each category are defined by OPM, and education levels are defined by OPM as the extent of an employee's educational attainment from an accredited institution. We grouped education levels to reflect categories of degree attainment, such as a bachelor's or advanced degree.

To calculate attrition rates, we added the number of career permanent employees with personnel actions indicating they had separated from federal service (for example, resignations, retirements, terminations, and deaths) and divided that by the 2-year on-board average (a minimal sketch of this computation appears below). To calculate retirement eligibility for the next 5 years, we computed the date at which the employee would be eligible for voluntary retirement at an unreduced annuity, using age at hire, years of service, birth date, and retirement plan coverage. We assessed the reliability of the EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies.
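To illustrate the attrition-rate computation just described (separations divided by the 2-year on-board average), the following is a minimal sketch. It is not GAO's actual analysis code, and the separation actions and on-board counts are hypothetical figures, not EHRI-SDM data.

```python
# Illustrative sketch of the attrition-rate computation described above:
# separations divided by the 2-year average of on-board staff.
# All figures are hypothetical, not actual EHRI-SDM data.

SEPARATION_ACTIONS = {"resignation", "retirement", "termination", "death"}

def attrition_rate(personnel_actions, onboard_start, onboard_end):
    """personnel_actions: iterable of action strings for one fiscal year.
    onboard_start/onboard_end: on-board counts bracketing that year."""
    separations = sum(1 for a in personnel_actions if a in SEPARATION_ACTIONS)
    two_year_avg = (onboard_start + onboard_end) / 2
    return separations / two_year_avg

# Hypothetical year: 60,000 separations against roughly 1.96 million on board.
actions = ["retirement"] * 35_000 + ["resignation"] * 25_000
print(f"{attrition_rate(actions, 1_950_000, 1_970_000):.1%}")  # ~3.1%
```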
We also reviewed our prior work assessing the reliability of these data and interviewed OPM officials knowledgeable about the data to discuss the data's accuracy and the steps OPM takes to ensure reliability. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of this report.

To assess the extent to which federal civilian employee compensation has changed as a percentage of total discretionary spending from fiscal year 2004 through 2012, we analyzed discretionary outlays from OMB's MAX Information System, which captures compensation costs as gross obligations, hereafter referred to as "spending." We analyzed spending on employee compensation as a ratio of federal discretionary spending (as opposed to other baseline measures, such as total federal spending or gross domestic product) because discretionary spending—that is, spending that is decided upon by Congress each fiscal year through annual appropriations acts—includes personnel costs as well as other operational and program expenses (such as equipment and contracts) that agencies incur to carry out their missions. As a result, the ratio of compensation to discretionary spending enabled us to compare personnel costs to other agency spending. Moreover, using discretionary spending as a baseline allowed us to present this information for both the entire federal government and individual agencies.

Obligations data are reported in object classes, which are categories that present obligations by the type of expenditure. We analyzed the object class "personnel compensation and benefits" for executive branch agencies in our analysis. Because OMB does not distinguish between mandatory and discretionary spending categories when reporting on budget obligations, we used outlays as a proxy for pay and benefits obligations. According to a senior OMB official, this approach is appropriate for pay and benefits spending categories because most (or all) of the budget authority for these categories is obligated in the same year that it is authorized, resulting in similar numbers between outlays and obligations. We analyzed pay and benefits per full-time equivalent (FTE) based on the MAX database designation for FTEs.

To assess the reliability of the MAX data, we performed electronic testing and cross-checked the data against the numbers reported in the President's Budget. In addition, we interviewed OMB officials to understand any discrepancies in the data. For example, we met with OMB officials and provided them our initial results to determine whether we were accurately representing spending on pay and benefits. Based on these discussions, we made adjustments to our scope and methodology, as appropriate. Based on our assessment, we believe these data are sufficiently reliable for the purpose of this report.

To determine the factors contributing to workforce, turnover, and compensation trends in the civilian workforce from 2004 to 2012, we interviewed officials at the Office of Personnel Management (OPM), Office of Management and Budget (OMB), Department of Defense (DOD), Department of Veterans Affairs (VA), and Department of Homeland Security (DHS).

We conducted this performance audit from June 2012 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Robert Goldenkoff, (202) 512-2757 or [email protected].

Robert Goldenkoff (Director), Trina Lewis (Assistant Director), and Chelsa Gurkin (Assistant Director) managed this assignment. Jeffrey Schmerling (Analyst-in-Charge) and Wesley Sholtes (Analyst) made key contributions to all aspects of the work. Ben Bolitzer, Sara Daleski, and John Mingus provided assistance with data analysis. Karin Fangman and Sabrina Streagle provided legal support; Robert Gebhart provided key assistance with message development and writing. Robert Robinson provided key assistance with graphics, and Rebecca Shea provided methodological assistance.
Skilled federal workers are critical to the successful operation of government. At the same time, personnel costs for current and former federal civilian employees represented about 26 percent of total discretionary spending in 2012; these personnel costs are outlays from budget authority authorized by appropriations acts. Given the need to control agencies' personnel costs while also maintaining agencies' high performance, a thorough understanding of employment and compensation trends is a critical component of strategic workforce planning.

GAO was asked to provide data on federal employment and compensation trends. This report examines (1) employment trends of federal civilian personnel from 2004 to 2012 and some factors that affect these trends, and (2) the extent to which federal civilian employee compensation has changed (as a percentage of total discretionary spending) and some reasons for this change. For this report, GAO analyzed government-wide executive branch civilian personnel data from 2004 to 2012. GAO also interviewed Office of Personnel Management (OPM), Office of Management and Budget (OMB), and other selected agency officials. GAO also reviewed relevant literature, such as studies on attrition. GAO is not making any recommendations in this report. GAO received technical comments on a draft of this report from OMB, OPM, and the Departments of Defense, Homeland Security, and Veterans Affairs; comments were incorporated as appropriate.

From 2004 to 2012, the federal non-postal civilian workforce grew by 258,882 employees, from 1.88 million to 2.13 million (14 percent). Permanent career employees accounted for most of the growth, increasing by 256,718 employees, from 1.7 million in 2004 to 1.96 million in 2012 (15 percent). Three agencies--the Departments of Defense (DOD), Homeland Security (DHS), and Veterans Affairs (VA)--accounted for about 94 percent of this increase. At DOD, officials said that converting certain positions from military to civilian, as well as the growth of the agency's acquisition and cybersecurity workforce, contributed to this overall increase. At VA, officials said the increased demand for medical and health-related services for military veterans drove most of the growth in personnel levels. DHS officials said the increase in employment was due in large part to the nation's border security requirements. (In contrast, 10 agencies had fewer career permanent employees in 2012 than they did in 2004.)

Government-wide, most of the increase in employment from 2004 to 2012 occurred within occupational categories that require higher skill and educational levels. These categories include professional occupations (e.g., doctors and scientists) and administrative occupations (e.g., financial and program managers), as opposed to clerical, technical, and blue collar occupations (which remained stable). In terms of turnover, retirement rates remained relatively flat (at around 3.5 percent) from 2004 until the start of the recession in December 2007. Retirement rates fell to a low of around 2.5 percent during the recession in 2009, and then increased to pre-recession rates in 2011 and 2012. With respect to retirement eligibility, of the 1.96 million permanent career employees on board as of September 2012, nearly 270,000 (14 percent) were eligible to retire. By September 2017, nearly 600,000 (around 31 percent) will be eligible to retire, government-wide.
Spending on total government-wide compensation for each full-time equivalent (FTE) position grew by an average of 1.2 percent per year, from $106,097 in 2004 to $116,828 in 2012. Much of this growth was driven by increased personnel benefits costs, which rose at a rate of 1.9 percent per year. Other factors included locality pay adjustments, as well as a change in the composition of the federal workforce (with a larger share of employees working in professional or administrative positions, requiring advanced skills and degrees). In terms of employee pay per FTE, spending rose at an average annual rate of 1 percent per year (a 7.9 percent increase overall). However, as a proportion of government-wide federal discretionary spending, spending on compensation remained constant from 2004 to 2010 (at 14 percent), with slight increases in 2011 and 2012.

While the federal civilian workforce grew in size from 2004 to 2012, most of the growth was concentrated in three federal agencies and was driven by the need to address some of the nation's pressing priorities. At the same time--as GAO reported in February 2013--large numbers of retirement-eligible employees in the years ahead may be cause for concern: their retirement could produce mission critical skills gaps if left unaddressed. As GAO reported in its February 2013 High Risk update, strategic human capital planning that is integrated with broader organizational strategic planning will be essential for ensuring that, going forward, agencies have the talent, skill, and experience mix they need to cost-effectively execute their mission and program goals.
From May 2003 through June 2004, the CPA, led by the United States and the United Kingdom, was the UN-recognized coalition authority responsible for the temporary governance of Iraq and for overseeing, directing, and coordinating the reconstruction effort. In May 2003, the CPA dissolved the military organizations of the former regime and began the process of creating or reestablishing new Iraqi security forces, including the police and a new Iraqi army. Over time, multinational force commanders assumed responsibility for recruiting and training some Iraqi defense and police forces in their areas of responsibility.

In May 2004, the President issued a National Security Presidential Directive, which stated that, after the transition of power to the Iraqi government, the Department of State (State), through its ambassador to Iraq, would be responsible for all U.S. activities in Iraq except for security and military operations, which would be the responsibility of the Department of Defense (DOD). The Presidential Directive required the U.S. Central Command (CENTCOM) to direct all U.S. government efforts to organize, equip, and train Iraqi security forces. The Multi-National Security Transition Command-Iraq, which operates under Multi-National Force-Iraq (MNF-I), now leads coalition efforts to train, equip, and organize Iraqi security forces.

Other U.S. government agencies also play significant roles in the reconstruction effort. The U.S. Agency for International Development (USAID) is responsible for projects to restore Iraq's infrastructure, support healthcare and education initiatives, expand economic opportunities for Iraqis, and foster improved governance. The U.S. Army Corps of Engineers provides engineering and technical services to USAID, State, and military forces in Iraq. In December 2005, the responsibilities of the Project Contracting Office (PCO), a temporary organization responsible for program, project, asset, and financial management of construction and nonconstruction activities, were merged with those of the U.S. Army Corps of Engineers Gulf Region Division.

On June 28, 2004, the CPA transferred power to an interim sovereign Iraqi government, the CPA was officially dissolved, and Iraq's transitional period began. Under Iraq's transitional law, the transitional period included the completion of a draft constitution in October 2005 and two subsequent elections—a referendum on the constitution and an election for a permanent government. The Iraqi people approved the constitution on October 15, 2005, and voted for representatives to the Iraq Council of Representatives on December 15, 2005. As of February 3, 2006, the Independent Electoral Commission of Iraq had not certified the election results for representatives; once the results are certified, the representatives are to form a permanent government.

According to U.S. officials and Iraqi constitutional experts, the new Iraqi government is likely to confront the same issues it confronted prior to the referendum—the power of the central government, control of Iraq's natural resources, and the application of Islamic law. According to U.S. officials, once the Iraqi legislature commences work, it will form a committee that has 4 months to recommend amendments to the constitution. To take effect, these proposed amendments must be approved by the Iraqi legislature, and then Iraqi citizens must vote on them in a referendum within 2 months.
The United States faces three key challenges in stabilizing and rebuilding Iraq. First, the unstable security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and to engage in rebuilding efforts. Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. Third, the U.S. reconstruction program has encountered difficulties with Iraq's inability to sustain new and rehabilitated infrastructure projects and to address maintenance needs in the water, sanitation, and electricity sectors. U.S. agencies are working to develop better performance data and plans for sustaining rehabilitated infrastructure.

Over the past 2½ years, significant increases in attacks against the coalition and coalition partners have made it difficult to transfer security responsibilities to Iraqi forces and to engage in rebuilding efforts in Iraq. The insurgency in Iraq intensified through October 2005 and has remained strong since then. Poor security conditions have delayed the transfer of security responsibilities to Iraqi forces and the drawdown of U.S. forces in Iraq. The unstable security environment has also affected the cost and schedule of rebuilding efforts and has led, in part, to project delays and increased costs for security services. Recently, the administration has taken actions to integrate military and civilian rebuilding and stabilization efforts.

The insurgency intensified through October 2005 and has remained strong since then. As we reported in March 2005, the insurgency in Iraq—particularly the Sunni insurgency—grew in complexity, intensity, and lethality from June 2003 through early 2005. According to a February 2006 testimony by the Director of National Intelligence, insurgents are using increasingly lethal improvised explosive devices and continue to adapt to coalition countermeasures. As shown in figure 1, enemy-initiated attacks against the coalition, its Iraqi partners, and infrastructure increased in number over time. The highest peak occurred during October 2005, around the time of Ramadan and the October referendum on Iraq's constitution. This followed earlier peaks in August and November 2004 and January 2005. According to a senior U.S. military officer, attack levels ebb and flow as the various insurgent groups—almost all of which are an intrinsic part of Iraq's population—rearm and attack again.

As the administration has reported, insurgents share the goal of expelling the coalition from Iraq and destabilizing the Iraqi government to pursue their individual and, at times, conflicting goals. Iraqi Sunnis make up the largest portion of the insurgency and present the most significant threat to stability in Iraq. In February 2006, the Director of National Intelligence reported that the Iraqi Sunnis' disaffection is likely to remain high in 2006, even if a broad, inclusive national government emerges. These insurgents continue to demonstrate the ability to recruit, supply, and attack coalition and Iraqi security forces. Their leaders continue to exploit Islamic themes, nationalism, and personal grievances to fuel opposition to the government and recruit more fighters. According to the Director, the most extreme Sunni jihadists, such as al-Qaeda in Iraq, will remain unreconciled and continue to attack Iraqi and coalition forces.
The remainder of the insurgency consists of radical Shia groups, some of whom are supported by Iran; violent extremists; criminals; and, to a lesser degree, foreign fighters. According to the Director of National Intelligence, Iran provides guidance and training to select Iraqi Shia political groups and weapons and training to Shia militant groups to enable anticoalition attacks. Iran also has contributed to the increasing lethality of anticoalition attacks by enabling Shia militants to build improvised explosive devices with explosively formed projectiles, similar to those developed by Iran and Lebanese Hizballah. The continuing strength of the insurgency has made it difficult for the multinational force to develop effective and loyal Iraqi security forces, transfer security responsibilities to them, and progressively draw down U.S. forces in Iraq. The Secretary of Defense and MNF-I recently reported progress in developing Iraqi security forces, saying that these forces continue to grow in number, take on more responsibilities, and increase their lead in counterinsurgency operations in some parts of Iraq. For example, in December 2005 and January 2006, MNF-I reported that Iraqi army battalions and brigades had assumed control of battle space in parts of Ninewa, Qadisiyah, Babil, and Wasit provinces. According to the Director of National Intelligence, Iraqi security forces are taking on more demanding missions, making incremental progress toward operational independence, and becoming more capable of providing security. In the meantime, coalition forces continue to support and assist the majority of Iraqi security forces as they develop the capability to operate independently. However, recent reports have recognized limitations in the effectiveness of Iraqi security forces. For example, DOD's October 2005 report notes that Iraqi forces will not be able to operate independently for some time because they need logistical capabilities, ministry capacity, and command and control and intelligence structures. In the November 2005 National Strategy for Victory in Iraq, the administration cited a number of challenges to developing effective Iraqi security forces, including the need to guard against infiltration by elements whose first loyalties are to institutions other than the Iraqi government and to address the militias and armed groups that are outside the formal security sector and government control. Moreover, according to the Director of National Intelligence's February 2006 report, Iraqi security forces are experiencing difficulty in managing ethnic and sectarian divisions among their units and personnel. GAO's classified report on Iraq's security situation provided further information and analysis on the challenges to developing Iraqi security forces and the conditions for the phased drawdown of U.S. and other coalition forces. The security situation in Iraq has affected the cost and schedule of reconstruction efforts. Security conditions have, in part, led to project delays and increased costs for security services. Although it is difficult to quantify the costs and delays resulting from poor security conditions, both agency and contractor officials acknowledged that security costs have diverted a considerable amount of reconstruction resources and have led to canceling or reducing the scope of some reconstruction projects.
For example, in March 2005, USAID cancelled two task orders related to power generation that totaled nearly $15 million to help pay for the increased security costs incurred at another power generation project in southern Baghdad. In another example, work was suspended at a sewer repair project in central Iraq for 4 months in 2004 due to security concerns. In January 2006, State reported that direct and indirect security costs represent 16 to 22 percent of the overall cost of major infrastructure reconstruction projects. In addition, the security environment in Iraq has led to severe restrictions on the movement of civilian staff around the country and reductions of a U.S. presence at reconstruction sites, according to U.S. agency officials and contractors. For example, the Project Contracting Office reported in February 2006 that the number of attacks on convoys and the resulting casualties had increased from 20 convoys attacked and 11 casualties in October 2005 to 33 convoys attacked and 34 casualties in January 2006. In another example, work at a wastewater plant in central Iraq was halted for approximately 2 months in early 2005 because insurgent threats drove away subcontractors and made the work too hazardous to perform. In the assistance provided to support the electoral process, U.S.-funded grantees and contractors also faced security restrictions that hampered their movements and limited the scope of their work. For example, IFES was not able to send its advisors to most of the governorate-level elections administration offices, which hampered training and operations at those facilities leading up to Iraq's Election Day on January 30, 2005. While poor security conditions have slowed reconstruction and increased costs, a variety of management challenges also have adversely affected the implementation of the U.S. reconstruction program. In September 2005, we reported that management challenges such as low initial cost estimates and delays in funding and awarding task orders have led to the reduced scope of the water and sanitation program and delays in starting projects. In addition, U.S. agency and contractor officials have cited difficulties in initially defining project scope, schedule, and cost, as well as concerns with project execution, as further impeding progress and increasing program costs. These difficulties include lack of agreement among U.S. agencies, contractors, and Iraqi authorities; high staff turnover; an inflationary environment that makes it difficult to submit accurate pricing; unanticipated project site conditions; and uncertain ownership of project sites. Our ongoing work on Iraq's energy sectors and the management of design-build contracts will provide additional information on the issues that have affected the pace and costs of reconstruction. The administration has taken steps to develop a more comprehensive, integrated approach to combating the insurgency and stabilizing Iraq. The National Strategy for Victory in Iraq lays out an integrated political, military, and economic strategy that goes beyond offensive military operations and the development of Iraqi security forces in combating the insurgency. Specifically, it calls for cooperation with and support for local governmental institutions, the prompt dispersal of aid for quick and visible reconstruction, and central government authorities who pay attention to local needs. Toward that end, U.S. agencies are developing tools for integrating political, economic, and security activities in the field.
For example, USAID is developing the Focused Stabilization Strategic City Initiative, which will fund social and economic stabilization activities in communities within 10 strategic cities. The program is intended to jump-start the development of effective local government service delivery by redirecting local energies from insurgent activities toward productive economic and social opportunities. The U.S. embassy in Baghdad and MNF-I are also developing provincial assistance teams as a component of an integrated counterinsurgency strategy. These teams would consist of coalition military and civilian personnel who would assist Iraq's provincial governments with (1) developing a transparent and sustained capability to govern; (2) promoting increased security, rule of law, and political and economic development; and (3) providing the provincial administration necessary to meet the basic needs of the population. It is unclear whether these two efforts will become fully operational, as program documents have noted problems in providing funding and security for them. State has set broad goals for providing essential services, and the U.S. program has undertaken many rebuilding activities in Iraq. The U.S. program has made some progress in accomplishing rebuilding activities, such as rehabilitating some oil facilities to restart Iraq's oil production, increasing electrical generation capacity, restoring some water treatment plants, and building Iraqi health clinics. However, limited performance data and measures make it difficult to determine and report on the progress and impact of U.S. reconstruction. Although information is difficult to obtain in an unstable security environment, State reported that it is currently finalizing a set of metrics to track the impact of reconstruction efforts. In the water and sanitation sector, the Department of State has primarily reported on the numbers of projects completed and the expected capacity of reconstructed treatment plants. However, we found that the data are incomplete and do not provide information on the scope and cost of individual projects, nor do they indicate how much clean water is reaching intended users as a result of these projects. Moreover, reporting only the number of projects completed or under way provides little information on how U.S. efforts are improving the amount and quality of water reaching Iraqi households or their access to sanitation services. Information on access to water and its quality is difficult to obtain without adequate security or water-metering facilities. Limitations in health sector measurements also make it difficult to relate the progress of U.S. activities to the overall effort to improve the quality of and access to health care in Iraq. Department of State measurements of progress in the health sector primarily track the number of completed facilities, an indicator of increased access to health care. However, the data available do not indicate the adequacy of equipment levels, staffing levels, or quality of care provided to the Iraqi population. Monitoring the staffing, training, and equipment levels at health facilities may help gauge the effectiveness of the U.S. reconstruction program and its impact on the Iraqi people. In the electricity sector, U.S. agencies have primarily reported on generation measures such as levels of added or restored generation capacity and daily electricity generation; numbers of projects completed; and average daily hours of power.
However, these data do not show (1) whether the power generated is uninterrupted for the period specified (e.g., average number of hours per day); (2) whether there are regional or geographic differences in the quantity of power generated; and (3) how much power is reaching intended users. Information on the distribution of and access to electricity is difficult to obtain without adequate security or accurate metering capabilities. Opinion surveys and additional outcome measures have the potential to gauge the impact of the U.S. reconstruction efforts on the lives of Iraqi people and their satisfaction with these sectors. A USAID survey in 2005 found that the Iraqi people were generally unhappy with the quality of their water supply, waste disposal, and electricity services but approved of the primary health care services they received. In September 2005, we recommended that the Secretary of State address the issue of measuring progress and impact in the water and sanitation sector. State agreed with our recommendation and stated in January 2006 that it is currently finalizing a set of standard methodologies and metrics for water and other sectors that could be used to track the impact of U.S. reconstruction efforts. The U.S. reconstruction program has encountered difficulties with Iraq's ability to sustain the new and rehabilitated infrastructure and address maintenance needs. In the water, sanitation, and electricity sectors, in particular, some projects have been completed but have sustained damage or become inoperable due to Iraq's problems in maintaining or properly operating them. State reported in January 2006 that several efforts were under way to improve Iraq's ability to sustain the infrastructure rebuilt by the United States. In the water and sanitation sector, U.S. agencies have identified limitations in Iraq's capacity to maintain and operate reconstructed facilities, including problems with staffing, unreliable power to run treatment plants, insufficient spare parts, and poor operations and maintenance procedures. The U.S. embassy in Baghdad stated that it was moving from the previous model of building and turning over projects to Iraqi management toward a "build-train-turnover" system to protect the U.S. investment. However, these efforts are just beginning, and it is unclear whether the Iraqis will be able to maintain and operate completed projects and the more than $1 billion in additional large-scale water and sanitation projects expected to be completed through 2008. In September 2005, we recommended that the Secretary of State address the issue of sustainability in the water and sanitation sector. State agreed with our recommendation and stated that it is currently working with the Iraqi government to assess the additional resources needed to operate and maintain water and sanitation facilities that have been constructed or repaired by the United States. In the electricity sector, the Iraqis' capacity to operate and maintain the power plant infrastructure and equipment provided by the United States remains a challenge at both the plant and ministry levels. As a result, the infrastructure and equipment remain at risk of damage following their transfer to the Iraqis. In our interviews with Iraqi power plant officials from 13 locations throughout Iraq, the officials stated that their training did not adequately prepare them to operate and maintain the new U.S.-provided gas turbine engines.
Due to limited access to natural gas, some Iraqi power plants are using low-grade oil to fuel combustion turbines that were designed to burn natural gas. The use of oil-based fuels, without adequate equipment modification and fuel treatment, decreases the power output of the turbines by up to 50 percent, requires three times more maintenance, and could result in equipment failure and damage that significantly reduces the life of the equipment, according to U.S. and Iraqi power plant officials. U.S. officials have acknowledged that more needs to be done to train plant operators and ensure that advisory services are provided after the turnover date. In January 2006, State reported that it has developed a strategy with the Ministry of Electricity to focus on rehabilitation and sustainment of electricity assets. Although agencies have incorporated some training programs and the development of operations and maintenance capacity into individual projects, problems with the turnover of completed projects, such as those in the water and sanitation and electricity sectors, have led to a greater interagency focus on improving project sustainability and building ministry capacity. In May 2005, an interagency working group including State, USAID, PCO, and the Army Corps of Engineers was formed to identify ways to address Iraq's capacity-development needs. The working group reported that a number of critical infrastructure facilities constructed or rehabilitated under U.S. funding have failed, will fail, or will operate in suboptimized conditions following handover to the Iraqis. To mitigate the potential for project failures, the working group recommended increasing the period of operational support for constructed facilities from 90 days to up to 1 year. In January 2006, State reported that it has several efforts under way focused on improving Iraq's ability to operate and maintain facilities over time. As part of our ongoing review of Iraq's energy sector, we will be assessing the extent to which the administration is providing funds to sustain the infrastructure facilities constructed or rehabilitated by the United States. As the new Iraqi government forms, it must plan to secure the financial resources it will need to continue the reconstruction and stabilization efforts begun by the United States and the international community. Initial assessments in 2003 identified $56 billion in reconstruction needs across a variety of sectors in Iraq. However, Iraq's needs are greater than originally anticipated due to severely degraded infrastructure, post-conflict looting and sabotage, and additional security costs. The United States has borne the primary financial responsibility for rebuilding and stabilizing Iraq; however, its funds are largely obligated, and remaining commitments and future contributions are not finalized. Further, U.S. appropriations were never intended to meet all Iraqi needs. International donors have provided a lesser amount of funding for reconstruction and development activities; however, most of the pledged amount is in the form of loans that Iraq has just begun to access. Finally, Iraq's ability to contribute financially to its additional rebuilding and stabilization needs is dependent upon the new government's efforts to increase revenues obtained from crude oil exports, reduce energy and food subsidies, control government operating expenses, provide for a growing security force, and repay external debt and war reparations.
Initial assessments of Iraq's needs through 2007 by the UN, the World Bank, and the CPA estimated that the reconstruction of Iraq would require about $56 billion. The October 2003 joint UN/World Bank assessment identified $36 billion, from 2004 through 2007, in immediate and medium-term needs in 14 priority sectors, including education, health, electricity, transportation, agriculture, and cross-cutting areas such as human rights and the environment. For example, the assessment estimated that Iraq would need about $12 billion for rehabilitation and reconstruction, new investment, technical assistance, and security in the electricity sector. In addition, the assessment noted that the CPA estimated an additional $20 billion would be needed from 2004 through 2007 to rebuild other critical sectors such as security and oil. Iraq may need more funding than is currently available to meet the demands of the country. The state of some Iraqi infrastructure was more severely degraded than U.S. officials originally anticipated or initial assessments indicated. The condition of the infrastructure was further exacerbated by post-2003 conflict looting and sabotage. For example, some electrical facilities and transmission lines were damaged, and equipment and materials needed to operate treatment and sewerage facilities were destroyed by the looting that followed the 2003 conflict. In addition, insurgents continue to target electrical transmission lines and towers as well as oil pipelines that provide needed fuel for electrical generation. In the oil sector, a June 2003 U.S. government assessment found that more than $900 million would be needed to replace looted equipment at Iraqi oil facilities. These initial assessments assumed reconstruction would take place in a peacetime environment and did not include additional security costs. Further, these initial assessments assumed that Iraqi government revenues and private sector financing would increasingly cover long-term reconstruction requirements. This was based on the assumption that the rate of growth in oil production and total Iraqi revenues would increase over the next several years. However, private sector financing and government revenues may not be sufficient to meet these needs. According to a January 2006 International Monetary Fund (IMF) report, private sector investment will account for 8 percent of total projected investment for 2006, down from 12 percent in 2005. In the oil sector alone, Iraq will likely need an estimated $30 billion over the next several years to reach and sustain an oil production capacity of 5 million barrels per day (bpd), according to industry experts and U.S. officials. For the electricity sector, Iraq projects that it will need $20 billion through 2010 to boost electrical capacity, according to the Department of Energy's Energy Information Administration. The United States is the primary contributor to rebuilding and stabilization efforts in Iraq. Since 2003, the United States has made available about $30 billion for activities that have largely focused on infrastructure repair and training of Iraqi security forces. As priorities changed, the United States reallocated about $5 billion of the $18.4 billion fiscal year 2004 emergency supplemental among the various sectors, over time increasing security and justice funds while decreasing resources for the water and electricity sectors.
As of January 2006, of the $30 billion appropriated, about $23 billion had been obligated and about $16 billion had been disbursed for activities that included infrastructure repair, training, and equipping of the security and law enforcement sector; infrastructure repair of the electricity, oil, and water and sanitation sectors; and CPA and U.S. administrative expenses. These appropriations were not intended to meet all of Iraq's needs. The United States has obligated nearly 80 percent of its available funds. Although remaining commitments and future contributions have not been finalized, they are likely to target activities for building ministerial capacity, sustaining existing infrastructure investments, and training and equipping the Iraqi security forces, based on agency reporting. For example, in January 2006, State reported a new initiative to address Iraqi ministerial capacity development at 12 national ministries. According to State, Embassy Baghdad plans to undertake a comprehensive approach to provide training in modern techniques of civil service policies, requirements-based budget processes, information technology standards, and logistics management systems to Iraqi officials in key ministries. International donors have provided a lesser amount of funding for reconstruction and development activities. According to State, donors have provided about $2.7 billion in multilateral and bilateral grants—of the pledged $13.6 billion—as of December 2005. About $1.3 billion had been deposited by donors into the two trust funds of the International Reconstruction Fund Facility for Iraq (IRFFI), of which about $900 million had been obligated and about $400 million disbursed to individual projects, as of December 2005. Donors also have provided bilateral assistance for Iraq reconstruction activities; however, complete information on this assistance is not readily available. Most of the pledged amount is in the form of loans that the Iraqis have recently begun to access. About $10 billion, or about 70 percent, of the $13.6 billion pledged in support of Iraq reconstruction is in the form of loans, primarily from the World Bank, the IMF, and Japan. In September 2004, the IMF provided a $436 million emergency post-conflict assistance loan to facilitate Iraqi debt relief, and in December 2005, Iraq secured a $685 million Stand-By Arrangement (SBA) with the IMF. On November 29, 2005, the World Bank approved a $100 million loan within a $500 million program for concessional international development assistance. Iraq's fiscal ability to contribute to its own rebuilding is constrained by the amount of revenues obtained from crude oil exports, continuing subsidies for food and energy, growing costs for government salaries and pensions, increased demands for an expanding security force, and war reparations and external debt. Crude oil exports account for nearly 90 percent of Iraqi government revenues in 2006, according to the IMF. Largely supporting Iraq's government operations and subsidies, crude oil export revenues depend upon export levels and market price. The Iraqi 2006 budget projects that Iraq's crude oil export revenues will grow at an annual rate of 17 percent (based on average production rising from 2 million bpd in 2005 to 3.6 million bpd in 2010) and assumes an average market price of about $46 per barrel. Oil exports are projected to increase from 1.4 million bpd in 2005 to 1.7 million bpd in 2006, according to the IMF.
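Because the budget projections above rest on straightforward volume-times-price arithmetic, a rough cross-check can be worked out directly. The sketch below is illustrative only and is ours, not the IMF's or the Iraqi budget's; it assumes the $46-per-barrel price and the export volumes quoted above and ignores production costs and price variation.

```python
# Rough cross-check of the oil export revenue projections cited above.
# Input figures are from the text (IMF and Iraqi 2006 budget); the
# arithmetic and the comparison are illustrative, not GAO's analysis.

PRICE_PER_BARREL = 46.0  # budget's assumed average market price, dollars

def annual_export_revenue(bpd: float, price: float = PRICE_PER_BARREL) -> float:
    """Gross annual revenue implied by an average daily export level."""
    return bpd * price * 365

rev_2005 = annual_export_revenue(1.4e6)  # 2005 average exports, bpd
rev_2006 = annual_export_revenue(1.7e6)  # 2006 projected exports, bpd

print(f"2005: ~${rev_2005 / 1e9:.1f} billion")           # ~ $23.5 billion
print(f"2006: ~${rev_2006 / 1e9:.1f} billion")           # ~ $28.5 billion
print(f"implied growth: {rev_2006 / rev_2005 - 1:.0%}")  # ~ 21%
```

At a constant price, the projected jump in export volume alone implies roughly 21 percent revenue growth for 2006, in the same range as the budget's 17 percent annual figure; in later years the projection depends on production actually rising toward 3.6 million bpd.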
Iraq's current crude oil export capacity is theoretically as high as 2.5 million bpd, according to the Energy Information Administration at the Department of Energy. However, Iraq's crude oil export levels averaged 1.4 million bpd as of December 2005, in part due to attacks on the energy infrastructure and pipelines. In January 2006, crude oil export levels fell to an average of about 1.1 million bpd. Further, a combination of insurgent attacks on crude oil and product pipelines, dilapidated infrastructure, and poor operations and maintenance have hindered domestic refining and have required Iraq to import significant portions of liquefied petroleum gas, gasoline, kerosene, and diesel. According to State, the Iraqi Oil Ministry estimates that the current average import cost of fuels is roughly $500 million each month. Current government subsidies constrain opportunities for growth and investment and have kept prices for food, oil, and electricity low. Before the war, at least 60 percent of Iraqis depended on monthly rations—known as the public distribution system (PDS)—provided by the UN Oil for Food program to meet household needs. The PDS continues to provide food subsidies to Iraqis. In addition, Iraqis pay below-market prices for refined fuels and, in the absence of effective meters, for electricity and water. Low prices have encouraged over-consumption and have fueled smuggling to neighboring countries. Food and energy subsidies account for about 18 percent of Iraq's projected gross domestic product (GDP) for 2006. As part of its Stand-By Arrangement with the IMF, Iraq plans to reduce the government subsidy of petroleum products, which would free up oil revenues to fund additional needs and reduce smuggling. According to the IMF, by the end of 2006, the Iraqi government plans to complete a series of adjustments to bring fuel prices closer to those of other Gulf countries. However, it is unclear whether the Iraqi government will have the political commitment to continue to raise fuel prices. Generous wage and pension benefits have added to budgetary pressures. Partly due to increases in these benefits, the Iraqi government's operating expenditures are projected to increase by over 24 percent from 2005 to 2006, according to the IMF. As a result, wages and pensions constitute about 21 percent of projected GDP for 2006. The IMF noted that it is important for the government to keep nondefense wages and pensions under firm control to contain the growth of civil service costs. As a first step, the Iraqi government plans to complete a census of all public service employees by June 2006. Iraq plans to spend more resources on its own defense. Iraq's security-related spending is currently projected to be about $5.3 billion in 2006, growing from 7 to about 13 percent of projected GDP. This amount reflects the rising costs of security and the transfer of security responsibilities from the United States to Iraq. The Iraqi government also owes over $84 billion to victims of its invasion of Kuwait and to international creditors. As of December 2005, Iraq owed about $33 billion in unpaid awards resulting from its invasion and occupation of Kuwait. As directed by the UN, Iraq currently deposits 5 percent of its oil proceeds into a UN compensation fund. Final payment of these awards could extend through 2020 depending on the growth of Iraq's oil proceeds. In addition, the IMF estimated that Iraq's external debt was about $51 billion at the end of 2005.
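The payment horizon noted above hinges on how fast oil proceeds grow. The sketch below illustrates that sensitivity; the starting proceeds level and the growth rates are stand-in assumptions for illustration, not figures from this statement.

```python
# Illustrative payoff horizon for the $33 billion in unpaid UN compensation
# awards, funded by the 5 percent levy on oil proceeds described above.
# The starting proceeds level and growth rates are assumptions, not report data.

AWARDS_OUTSTANDING = 33e9  # unpaid awards as of December 2005, dollars
LEVY_RATE = 0.05           # share of oil proceeds deposited with the UN fund

def years_to_repay(annual_proceeds: float, growth_rate: float) -> int:
    """Years until cumulative levy deposits cover the outstanding awards."""
    remaining, proceeds, years = AWARDS_OUTSTANDING, annual_proceeds, 0
    while remaining > 0:
        remaining -= LEVY_RATE * proceeds
        proceeds *= 1 + growth_rate
        years += 1
    return years

for growth in (0.0, 0.17):
    print(f"proceeds growth {growth:.0%}: ~{years_to_repay(28.5e9, growth)} years")
# proceeds growth 0%:  ~24 years
# proceeds growth 17%: ~11 years
```

Under these stand-in numbers, flat proceeds would stretch payments well past 2020, while sustained 17 percent growth would roughly halve the horizon, which is why the payoff date depends on the growth of Iraq's oil proceeds.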
For the past 2½ years, the United States has provided $30 billion with the intent of developing capable Iraqi security forces, rebuilding a looted and worn infrastructure, and supporting democratic elections. However, the United States has confronted a lethal insurgency that has taken many lives and made rebuilding Iraq a costly and challenging endeavor. It is unclear when Iraqi security forces will be able to operate independently, thereby enabling the United States to reduce its military presence. Similarly, it is unclear how U.S. efforts are helping Iraq obtain clean water, reliable electricity, or competent health care. Measuring the outcomes of U.S. efforts is important to ensure that the U.S. dollars spent are making a difference in the daily lives of the Iraqi people. In addition, the United States must ensure that the billions of dollars it has already invested in Iraq's infrastructure are not wasted. The Iraqis need additional training and preparation to operate and maintain the power plants, water and sewage treatment facilities, and health care centers the United States has rebuilt or restored. In response to our reports, State has begun to develop metrics for measuring progress and plans for sustaining the U.S.-built infrastructure. The administration's next budget will reveal its level of commitment to these challenges. But the challenges are not exclusively those of the United States. The Iraqis face the challenge of forming a government that has the support of all ethnic and religious groups. They also face the challenge of addressing those constitutional issues left unresolved from the October referendum—the power of the central government, control of Iraq's natural resources, and the application of Islamic law. The new government also faces the equally difficult challenges of reducing subsidies, controlling public salaries and pensions, and sustaining the growing number of security forces. This will not be easy, but it is necessary for the Iraqi government to begin to contribute to its own rebuilding and stabilization efforts and to encourage investment by the international community and the private sector. We continue to review U.S. efforts to train and equip Iraqi security forces, develop the oil and electricity sectors, reduce corruption, and enhance the capacity of Iraqi ministries. Specifically, we will examine efforts to stabilize Iraq and develop its security forces, including the challenge of ensuring that Iraq can independently fund, sustain, and support its new security forces; assess issues related to the development of Iraq's energy sector, including the sector's needs as well as challenges such as corruption; and examine capacity-building efforts in the Iraqi ministries. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Committee members may have. For further information, please contact Joseph A. Christoff at (202) 512-8979. Individuals who made key contributions to this testimony were Monica Brym, Lynn Cothern, Bruce Kutnick, Steve Lord, Sarah Lynch, Judy McCloskey, Micah McMillan, Tet Miyabara, Jose Pena III, Audrey Solis, and Alper Tunca. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States, along with coalition partners and various international organizations, has undertaken a challenging and costly effort to stabilize and rebuild Iraq following multiple wars and decades of neglect by the former regime. This enormous effort is taking place in an unstable security environment, concurrent with Iraq's efforts to transition to its first permanent government. The United States' goal is to help the Iraqi government develop a democratic, stable, and prosperous country, at peace with itself and its neighbors, a partner in the war against terrorism, and enjoying the benefits of a free society and a market economy. In this testimony, GAO discusses the challenges (1) that the United States faces in its rebuilding and stabilization efforts and (2) that the Iraqi government faces in financing future requirements. This statement is based on four reports GAO has issued to the Congress since July 2005 and recent trips to Iraq. Since July 2005, we have issued reports on (1) the status of funding and reconstruction efforts in Iraq, focusing on the progress achieved and challenges faced in rebuilding Iraq's infrastructure; (2) U.S. reconstruction efforts in the water and sanitation sector; (3) U.S. assistance for the January 2005 Iraqi elections; and (4) U.S. efforts to stabilize the security situation in Iraq (a classified report). The United States faces three key challenges in rebuilding and stabilizing Iraq. First, the security environment and the continuing strength of the insurgency have made it difficult for the United States to transfer security responsibilities to Iraqi forces and progressively draw down U.S. forces. The security situation in Iraq has deteriorated since June 2003, with significant increases in attacks against Iraqi and coalition forces. In addition, the security situation has affected the cost and schedule of rebuilding efforts. The State Department has reported that security costs represent 16 to 22 percent of the overall costs of major infrastructure projects. Second, inadequate performance data and measures make it difficult to determine the overall progress and impact of U.S. reconstruction efforts. The United States has set broad goals for providing essential services in Iraq, but limited performance measures present challenges in determining the overall impact of U.S. projects. Third, the U.S. reconstruction program has encountered difficulties with Iraq's inability to sustain new and rehabilitated infrastructure projects and to address basic maintenance needs in the water, sanitation, and electricity sectors. U.S. agencies are working to develop better performance data and plans for sustaining rehabilitated infrastructure. As the new Iraqi government forms, it must plan to secure the financial resources it will need to continue the reconstruction and stabilization efforts begun by the United States and the international community. Iraq will likely need more than the $56 billion that the World Bank, the United Nations, and the CPA estimated it would require for reconstruction and stabilization efforts from 2004 to 2007. Infrastructure that was more severely degraded than anticipated, post-2003 conflict looting and sabotage, and additional security costs have added to the country's basic reconstruction needs. However, it is unclear how Iraq will finance these additional requirements. While the United States has borne the primary financial responsibility for rebuilding and stabilizing Iraq, its commitments are largely obligated and future commitments are not finalized. Further, U.S.
appropriations were never intended to meet all Iraqi needs. In addition, international donors have mostly committed loans that the government of Iraq is just beginning to tap. Iraq's ability to financially contribute to its own rebuilding and stabilization efforts will depend on the new government's efforts to increase revenues obtained from crude oil exports, reduce energy and food subsidies, control government operating expenses, provide for a growing security force, and repay $84 billion in external debt and war reparations.
To enable the Department of Defense (DOD) to close unneeded bases and realign others, Congress enacted base realignment and closure legislation that instituted base closure rounds in 1988, 1991, 1993, and 1995. In some cases, DOD retained a portion of the property and created military enclaves on closed installations. Generally, as part of the base closure process, DOD prefers to change the jurisdiction of the property that it has retained from exclusive federal to proprietary jurisdiction. Under exclusive federal jurisdiction, the federal government is responsible for providing all municipal services and enforcing federal laws. The state and local governments do not have any authority or obligation to provide municipal services under this type of jurisdiction, except under mutual support agreements. Under proprietary jurisdiction, the federal government has rights—similar to a private landowner—but also maintains its authorities and responsibilities as the federal government. Under this type of jurisdiction, the local government is the principal municipal police and fire authority. Following the decision to close the installations in 1991, the Naval Shipyard and the Naval Station in Philadelphia were officially closed in September 1995 and January 1996, respectively. In March 2000, the Navy transferred 1,180 acres of the property to the Philadelphia Authority for Industrial Development, the local redevelopment authority. The Navy retained exclusive federal jurisdiction over about 270 acres as a military enclave. As a result, the Navy is responsible for providing all municipal services, including fire protection, in this enclave. Similarly, the City of Philadelphia and the Commonwealth of Pennsylvania maintain jurisdiction over the 1,180 acres that were transferred. The federal government has no jurisdiction over this land. Together, the Navy-retained and Navy-transferred property is called the Philadelphia Naval Business Center. The Navy's 270-acre enclave in Philadelphia is made up of several distinct noncontiguous areas separated by the transferred acreage. (See app. I for a map and an aerial photograph of the enclave.) The Navy retained 67 buildings that house more than 2,300 civilian, contractor, and military employees. The majority of the Navy's employees—about 1,800—work in about 47 office buildings. The remaining 500 Navy employees work at industrial or maintenance activities, including the Naval Foundry and Propeller Shop; a hull, mechanical, and electrical systems test facility; and a public works center. The enclave also includes a reserve basin that is used as a docking area for about 38 inactive Navy ships. In contrast, the non-Navy part of the business center includes about 45 private firms with approximately 2,500 employees. This part is being developed by the Philadelphia Industrial Development Corporation, the City of Philadelphia's private economic development corporation. The corporation is authorized by the local redevelopment authority to attract private business to the Philadelphia Naval Business Center, a business and industrial park that is undergoing redevelopment using the 1,180 transferred acres. The Navy facilities are protected by a federal fire service consisting of 26 personnel and 2 fire engines located on the enclave. The Navy estimated that it cost $2.5 million to operate the federal fire department at the enclave during fiscal year 2001.
The City of Philadelphia is responsible for providing fire protection services to private development on non-Navy property at the business center. It is also responsible for providing additional fire protection to the Navy facilities under a March 2000 Mutual Aid Assistance Agreement. The agreement was signed by both Navy and City of Philadelphia officials, and it is intended to provide additional fire equipment and firefighters to respond to fires and other emergencies on each party's property at the business center. (See app. II for a copy of the agreement.) Although not specified in the agreement, enclave command officials and Navy and city fire department officials told us that, in practice, the Navy firefighters are first responders to all fire alarms at the business center—on both Navy and non-Navy property. The city fire department automatically responds to fire calls on non-Navy property at the business center; it responds to a fire on Navy property if it is called by the Navy fire department. The DOD Fire and Emergency Services Program provides policy that governs fire protection at military installations. The policy states that the first arriving fire apparatus shall meet a travel time of 5 minutes for 90 percent of all alarms and that the remaining apparatus shall meet a travel time of 10 minutes for all alarms. The policy also states that the initial response to a fire will be two engine companies and one ladder company but that another engine company may replace the ladder company. The number of full-time fire and emergency service personnel and equipment needed to meet these standards at any installation may depend on the extent to which equivalent forces are available from outside sources. The DOD policy encourages installations to enter into reciprocal agreements with local fire departments for mutual fire and emergency services to meet these standards. Navy policy mirrors that of DOD. The Navy considers a number of factors, including the strategic importance, the criticality to the overall Navy mission, the degree of fire and life safety hazards, the value of facilities and equipment, and the availability of outside support, in determining fire protection requirements at each installation. Using these criteria, the federal enclave at the business center is required to have a fully staffed on-site federal fire-fighting force; however, some of this requirement may be met by city assets under a mutual aid agreement. Today, according to military service base realignment and closure officials, federal firefighters operate at only 3 of the 27 federal enclaves that were created at closed Navy, Army, and Air Force installations (see table 1). The enclave at the former Philadelphia Naval Shipyard and Naval Station is the only Navy enclave where a federal fire protection presence remains. According to Navy officials, federal fire protection was retained because the Commonwealth of Pennsylvania did not respond to the Navy's 1999 request to change the jurisdictional status of the property from exclusive federal to proprietary jurisdiction in anticipation of the Navy's transfer of excess land. In its April 1999 letter to the governor of Pennsylvania requesting the change, the Navy stated that such a change would provide uniform jurisdiction over the business center and the Navy's enclave there.
In addition, Navy officials told us that the change would have made the City of Philadelphia responsible for providing all municipal services, such as fire and police protection. The Navy's two other enclaves—the former Charleston, South Carolina, and Long Beach, California, shipyards—receive fire protection services from the local communities. A Navy official told us that the land at the former Charleston and Long Beach shipyards had already been placed under concurrent jurisdiction before the shipyards were closed, so the Navy did not have to request a change in designation. In addition, local governments agreed to provide fire protection to the federal enclaves at both former shipyards. Like the Navy, the Army retained federal firefighters at only one of its federal enclaves. The remaining 13 Army enclaves are protected by local community firefighters. According to an official in the Army's Base Realignment and Closure Office, a federal fire-fighting force was retained at the enclave created when Fort Ord, California, was closed in order to provide fire protection for a 1,600-unit housing complex and other community support facilities, such as a military exchange and commissary. Before Fort Ord closed, the installation was under exclusive federal jurisdiction, but now the enclave is under concurrent jurisdiction. According to an Army base realignment and closure official, most of the other 13 Army installations changed from exclusive federal to proprietary jurisdiction. The Air Force also retained federal firefighters at only one of its enclaves, while local firefighters provide fire protection at nine other Air Force enclaves. According to the Air Force's Fire Protection Program Manager, a federal fire-fighting force was maintained at the enclave created when Grissom Air Force Base, Indiana, was closed to support the substantial flying mission that remained. Before the installation was closed, most of the land at Grissom, which is now an Air Reserve Base, was under exclusive federal jurisdiction, while a smaller portion was under proprietary jurisdiction; currently, all of the property at Grissom is under proprietary jurisdiction. The other nine Air Force enclaves are also under proprietary jurisdiction, although five had exclusive federal jurisdiction and two had a mix of exclusive and proprietary jurisdiction before the installations were closed. The level of fire protection at the business center is similar to that available elsewhere in the City of Philadelphia, but the arrangements for providing that protection are different. When a fire occurs on non-Navy property within the business center, both the City of Philadelphia Fire Department and the firefighters from the Navy's enclave automatically respond to the call. When a fire occurs at the Navy's enclave at the business center, only the Navy firefighters automatically respond to the alarm. If they need additional fire-fighting help, they must first call the city fire department, which will then send assistance. This mutual assistance is part of the agreement between the Navy and the City of Philadelphia, which Navy officials state enables them to meet DOD's and Navy's fire response requirements. Senior Philadelphia city fire department officials told us that they respond to alarms in the city or within the city-owned parts of the business center with a minimum of 2 engines, 2 ladders, and 19 firefighters.
They noted that none of their 61 fire stations have the full complement of equipment and firefighters needed for the minimum response but that they rely on support from other fire stations throughout the city. Similarly, the Navy's fire department at the federal enclave in the business center does not have—on its own—the full complement of equipment and firefighters needed for a minimum response as specified in DOD and Navy policy. However, the Navy's fire department is able to meet DOD's and Navy's standards through its agreement with the City of Philadelphia. According to the Philadelphia Fire Commissioner, when the city responds to a request for assistance from the Navy, the city fire department would not necessarily respond with a ladder truck but with enough equipment and firefighters to bring the responding assets up to the city's minimum standards. This is especially true when the call involves an emergency other than a fire. A Philadelphia Deputy Fire Commissioner estimated that the response time for an engine company from the nearest Philadelphia city fire station to the main gate of the business center would be just under 7 minutes and that the response time from the nearest ladder company would be less than 11 minutes. He also said that it would take additional time to get from the main gate to various parts of the Navy's enclave. According to a study performed by the International Association of Firefighters, the first Philadelphia Fire Department ladder truck would arrive at the main gate of the business center in about 5 minutes and 55 seconds. Navy officials said that the Philadelphia Fire Department's response times meet the current DOD and Navy response criteria—10 minutes for subsequent arriving vehicles—assuming the city fire department arrives after Navy firefighters have already responded to the alarm. The Navy's fire department has responded to more than 300 calls in each of the last 2 full years, and it is on track to respond to more than 300 calls in 2002. These calls included fire emergencies, emergency medical service (EMS) requests, rescues, natural gas leaks, hazardous materials incidents, standby fueling operations, and alarms with no fire. During this same period, Navy data indicate that the enclave's firefighters responded to a total of 41 fires, 16 of which were on the enclave. In the 29 months from the signing of the agreement in March 2000 through September 2002, City of Philadelphia firefighters responded to one fire call on the Navy's enclave as part of the agreement. They also responded to 39 EMS calls and 4 other calls at the enclave during the same period. Table 2 shows the number of fire, EMS, and other responses that the Navy and the City of Philadelphia conducted under their mutual aid agreement. On the other hand, during the same period, the Navy fire department responded to 25 mutual aid fire calls on non-Navy property at the business center. It also responded to 150 EMS and 54 other calls on non-Navy property. Both Navy and Philadelphia city fire department officials told us that they have found the agreement mutually beneficial and that they expect to renew it in March 2003. According to city fire department officials, future economic development at the business center is expected to require a reassessment of fire protection services provided by the City of Philadelphia. Currently, about 45 private tenants with about 2,500 employees are housed in 47 buildings located on non-Navy property.
However, the development corporation plans to add additional office space at the business center over the next several years. For example, a 43,000-square-foot building directly across from the Navy command building is under renovation; when it is completed in early 2003, it will provide office space for about 150 people. In addition, the development corporation plans to provide an additional 800,000 square feet of office space over the next 8 years. According to the Philadelphia Fire Department Commissioner, as development in the business center continues to expand, his office expects to reevaluate the location of fire stations near the business center. This reevaluation could provide an opportunity for the Commonwealth of Pennsylvania, the City of Philadelphia, and the Navy to reassess jurisdictional issues and the need for a separate fire department to service the Navy's enclave. A recent development underscored the possibility of change in fire protection at the business center. In August 2002, the development corporation announced that a developer plans to build 230 private homes on land outside the main gate of the business center. A Philadelphia Deputy Fire Commissioner stated that the city would need to reconsider fire protection for this area once the planned development was completed. At the time of the transfer of excess land at the former Philadelphia Naval Shipyard and Naval Station to the redevelopment authority, the Navy tried unsuccessfully to change the jurisdiction of the 270-acre enclave it retained from exclusive federal to proprietary. This jurisdictional change would have been similar to what occurred at most other military enclaves created during the base realignment and closure process. According to Navy officials, such a change would have provided uniform jurisdiction over both the non-Navy property and the Navy-owned enclave at the business center. This change would have given the City of Philadelphia responsibility for providing all municipal services, including fire protection, at the business center. Instead, the jurisdiction at the Navy-owned enclave remains exclusively federal, and the Navy spends about $2.5 million annually to retain its fire department there. As private development at the business center and in its immediate vicinity continues to grow over the next few years, the business center's fire protection arrangements may have to be reevaluated. Philadelphia Fire Department officials told us they recognize they will need to reevaluate the way fire protection is provided at the business center. This reevaluation could provide the Commonwealth of Pennsylvania, the City of Philadelphia, and the Navy with an opportunity to reconsider the jurisdictional issues and reassess the need for a separate Navy fire department to service the Navy's enclave at the business center. In commenting on a draft of this report, the Deputy Under Secretary of Defense (Installations and Environment) concurred with the report. DOD's comments are included in this report as appendix III. We conducted our work at the Office of the Director, Navy Fire and Emergency Services, and the Base Closure Office; the Naval Facilities Engineering Command in Washington, D.C.; the Ship Systems Engineering Station and the Fire Department at the Philadelphia Naval Business Center; the Philadelphia Fire Department; and the Philadelphia Industrial Development Corporation.
We also did work at the Army's Base Realignment and Closure office, the office of the Assistant Chief of Staff for Installation Management, and the Air Force Base Conversion Agency. To determine how fire protection services at the business center compared with those at other federal enclaves created under base closure, we reviewed the 1988, 1991, 1993, and 1995 base realignment and closure reports and identified where DOD retained property on closed installations. We analyzed information from the Army and Navy base closure offices and the Air Force Base Conversion Agency on how fire protection was provided at the retained federal property on closed installations and on the jurisdiction at the installations prior to and after closure. We reviewed DOD and Navy guidance regarding the staffing and equipping of fire departments. To determine how fire responses at the business center compared with those elsewhere in the City of Philadelphia, we interviewed the Commissioner and two Deputy Commissioners in the Philadelphia Fire Department to obtain information on how city firefighters respond to fire alarms in the City of Philadelphia and at the business center. In addition, we interviewed the Chief and the Assistant Chiefs of the Navy fire department to determine how Navy firefighters respond to fire alarms on Navy and non-Navy properties within the business center, and we analyzed Navy fire department workload data. We also analyzed response time information provided by the Navy and the Philadelphia fire departments. Finally, we reviewed the agreement between the Navy and the City of Philadelphia regarding fire protection at the business center. To determine how future development of the business center would affect how fire protection is provided, we interviewed the Commissioner and two Deputy Commissioners in the Philadelphia Fire Department. To obtain information on future development at the business center, we interviewed officials from the Philadelphia Industrial Development Corporation. We conducted our review from July through September 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Michael Kennedy, Richard Meeks, Aaron Loudon, Ken Patton, and Nancy Benco.
When the Department of Defense closed military installations as a part of the base realignment and closure process and transferred properties to public and private ownership, it in some cases retained a portion of an installation as a military enclave. During this process, legal jurisdiction over an enclave may be transferred from the federal government to the local government. Such a transfer may incorporate provisions for fire protection and other services by local and state governments. A federal fire-fighting service provides fire protection services at the Navy's enclave located at the Philadelphia Naval Business Center. This is one of only three military enclaves formed during the base realignment and closure process that are still protected by federal firefighters. Twenty-four other military enclaves were converted from federal to local fire protection during the base closure process. The Navy retained a federal fire-fighting force at its enclave at the Philadelphia Naval Business Center because the Commonwealth of Pennsylvania did not respond to the Navy's request to change the jurisdiction of the Navy-retained land. The level of fire protection at the Philadelphia Naval Business Center is similar to that available elsewhere in the City of Philadelphia, but the arrangements for providing that protection differ. If a fire occurs on non-Navy property within the business center, both the Navy and the Philadelphia fire departments will automatically respond to the call, with the Navy as the first responder. However, if the fire is located on Navy-owned property at the business center, only Navy firefighters will automatically respond to the alarm. As private development at the Philadelphia Naval Business Center continues, the fire protection arrangements are expected to be reassessed. The Commissioner of the Philadelphia Fire Department stated that, as development at the business center continues to increase, his office will need to reevaluate the location of city-owned fire stations in the area around the business center.
FDA is responsible for helping to ensure that food products marketed in the United States meet the same statutory and regulatory requirements, whether they are produced in the United States or another country. FDA shares responsibility for the oversight of food safety with the United States Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS). FSIS oversees the safety of domestic and imported meat, poultry, and processed egg products, while FDA is responsible for the safety of virtually all other foods, including milk, seafood, fruits, and vegetables. FDA’s responsibilities for overseeing the safety of imported products are divided among its product centers and program offices. FDA’s six centers are each responsible for the regulation of specific types of products, whether manufactured in the United States or another country. For example, the Center for Food Safety and Applied Nutrition is responsible for ensuring that the nation’s food supply is safe, sanitary, wholesome, and honestly labeled, and that cosmetic products are safe and properly labeled. FDA’s Office of International Programs (OIP) has responsibility for leading, managing, and coordinating all of the agency’s international activities, including its foreign offices. OIP, which is part of FDA’s Office of Global Regulatory Operations and Policy, collaborates with the international affairs staff in FDA’s centers and the Office of Regulatory Affairs (ORA). ORA—also part of the Office of Global Regulatory Operations and Policy—performs fieldwork, such as inspecting foreign facilities and examining products at the U.S. border, to promote compliance with FDA requirements and the applicable laws. FDA’s foreign offices function within the embassy or consulate for the country or region under the auspices of the Department of State, along with other federal agencies that operate abroad, such as USDA’s Foreign Agricultural Service, USDA’s Animal and Plant Health Inspection Service, and the Department of Commerce’s U.S. Commercial Service. FDA also works on related issues with other U.S. agencies, including FSIS to share food safety information, the Centers for Disease Control and Prevention (CDC) during foodborne outbreaks, and the Environmental Protection Agency (EPA) to enforce pesticide residue tolerances in foods, which are established by EPA. FDA’s foreign offices have a director or deputy director to whom staff members report. The offices also may have food investigators who conduct inspections, as well as senior regional specialists, technical experts, and program support specialists who are responsible for engaging with foreign stakeholders and gathering information. Some offices also may have investigators responsible for inspecting other FDA-regulated products, such as drugs and medical devices, and locally employed staff, also known as Foreign Service Nationals, who are non-U.S. citizens employed at U.S. missions abroad. In 2011, FSMA expanded and modified FDA’s authorities and responsibilities, enhancing the agency’s oversight of imported food by, among other things, including provisions that might better ensure the comparable safety of imported and domestic food. For example, FSMA gave FDA express authority to hold imported foods to the same standards as domestic foods.
FSMA directed the establishment of offices in foreign countries and specified that the offices (1) assist governments in those countries in ensuring the safety of food and other FDA-regulated products and (2) conduct risk-based inspections of food and other products and support such inspections by foreign governments. With respect to foreign facilities that are sources of food imported to the United States, the law directs FDA to inspect at least 600 foreign facilities within 1 year of enactment of FSMA and, in each of the 5 years following that period, to inspect at least twice the number it inspected during the previous year. In addition, FDA can refuse entry into the United States of food from a foreign facility if FDA is denied access for inspections by the foreign facility or the country in which the facility is located. FDA’s foreign offices have engaged in a variety of activities intended to help ensure the safety of imported food; building relationships with foreign counterparts has been a top-priority activity. Foreign offices have conducted inspections of foreign food facilities, but FDA is not keeping pace with FSMA’s mandate for increasing the number of these inspections. FDA reported to Congress in 2012 that the primary purpose of posting staff in other countries is to engage more proactively and consistently with various stakeholders to help prevent unsafe products from reaching U.S. borders. To accomplish that purpose, the foreign offices have engaged in various types of activities, including (1) building collaborative and cooperative working relationships with foreign regulatory authorities and U.S. federal agencies located in other countries, (2) gathering and assessing information to increase FDA’s knowledge of the regulatory landscape, such as conditions in other countries that could affect the safety of food, and (3) conducting inspections to help identify high-risk facilities and determine the risks from imported products. Table 1 explains these and other types of activities conducted by FDA’s foreign offices. These activities are carried out by FDA’s foreign offices—some of which have multiple locations—that divide up responsibilities for different parts of the world. As shown in figure 1, all offices are located in other countries except the Asia-Pacific Office, which is located at FDA headquarters in the United States. As illustrated in the figure, FDA has closed, or plans to close, some of its foreign office locations. FDA has closed these offices for a variety of reasons. For example, the location in Parma, Italy—where the European Food Safety Authority (EFSA) is headquartered—was closed, and FDA staff relocated to the United States Mission to the European Union in Brussels, Belgium, as a more efficient use of resources to ensure coverage for FDA-related activities within the European Union while maintaining the liaison with EFSA through temporary duty assignments. As part of these closures, the Asia-Pacific Office—which covers Canada, Australia, New Zealand, and countries in Asia other than China and India—has absorbed responsibilities for countries previously covered by the Middle East and North Africa Office and the Sub-Saharan Africa Office. We questioned the foreign offices to determine the extent to which they performed these activities and which three activities were a top priority in 2014. The foreign offices reported similarities and differences in the types of activities they conducted. 
For example, all offices similarly reported conducting activities related to (1) gathering and assessing information, (2) providing information on FDA standards, and (3) building relationships. As shown in figure 2, we found differences in top-priority activities across the foreign offices. As noted earlier, FSMA directed, among other things, the establishment of foreign offices to conduct risk-based inspections of food and other products. The foreign offices that conduct inspections of food facilities use investigators who are either assigned to a foreign office for at least a 2-year rotation (in-house) or assigned to a foreign office on temporary duty for 60, 90, or 120 days from ORA. FDA’s China Office, India Office, and Latin America Office are the only foreign offices that conducted inspections of food facilities in 2014. Our analysis showed that the number of inspections performed by the foreign offices has increased since we reported in 2010 but remains a small part of FDA’s total number of foreign food inspections. In 2010, FDA’s China Office completed 13 food inspections, and the India Office completed none. By 2013, the China Office completed 45 of FDA’s total 59 food inspections in China (about 76 percent), and the India Office completed all 20 FDA food inspections in India—together about 5 percent of FDA’s total 1,415 inspections of foreign food facilities that year. In 2014, FDA added food investigators to its Latin America Office to conduct inspections, and the agency anticipates conducting more inspections of foreign food facilities in the future. During 2014, the foreign offices completed 140 of FDA’s total 1,323 inspections of foreign food facilities—66 in China, 67 in India, and 7 in Latin America—a more than 10-fold increase in the 4 years since 2010. The foreign offices also have begun providing in-country information to U.S.-based ORA investigators to help them complete their assigned foreign food inspections. Figure 3 shows the locations where FDA investigators conducted inspections of foreign food facilities in fiscal year 2014. These numbers include food inspections performed by FDA investigators, whether they were assigned to a specific foreign office, on temporary duty from ORA, or based with ORA in the United States and assigned to travel for a few weeks at a time to inspect foreign facilities. The increase in inspections completed by the foreign offices notwithstanding, FDA is not keeping pace with the targets for foreign food inspections set by Congress in FSMA. The act mandated that FDA inspect at least 600 foreign food facilities in the 1-year period following the enactment of FSMA. For each of the 5 following years, FSMA mandated that FDA inspect at least twice the number of facilities inspected during the previous year. Figure 4 shows the number of inspections FDA actually completed (or has planned to complete), along with two possible scenarios in response to FSMA. The first scenario has FDA inspecting twice the actual (or planned) number of foreign food facilities compared with the previous year, starting with the 1,002 inspections FDA completed in 2011 (see shaded bars labeled “FSMA mandate”). For example, as highlighted in the figure data, the FSMA mandate set a target of at least twice as many inspections—2,004—in 2012 as FDA actually inspected in 2011. The second scenario shows FDA inspecting 600 facilities—the FSMA minimum—in 2011, then doubling that number each of the 5 following years (see white bars labeled “Doubling each year”).
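The arithmetic behind the two scenarios can be made concrete with a short sketch. This is our illustrative reconstruction, not an FDA or GAO calculation; the inspection counts are the actual and planned figures reported in this section, and the variable names are ours.

    # Illustrative reconstruction of the two FSMA inspection-target scenarios.
    # Counts are FDA's reported actuals for 2011-2014 and its planned figure
    # for 2015; nothing here is an official FDA or GAO model.

    completed = {2011: 1002, 2012: 1343, 2013: 1403, 2014: 1323, 2015: 1200}

    # Scenario 1: each year's target is twice what FDA actually completed
    # (or, for 2016, planned to complete) the previous year.
    scenario1 = {year: 2 * completed[year - 1] for year in range(2012, 2017)}

    # Scenario 2: start at the 600-facility FSMA minimum in 2011, then
    # double the target itself in each of the 5 following years.
    scenario2 = {2011: 600}
    for year in range(2012, 2017):
        scenario2[year] = 2 * scenario2[year - 1]

    print(scenario1[2015], scenario1[2016])  # 2646 2400
    print(scenario2[2014], scenario2[2016])  # 4800 19200

    # At FDA's reported average cost of $23,600 per foreign inspection
    # (cited below), the 4,800 inspections required in 2014 under the
    # second scenario would have cost roughly $113 million.
    print(round(scenario2[2014] * 23_600 / 1e6))  # 113

The sketch reproduces the targets discussed next: doubling prior-year actuals yields 2,646 inspections for 2015 and 2,400 for 2016, while doubling from the 600-facility minimum reaches 19,200 in 2016.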
The first scenario would yield a target of at least 2,646 foreign inspections in 2015 and an estimated target of at least 2,400 foreign inspections in 2016, the final year of the mandate. The second scenario, as FDA has reported to Congress, would yield a target of 19,200 foreign inspections in 2016. FDA is not currently keeping pace with the FSMA mandate for increased foreign food inspections under either scenario’s targets. As the figure shows, FDA completed 1,002 foreign food inspections in 2011, 167 percent of the FSMA mandate. In 2012, FDA completed 1,343 such inspections, a 34 percent increase from, but not twice, the previous year’s number. During 2013, FDA completed 1,403 such inspections, a 4 percent increase from the previous year but also less than twice the previous year’s number. Thus far, the agency has completed 1,323 inspections in 2014, which is more than planned but an overall decrease compared with the previous 2 years. FDA officials told us that the agency has not met—and is not planning to meet—the FSMA mandate. They questioned the usefulness of conducting the number of inspections mandated by FSMA. According to FDA officials, the cost of inspections is the main reason that the agency is not keeping pace with the FSMA mandate for foreign food facility inspections. In its most recent report to Congress on food imports and foreign offices, FDA estimated that the average cost of a foreign inspection was $23,600, compared with $15,500 for a comparable domestic one. By that estimate, FDA would have needed at least $113 million to complete the 4,800 foreign inspections that it has reported were required in fiscal year 2014 to meet the FSMA mandate. For 2014 and 2015, FDA requested funding for 1,200 foreign food inspections for each year. For fiscal year 2014, FDA received a total of about $138 million to implement all provisions of FSMA, including training, rulemaking, and foreign inspections. FDA officials told us that, given limited funding, the agency determined that additional foreign inspections were not the best use of FSMA-related funds. FDA officials said they were focusing resources instead on technical assistance to the domestic and foreign food industry to help manufacturers comply with new FSMA rules, as well as training for FDA investigators and other agency staff to modernize FDA’s food inspection program. However, FDA has not conducted an analysis to determine whether either the required number of inspections in the FSMA mandate or the lower number of inspections it is conducting is sufficient to ensure comparable safety of imported and domestic food. Without such an analysis, FDA is not in a position to know what is a sufficient number of foreign inspections and, if appropriate, request a change in the mandate regarding the number of foreign inspections to be conducted. FDA foreign office officials cited a variety of contributions to improving the safety of food imported from other countries to the United States. However, the extent of the contributions is unknown because FDA’s performance measures have not fully captured these contributions. Officials from the foreign offices cited instances when they had made significant contributions to determining the cause of outbreaks that led to illnesses and deaths in the United States. 
Among them: The Europe Office credited new relationships with its Italian counterparts for providing information that helped link a 2012 outbreak of listeriosis, which sickened 22 people and resulted in four deaths in the United States, to ricotta cheese imported from Italy. According to FDA officials, the office staff worked with Italian food safety authorities to investigate firms that could have caused the outbreak. The result of these efforts was a recall of some ricotta cheese, ending instances of illness and death in the United States. In 2012, the India Office’s in-country investigators were able to rapidly conduct inspections of tuna processing facilities that were identified as potential sources of an outbreak of Salmonella in tuna products, which sickened 425 people in the United States. FDA and other agencies were then able to quickly take action on the inspection findings, including FDA issuing an import alert for the tuna products. The Latin America Office used its relationship with the Mexican government’s food regulatory authorities to narrow down the source of a Cyclospora outbreak in 2013 that sickened 631 people in the United States. This office coordinated investigations at the facilities that handled the leafy greens identified as the potential source of the outbreak and certified the facilities as free of Cyclospora so the shipment of the products could resume.

The three illnesses involved in these outbreaks are serious public health concerns. Listeriosis: The Centers for Disease Control and Prevention (CDC) estimated that approximately 1,600 illnesses and 260 deaths due to listeriosis occur annually in the United States. Listeriosis is a serious infection with symptoms that can include headache, stiff neck, confusion, loss of balance, and convulsions in addition to fever and muscle aches. Infection during pregnancy can lead to miscarriage, stillbirth, premature delivery, or life-threatening infection of the newborn. Salmonellosis: According to the CDC, Salmonella is estimated to cause more than 1 million illnesses annually in the United States, with 19,000 hospitalizations and 380 deaths. Most persons infected with Salmonella develop diarrhea, fever, and abdominal cramps 12 to 72 hours after infection. Cyclosporiasis: Cyclosporiasis is an intestinal infection that can be caused by people ingesting food or water contaminated with feces. In the United States, outbreaks since the mid-1990s have been linked to various types of imported fresh produce such as raspberries, basil, and mesclun lettuce.

The foreign office officials also provided examples of additional actions that stopped the importation of food products that were potentially harmful to humans. For example, in 2012, the Latin America Office in Mexico City helped stop the importation of a fraudulent dietary supplement into the United States because the officials discovered that the supplement did not contain the ingredients it claimed to include. Also, in 2012, this office helped test shipments of orange juice products from all foreign sources for a pesticide residue, carbendazim, and found that 31 of 166 shipments had carbendazim. EPA has not registered carbendazim for use as a fungicide on oranges or established a tolerance or an exemption from a tolerance for carbendazim in orange juice. As a result of the testing, several facilities were stopped from exporting orange juice containing carbendazim residues to the United States, and the occurrence of carbendazim in imported orange juice declined. FDA also provided an example of a foreign office’s contribution to the safety of imported animal food.
Specifically, in 2012, in-country investigators in the China Office conducted inspections of five facilities that made jerky pet treats to determine whether they were the cause of ongoing illnesses and deaths in pets in the United States. As of May 2014, FDA had received reports of illness involving more than 5,600 dogs and 24 cats, and the deaths of more than 1,000 dogs, which may be related to consumption of jerky pet treats. In addition, FDA received three reports of human illness after exposure to jerky pet treats. The China Office has assisted with the ongoing investigation into the illnesses; however, as of October 2014, the cause had not been found. FDA investigators were not permitted to take samples of the pet treats or their ingredients inside the facilities and have them tested in an FDA laboratory in the United States. Foreign office officials told us that FDA investigators do not typically take samples during foreign inspections, but they have taken samples in Mexico and sent them to an FDA laboratory in the United States to assist in a food outbreak investigation. FDA’s Center for Veterinary Medicine continues to work on finding the cause of the illnesses and deaths linked to jerky pet treats. The extent of the foreign offices’ contributions to food safety is unknown because FDA does not fully capture the foreign offices’ contributions through performance measures that are either agency-wide or specifically developed by OIP for the foreign offices. In our 2010 report, we recommended that, as the agency completed its strategic planning process for the foreign offices, it develop performance goals and measures that can be used to demonstrate the offices’ contributions to long-term outcomes related to imported FDA-regulated products. FDA’s agency-wide performance measures for the foreign offices provide counts from each foreign office on how many inspections were conducted within each country and the number of completed country profiles—reports and papers on the food safety conditions in a given country. These measures do provide important output information, but they do not provide outcome-oriented information on how a specific action by a foreign office contributed to food safety. For example, an output measure, such as a simple count of inspections, does not show how the inspections and reports contribute to broader food safety goals. OIP does have one measure that is outcome oriented—a measure of collaborative actions by each foreign office that led to improved public health outcomes. However, neither FDA’s agency-wide performance measures nor OIP’s measure fully captures the foreign offices’ activities to help improve food safety. See table 2 for a list of FDA agency-wide and OIP performance measures for the foreign offices in fiscal year 2014. In our 2010 report, we acknowledged that some measures are difficult to develop because results for some activities are not easy to quantify and that it can be difficult to attribute results to programs that involve multiple organizations within FDA. However, performance measures are important management tools for agencies. The agency has initiated a review to determine how to better reflect the value of the foreign offices in the agency-wide performance system. The initial phase of this review has been completed, but FDA could not provide a date when the full review would be completed or when new performance measures would be implemented.
OIP has developed a strategic map that aligns the activities of the foreign offices with strategic outcomes. OIP is also collecting information from its foreign offices by means of annual operational plans that track each office’s progress toward completing a specific project, such as organizing a conference to help foreign regulatory counterparts and industry officials better understand FSMA. These are potentially useful performance planning and management tools; however, they are not performance measures. Leading practices indicate that results-oriented performance measures focus on expected results to show progress toward, or contributions to, intended results. We believe our previous recommendation that FDA develop performance goals and measures for the foreign offices that are outcome-oriented is still valid. Without performance measures that can be used to demonstrate the offices’ contributions to long-term outcomes related to imported FDA-regulated products, FDA has less information available to effectively measure the foreign offices’ progress toward meeting the agency’s goals. Since we last reported, FDA has continued to experience recruitment challenges in the foreign offices. FDA has taken some steps to address those challenges, but it has not completed a strategic workforce plan. In 2010, we found that FDA had experienced challenges in staffing some of the foreign offices. For example, at that time, FDA had 2 vacant staff positions in the Latin America Office out of a total of 14 positions, and 4 vacancies in the India Office out of a total of 15 positions. In subsequent years, the number of vacancies in the foreign offices has increased as these offices have expanded. There are fewer staff members in the foreign offices now than in 2010, and the percentage of vacant positions has increased because the number of approved staff positions is larger. As shown in figure 5, 44 percent of FDA’s approved foreign office positions were vacant as of October 2014, and most of these vacancies were in the China Office. The vacancies shown in the figure include both U.S. government and locally employed staff positions. Locally employed staff accounted for 17 of the 50 staff members working in FDA’s foreign offices (34 percent) as of October 2014. Appendix II provides additional information about the staffing composition of the foreign offices and the contributions of locally employed staff. A number of factors have contributed to vacancies in the foreign offices, including delays in obtaining visas from the Chinese government. According to FDA officials, the last visa for a new FDA staff member to be posted in the China Office was issued in October 2012; there are nine U.S. government staff who have been hired by FDA for the China Office, but they cannot deploy because of the Chinese government’s delay in issuing new visas for FDA employees. OIP officials told us that they began discussions with Chinese government officials in February 2012 about increasing the number of investigators in the China Office. As of October 2014, FDA’s discussions with the Chinese government were ongoing. Figure 6 shows a timeline of developments, including White House involvement, related to FDA’s efforts to obtain visas for new staff in the China Office. In an effort to facilitate the granting of visas for new staff, FDA agreed to close its locations in Guangzhou and Shanghai and consolidate all China Office staff in Beijing.
However, officials in the China Office expressed concern that they will lose a valuable resource because one of the two locally employed staff members in Guangzhou will not be able to relocate to Beijing. The language skills of the locally employed staff are especially important in China, where the investigators do not typically speak the local language. OIP officials told us that, in the absence of locally employed staff available to translate, investigators in China rely on translators provided by the firms that are being inspected. Consolidating all China Office staff in Beijing also poses challenges in providing enough office space within the embassy. OIP officials told us that when adding investigators to the China Office was first proposed in February 2012, they knew that they might face space constraints regardless of whether staff were placed in the China Office’s locations in Shanghai, Guangzhou, or Beijing. There is not enough space in the U.S. embassy in Beijing to house additional FDA staff, so FDA will be one of the occupants in a new annex building that the Department of State is currently constructing. FDA anticipates moving into that space in October 2015. Other factors that have affected the recruitment of staff for the foreign offices include issues directly shaped by FDA personnel policies, such as reintegration of staff who have returned from assignments at a foreign office location. In the past, FDA handled reintegration on a case-by-case basis. Foreign office officials told us that not having a reintegration policy for staff members who have completed foreign assignments had hampered their ability to recruit staff to work in the foreign offices. Foreign office officials said U.S.-based ORA investigators have been hesitant to transfer to foreign offices because they were concerned about whether they would be able to return to their previous geographic location once they completed their posting abroad. They have also been concerned about whether FDA would value the experiences they gained while abroad. To address the uncertainty surrounding reintegration, FDA adopted a set of standard operating procedures, which were finalized in November 2014. OIP officials said they, in conjunction with the Office of Human Resources, have been engaging in outreach efforts to help managers understand the reintegration process. Foreign office officials told us that lengthy hiring processes also have affected FDA’s ability to staff its foreign offices. According to information published by the Office of Management and Budget, in 2009 it took federal agencies an average of 122 days to fill an open position. According to the most recent data available, that time dropped to an average of 93 days for fiscal year 2011 and 87 days for fiscal year 2012. For FDA, during fiscal years 2013 and 2014, it took an average of 121 days to fill staff positions in the Asia-Pacific Office, 140 days in the China Office, 172 days in the Europe Office, 200 days in the India Office, and 104 days in the Latin America Office. FDA has recently implemented an agency-wide initiative known as FDA’s Accelerated Staffing Track to reduce the time it takes to hire a candidate to 80 calendar days. OIP has undertaken initiatives to help recruit and develop staff. According to OIP officials, one successful initiative was to implement temporary duty assignments of investigators for 60, 90, or 120 days to meet immediate resource needs of the foreign offices.
Officials in the foreign offices told us that the investigators assigned on temporary duty were a staffing resource that helped the offices conduct inspections. Temporary duty assignments also served as an important recruiting tool since investigators returning from a temporary overseas assignment can provide a firsthand account of their foreign office experiences to their U.S. colleagues. In addition, OIP officials told us that they were able to use information from a draft workforce gap analysis to implement some learning and development initiatives to help ensure that the foreign office staff had the necessary skills to perform their job duties. OIP identified “diplomacy” and “global awareness” as training topics for foreign office staff. OIP also sought to strengthen staff members’ foreign language skills by offering language training. However, OIP does not have a formalized staffing mechanism through which it can decide on strategic resource allocations based on a targeted analysis of the specific staffing needs of its various foreign offices. Such a staffing mechanism would be included in a strategic workforce plan. Currently, the foreign offices provide input into staffing decisions through office-level staffing proposals, but some of the needs identified in their proposals have not been met. For example, officials from one foreign office expressed a need to have a staff member located in headquarters to represent them in real time during face-to-face discussions of policy matters. That office also identified a need for additional information technology support because of the challenges created by operating in a time zone in which headquarters staff are typically not working and the security requirements for accessing FDA computer systems in an embassy setting. OIP officials told us that they are developing a strategic workforce plan that requires an FDA-wide perspective and approach that recognizes the broad role of FDA’s centers and ORA in its international activities. To that end, OIP has developed a strategic workforce planning framework, and officials told us that, over the next year, they will develop the first phase of a forward-looking strategic workforce plan for the foreign offices. However, OIP has yet to define what the workforce plan will entail, and there are no time frames for completion. Strategic workforce planning is an essential tool to help agencies align their workforces with their current and emerging missions and develop long-term strategies for acquiring, developing, and retaining staff. In our 2010 report on FDA’s foreign offices, we recommended that FDA develop a strategic workforce plan for the foreign offices to help ensure that the agency is able to recruit and retain staff with the necessary experience and skills. We continue to believe that completing a strategic workforce plan for the foreign offices is critical to FDA’s ability to address staffing challenges. FDA established foreign offices to help prevent unsafe products from entering the United States. Through their activities, FDA’s foreign offices have helped the agency to increase the total number of foreign food inspections conducted annually. Nonetheless, FDA has not kept pace with FSMA’s inspection mandate since 2011. FDA is planning to conduct 1,200 foreign food inspections annually through the end of the mandate—well below either scenario that might satisfy the FSMA mandate to increase inspections each year through 2016.
FDA officials cited limited resources as the primary reason they are not conducting more foreign food inspections. FDA officials also questioned the usefulness of conducting the number of inspections mandated by FSMA. However, FDA has not conducted an analysis to determine whether the number of inspections mandated by FSMA or the number of inspections it is now conducting is sufficient to ensure comparable safety of imported and domestic food. Without such an analysis, FDA is not in a position to know what is a sufficient number of foreign inspections and, if appropriate, request a change in the mandate regarding the number of foreign inspections to be conducted. In addition, in 2010, we recommended that FDA develop performance goals and measures that can be used to demonstrate the foreign offices’ contributions to long-term outcomes related to improving the safety of imported food products. According to FDA officials, the agency has initiated a review to determine how to better reflect the value of the foreign offices in the agency-wide performance system. However, FDA has not yet implemented new performance measures or determined when its review would be completed. We continue to believe that performance measures that demonstrate the foreign offices’ contributions to long-term outcomes for the safety of imported food are important to provide information to help the agency track progress toward meeting its goals and to provide managers with crucial information on which to base funding decisions. We also recommended that FDA develop a strategic workforce plan for the foreign offices to help ensure that the agency is able to recruit and retain staff with the necessary experience and skills. FDA has taken some steps to address recruitment challenges, but the agency has not yet completed a strategic workforce plan. We continue to believe that a strategic workforce plan for the foreign offices is critical to FDA’s ability to address staffing challenges, especially given the number of vacancies abroad. There are other challenges affecting the foreign offices, such as problems obtaining visas for the China Office staff. However, a strategic workforce plan would provide FDA some assurance that it has placed the right people in the right positions at the right time and can carry out its mission to protect public health in an increasingly complex and globalized world. To help ensure the safety of food imported into the United States, we recommend that the Commissioner of Food and Drugs complete an analysis to determine the annual number of foreign food inspections that is sufficient to ensure comparable safety of imported and domestic food. If the inspection numbers from that evaluation are different from the inspection targets mandated in FSMA, FDA should report the results to Congress and recommend appropriate legislative changes. We provided a draft of this report to FDA for comment. In its written comments, which are reprinted in appendix III, FDA concurred with the recommendation, pending the necessary resources to conduct the analysis, as part of a larger FSMA-implementation strategy to improve the safety of imported food that will, among other things, reconsider the number of inspections conducted in other countries. FDA said that foreign inspections are an important part of FSMA, providing accountability for inspected foreign firms, incentives for them to comply with U.S. import requirements, and intelligence about foreign food safety practices. 
FDA added that foreign inspections will not, in themselves, ensure comparable safety of imported and domestic food, and the agency is expanding its collaborations with foreign governments to assist in ensuring the safety of imported food. As noted in its comments, FDA is optimistic that additional visas will be approved to expand its presence in China, which would help reduce the number of vacant staff positions that we cite in this report. FDA also provided technical comments that were incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Commissioner of Food and Drugs, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report responds to your request that we examine the progress the Food and Drug Administration’s (FDA) foreign offices have made since we last reported in 2010 in helping to ensure the safety of imported food. Our objectives were to examine (1) the activities the FDA foreign offices have engaged in since 2010 to help ensure the safety of imported food, (2) the extent of the foreign offices’ contributions to the safety of imported food, and (3) the extent to which FDA has engaged in workforce planning for its foreign offices. For the purposes of this report, imported food refers to food for human or animal consumption, unless otherwise specified. To examine the activities of FDA’s foreign offices, we reviewed and analyzed documents, including FDA reports to Congress mandated by the FDA Food Safety Modernization Act (FSMA) that describe the activities of all foreign offices; evaluated written answers to questions about the offices’ activities since the 2010 report; conducted structured interviews with FDA officials in the foreign offices; and analyzed counts of foreign food facility inspections for each year of the FSMA mandate. We questioned all FDA offices that reported conducting food safety activities at the time of our review. The Sub-Saharan Africa post was vacant and, therefore, was not included in the structured interview. Based on conversations with officials from the China Office and Latin America Office, the Chile post and Shanghai post were not included in the structured interview because the posts did not focus on food. As part of our questions, we asked the officials to identify their three top-priority activities. In addition, we analyzed food inspections conducted by the foreign offices compared with targets mandated in FSMA between 2011 and 2016. We cross-checked FDA’s foreign inspection numbers, as provided by the Office of Regulatory Affairs through its FACTS database, with information in FDA reports to Congress and additional information obtained during our site visits to locations in Beijing and Guangzhou, China, and Mexico City, Mexico. We selected those offices, in part, because they conducted food inspections.
Through this examination of the data and interviews with FDA officials who were knowledgeable about foreign food inspections, we determined that the inspection counts provided by the agency were sufficiently reliable for use in our review. To examine how FDA foreign offices have contributed to imported food safety, we reviewed and analyzed documents and data that described the outcomes of the foreign offices’ activities, including inspection reports and import alerts. We conducted structured interviews with FDA officials from the foreign offices, including the Asia-Pacific Office, China Office, Europe Office, India Office, and Latin America Office to determine the outcomes of the foreign offices’ activities. We also discussed the outcomes of the foreign offices with FDA’s Center for Food Safety and Applied Nutrition and Center for Veterinary Medicine. We analyzed performance planning and management planning documentation to determine the extent that FDA had performance measures that were outcome oriented and captured the activities of the foreign offices, based on leading practices that we have previously identified. We interviewed FDA officials in the Office of International Programs (OIP) and the Office of Strategic Planning and Analytics to understand how FDA is measuring the performance of the foreign offices. Through interviews with FDA officials knowledgeable about performance measures for the foreign offices, we determined that the performance measure data were sufficiently reliable for use in our review. To examine the extent to which FDA has engaged in workforce planning for its foreign offices, we reviewed workforce planning documents, including descriptions of recruitment and retention and learning and development initiatives, FDA’s 80-day hiring model and draft reintegration policy, and draft analyses from a contractor hired to develop a workforce plan for the foreign offices. We also reviewed leading practices for workforce planning that we have previously identified. We also analyzed staffing data from the foreign offices, and we interviewed officials from the OIP, the Office of Operations, the Office of Planning, and the Office of Human Resources. We cross-checked the staffing counts provided by the OIP with information we obtained during our site visits to locations in Beijing and Guangzhou, China, and Mexico City, Mexico. Through this examination of the data and interviews with FDA officials knowledgeable about staffing for the foreign offices, we determined that the data were sufficiently reliable for use in our review. In addition, to address all three objectives, we conducted an in-depth review of FDA operations in Canada, China, and Mexico. We selected these locations based on an analysis of the volume of food imports, the percentage of food imports refused at the border, and the number of food facility inspections for fiscal year 2013. We also considered the number of active import alerts (i.e., warnings about particular products, manufacturers, and countries based on FDA experience or information that triggers a more intensive inspection at the U.S. border). We visited FDA’s offices in Beijing and Guangzhou, China, and Mexico City, Mexico. We interviewed all FDA staff at those locations, as well as the regional director for the Latin America Office who was present in Mexico City during our visit. We also met with officials from U.S. 
government agencies in those locations, including the United States Department of Agriculture’s (USDA) Foreign Agricultural Service and USDA’s Animal and Plant Health Inspection Service, the Centers for Disease Control and Prevention, the Department of Commerce’s Foreign Commercial Service and the Department of State’s Environment, Science, Technology, and Health Officers. We also accompanied FDA foreign office staff on site visits to food facilities. During our visit to Mexico City, we visited the world’s largest greenhouse, which grows and packs hydroponic tomatoes and peppers for export to the United States. During our visit to Guangzhou, we visited a large facility that produces farm-raised seafood for export to the U.S. market. Additionally, we spoke with food safety regulatory authorities in Canada, China, and Mexico, including the Canadian Food Inspection Agency; the China Food and Drug Administration; the China Center for Food Safety Risk Assessment; the General Administration of Quality Supervision, Inspection, and Quarantine of the People’s Republic of China; the Guangdong Entry-Exit Inspection and Quarantine Bureau of the People’s Republic of China; the Mexico Federal Commission for the Protection against Sanitary Risk; and the Mexico National Service of Agro Alimentary Health, Safety and Quality. We conducted this performance audit from November 2013 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Food and Drug Administration’s (FDA) foreign offices comprise both U.S. government staff and locally employed staff who are non-U.S. citizens employed at U.S. missions abroad. Figure 7 below shows staff numbers for each foreign office position, as of October 2014. Figure 8 shows the approved and filled staff positions by foreign office, as of October 2014. Foreign office officials told us that locally employed staff provide valuable contributions toward the activities of the foreign offices. Locally employed staff speak the local language and help foreign office staff better understand local regulations. Foreign office officials said that the locally employed staff also are knowledgeable about FDA standards and inspection protocols and are helpful to FDA investigators. Table 3 shows the number of staff in each foreign office by location and position as of October 2014. In addition to the individual named above, Mary Denigan-Macauley (Assistant Director), Darnita Akers, Cheryl Arvidson, Kevin Bray, Michele Fejfar, Jennifer Gould, and Terrance Horner Jr. made key contributions to this report. Other contributors included Adam Cowles, Marcia Crosse, Elizabeth Curda, Joyce Evans, Kimberly Gianopoulos, Armetha Liles, Cynthia Norris, Ifunanya Nwokedi, and Geri Redican-Bigott.
FDA has responsibility for ensuring the safety and proper labeling of more than 80 percent of the U.S. food supply, including an increased volume of imported food. Beginning in 2008, FDA established foreign offices to help prevent unsafe products from reaching U.S. borders. In 2010, GAO examined FDA's foreign offices and found that they engaged in a variety of activities relating to food safety but faced challenges due to an increasing workload and other factors. GAO was asked to follow up on that report. This study examines (1) the activities FDA foreign offices have engaged in since 2010 to help ensure the safety of imported food, (2) the extent of the foreign offices' contributions to the safety of imported food, and (3) the extent to which FDA has engaged in workforce planning for its foreign offices. GAO reviewed documentation of foreign office activities and plans, visited offices in China and Mexico, and interviewed agency officials, foreign regulators, and other stakeholders. The Food and Drug Administration's (FDA) foreign offices have engaged in a variety of activities since 2010 to help ensure that imported food is safe. Foreign offices reported that building relationships with foreign counterparts and gathering and assessing information were among their top priorities. As directed by the FDA Food Safety Modernization Act (FSMA), foreign offices also inspected foreign food facilities. Under FSMA, FDA is to inspect at least 600 foreign food facilities in 2011 and, for each of the next 5 years, inspect at least twice the number of facilities inspected during the previous year. As shown in the figure below, FDA is not currently keeping pace with the FSMA mandate. FDA officials told GAO that they do not plan to meet the FSMA mandate because of funding, and they question the usefulness of conducting that many inspections. However, FDA has not conducted an analysis to determine whether the number of inspections in the FSMA mandate or the lower number of inspections it is conducting is sufficient to ensure comparable safety of imported and domestic food. Without such an analysis, FDA is not in a position to know what is a sufficient number of foreign inspections and, if appropriate, request a change in the mandate. FDA foreign offices cite their contributions to the safety of imported food, but the agency's performance measures do not fully capture these contributions. GAO recommended in 2010 that FDA develop performance measures that can be used to demonstrate the offices' contributions to imported food safety. This recommendation remains valid. FDA has initiated a review to determine how to better reflect the value of the foreign offices in the agency-wide performance system. Until the offices' contributions are captured, FDA will have less information to effectively measure their progress toward meeting agency goals. FDA has taken some steps to address recruitment challenges since GAO last reported, but it still does not have a strategic workforce plan. In 2010, GAO recommended that FDA develop such a plan for the foreign offices to help ensure that it recruits and retains staff with the necessary experience and skills. GAO continues to believe that such a plan for the foreign offices is critical to FDA's ability to address staffing challenges, especially since 44 percent of foreign office positions were vacant as of October 2014.
GAO recommends that FDA complete an analysis to determine the annual number of foreign food inspections that is sufficient to ensure comparable safety of imported and domestic food. FDA agreed with GAO's recommendation.
About 374,000 single, active-duty enlisted servicemembers are housed in the United States. Of this number, about 212,000 are permanently assigned to installations and live in barracks, about 96,000 receive a housing allowance and live off base in civilian communities near military installations, about 36,000 live on Navy ships, and about 30,000 live in barracks while in recruit or other short-term training. Most permanently assigned junior members living in barracks share a sleeping room and bath with one or two others. In many older barracks, everyone living on a hall or floor shares a communal bathroom, or central latrine. The Secretary of Defense is required to establish uniform barracks construction standards that define size limitations for newly constructed permanent barracks. Over the years, barracks construction standards have changed to provide for increased space and privacy. Prior to the 1970s, most permanent party barracks consisted of large, open-bay rooms with central latrines shared by many members. To meet the needs of the all-volunteer force, DOD adopted a new barracks standard in 1972. This standard provided a 270-square-foot room for three junior members, who also shared a bath. Citing the need to provide more space for all pay grades, DOD adopted a new construction standard in 1983. This standard, known as the 2+2 design, consisted of a module with two 180-net-square-foot sleeping rooms and a shared bath. With this design, two junior enlisted members normally would occupy each sleeping room, and four members would share a bath. The current 1+1 design standard provides a barracks module consisting of two private sleeping rooms, each with 118 net square feet, a bath, and a kitchenette. Two junior enlisted members in pay grades E-1 through E-4 are assigned to each module, with each member having a private sleeping room. Normally, enlisted members in pay grades E-5 and above are assigned the entire module, using one sleeping room as a living room. Citing concerns over unit cohesion and team building, the Marine Corps obtained a permanent waiver from the Secretary of the Navy from using the 1+1 design standard in its new barracks construction. The Marine Corps prefers to use a barracks standard known as the 2+0 design, which provides a 180-net-square-foot room with a bath. Normally, each room is assigned either two junior Marines in pay grades E-1 through E-3 or one Marine in pay grade E-4 or E-5. Because the design standards apply only to the construction of new barracks, the adequacy of existing barracks for housing members does not necessarily change when a new standard is adopted. DOD separately establishes minimum standards of acceptable space and privacy for members assigned to existing barracks. For example, the current minimum assignment standard for permanent party personnel in pay grades E-1 through E-4 is 90 square feet of net living area per person, not more than four persons to a room, and a central latrine. When this assignment standard cannot be met or when space is not available, installation commanders can authorize single members to live off base and receive a housing allowance. Regardless of the availability of adequate barracks space, senior personnel in pay grades E-7 through E-9 may elect to live off base and receive a housing allowance. With the exception of the Marine Corps, the services have embraced the 1+1 design standard and began building new and renovating older barracks in accordance with the standard in fiscal year 1996.
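The succession of design standards can be summarized in a short sketch showing net living space per junior member and the number of members sharing a bath under each standard. This is our illustration based on the figures above; the data structure and names are ours, not DOD's.

    # Our illustrative summary of the barracks design standards described
    # above, for junior enlisted members; figures come from this report.
    # net_sqft = net square feet per sleeping room; occupants = normal
    # junior-member assignment per room; per_bath = members per bath.

    standards = {
        "1972":        {"net_sqft": 270, "occupants": 3, "per_bath": 3},
        "1983 (2+2)":  {"net_sqft": 180, "occupants": 2, "per_bath": 4},
        "current 1+1": {"net_sqft": 118, "occupants": 1, "per_bath": 2},
        "USMC 2+0":    {"net_sqft": 180, "occupants": 2, "per_bath": 2},
    }

    for name, s in standards.items():
        print(f"{name}: {s['net_sqft'] / s['occupants']:.0f} net sq ft "
              f"per member, {s['per_bath']} members per bath")

As the output suggests, net space per junior member has stayed in the range of roughly 90 to 118 square feet across the standards; the main change has been privacy, with the 1+1 standard providing a private room and halving the number of members sharing a bath.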
As shown in table 1, through fiscal year 1999, about $1.5 billion in funding was approved for 124 barracks projects designed to provide over 29,000 barracks spaces meeting the 1+1 design standard. Except for the Marine Corps, each service has adopted a plan for improving its barracks and implementing the 1+1 standard. According to service officials, the plans generally call for (1) eliminating barracks with central latrines primarily through construction of new 1+1 barracks, (2) providing members with increased privacy and approximating the 1+1 standard in existing barracks by assigning one member to rooms originally designed for two members or two persons to rooms originally designed for three persons (a practice known as downloading), (3) constructing new 1+1 barracks to meet existing barracks shortages and to regain capacity lost when fewer members are assigned to existing rooms, and (4) replacing existing barracks at the end of their economic life with new 1+1 barracks. The services, as discussed below, estimated that an additional $7.4 billion would be required to implement their plans and approximate the 1+1 standard. The Marine Corps’ plan is similar to the other services’ plans except that it calls for implementation of the 2+0 barracks design standard in lieu of the 1+1 design. In its plan, the Army estimated that about $3 billion would be required through fiscal year 2008 to approximate the 1+1 standard for about 84,000 servicemembers in the United States in pay grades E-1 through E-6. When the Army meets this goal, about 38 percent of the Army’s barracks spaces will meet all requirements of the 1+1 standard. The balance of the spaces will consist of existing (1) private sleeping rooms that do not meet all requirements of the 1+1 standard and (2) multiperson rooms that have been downloaded. The Army’s barracks strategy also provides for improving the entire barracks community. As such, many Army barracks construction projects include construction of new company operations buildings, battalion and brigade headquarters buildings, soldier community buildings, and dining facilities. The Army is also developing a barracks master plan that will include an installation-by-installation assessment of barracks conditions and detailed plans for replacement or renovation to meet requirements of the 1+1 design standard. The master plan is to be completed by September 1999. The Army has approved no waivers to the 1+1 standard for barracks projects in the United States. In 1997, the Air Force completed a comprehensive barracks master plan that defines the Air Force’s long-range barracks investment strategy and lays out a road map for implementing the 1+1 standard. The Air Force’s strategy calls for providing private sleeping rooms for permanent party servicemembers in pay grades E-1 through E-4 by downloading existing 2+2 rooms and constructing new 1+1 rooms to regain the lost capacity. The strategy also calls for paying housing allowances for single members in pay grades E-5 and above to live off base. The Air Force estimated that about $750 million would be required through fiscal year 2009 to approximate the 1+1 standard for about 48,000 members in the United States in pay grades E-1 through E-4. The Air Force has approved no waivers to the 1+1 standard for barracks projects in the United States. The Navy estimated that about $2.9 billion would be required through fiscal year 2013 to approximate the 1+1 design standard worldwide.
The Navy’s strategy calls for (1) providing barracks space for about 36,000 permanent party, shore-based single servicemembers in pay grades E-1 through E-4 in the United States; (2) paying housing allowances to most members in pay grades E-5 and above to live off base; and (3) continuing to house the approximately 36,000 single members in pay grades E-1 through E-4 who are assigned to large ships on those ships, rather than in barracks, even when the ships are in their homeports. The Navy is developing a barracks master plan that will include an installation-by-installation assessment of barracks conditions and detailed plans for barracks replacement or renovation to meet requirements of the 1+1 design standard. The master plan is scheduled to be completed by April 1999. The Navy has approved waivers from using the 1+1 design standard for four projects in the United States, and one additional waiver request was pending. The waivers were granted because these installations could improve barracks conditions more quickly and for more members by building the projects using a lower and less costly standard. In addition, two of the projects were for barracks designed for Navy personnel assigned to Marine Corps installations. In these cases, the waiver justifications also stated that the barracks should use the Marine Corps 2+0 design standard to be compatible with other barracks at the installations. In July 1998, the Secretary of the Navy approved the Marine Corps’ request for a permanent waiver to allow the use of the 2+0 barracks design standard in lieu of the 1+1 design standard. The waiver request stated that Marine Corps junior members in pay grades E-1 through E-3 would live in two-person rooms and that private rooms would be provided for members in pay grades E-4 and above. Through fiscal year 1999, about $205 million was approved for 16 Marine Corps 2+0 barracks projects that will provide about 5,900 barracks spaces. The Marine Corps’ strategy calls for providing barracks space for permanent party single servicemembers in pay grades E-1 through E-5 and paying housing allowances for most members in pay grades E-6 and above to live off base. The Marine Corps estimated that about $725 million would be required through fiscal year 2022 to approximate the 2+0 standard worldwide. A Marine Corps official stated that a barracks master plan similar to the other services’ plans is under development. DOD primarily justified the adoption of the 1+1 barracks design standard in 1995 as an investment in quality of life aimed at improving readiness, retention, and motivation of a professional, all-volunteer armed force. In a December 1995 report to the House and Senate Committees on Appropriations, DOD stated that “savings in recruiting, training, and productivity will offset the quality-of-life investment. To what degree is impossible to say, but focusing only on the barracks cost would risk missing those savings.” DOD further stated that the new standard addressed the results of a 1992 triservice survey of barracks occupants at 12 installations. The survey showed that servicemembers were dissatisfied with the privacy and living space offered with the previous design standard and wanted larger rooms, private rooms, private baths, and more storage space. Hence, DOD concluded that continuing to build more of the same type of barracks would have been unwise.
According to DOD officials, adoption of the 1+1 standard also reflected an attempt to treat single servicemembers in a more equitable manner compared to married servicemembers who normally live in multiroom houses. More equitable treatment of single members in housing was a matter of concern expressed by the House Armed Services Committee in 1993. To illustrate, married members in pay grades E-1 through E-4 living on base normally are assigned to a house with at least 950 square feet, two bedrooms, a full kitchen, a family room, and one or one and a half baths. If available, housing with a separate bedroom for each dependent child is provided. In comparison, single members in pay grades E-1 through E-4 living on base in barracks designed under the standard in place prior to 1995 would live in a 180-square-foot room shared with another member and would share a bath with three other members.

We agree with DOD that the 1+1 design standard reduces the differences in housing for married and single members. We also agree that improved barracks enhance individual quality of life. However, to what extent is unknown because quality of life is inherently difficult to quantify. Quality of life is a complex issue reflected in a delicate mix of variables such as balancing personal life and the demands of military service, adequate pay and benefits, and many other factors. DOD officials stated that no quantitative measures directly link a single quality-of-life element, such as barracks quality, with readiness or retention. Without such data, there is little evidence to support DOD’s assumption that improved barracks will result in improved readiness and higher enlisted retention rates. Even with existing barracks conditions, the services have met most retention goals over the past 3 fiscal years. In particular, according to service officials, the large majority of barracks occupants are serving in their first term of enlistment, and, except in one instance, the services have achieved their first-term retention goals for fiscal years 1996-98. In the one instance, the Air Force missed its first-term retention goal by 1 percentage point in fiscal year 1998. Further, information collected from members who do not reenlist has shown that factors other than housing, such as pay and promotion opportunities, are usually cited as the reasons members leave the military.

We also noted that the 1992 triservice barracks survey, cited as part of the justification for the 1+1 standard, was somewhat limited in scope. The survey began in October 1991 when the Air Force collected information from four installations and was expanded in March and April 1992 to include three Army, three Navy, and two Marine Corps installations. Although the survey showed that about 2,200 Army, Navy, and Marine Corps barracks occupants participated in the voluntary survey, documentation was not clear on how many Air Force members participated or how the survey participants were selected. The survey included 96 questions, and participants were asked to respond to many questions on a scale of “very satisfied” to “very dissatisfied” or “very important” to “not at all important.” The survey also included some interesting results that DOD has not usually cited. For example, 84 percent of the participants reported that they preferred to receive a housing allowance and live off base rather than live in the barracks.
The preference to live off base could continue regardless of the type or quality of barracks provided and thereby result in members’ continued dissatisfaction with the barracks. Also, when participants were asked how satisfied or dissatisfied they were with their barracks or dormitory room, 53 percent responded that they were dissatisfied (34 percent) or very dissatisfied (19 percent). At the same time, only 46 percent responded to a similar question that they were dissatisfied or very dissatisfied with living on the installation. Although these numbers show that about half of the respondents were dissatisfied with the barracks, the other half reported that they were not dissatisfied with their housing. Finally, when asked what one improvement in the barracks or dormitory would most increase retention of enlisted personnel, the most mentioned improvement, cited by 35 percent of the respondents, was fewer rules and restrictions for barracks occupants and freedom from command inspections. A private room was the second most mentioned improvement, cited by 24 percent of the respondents.

We compared the costs of constructing barracks using the 1+1 design standard to the costs of constructing barracks using other design standards, specifically the 2+0 design used by the Marine Corps and the 2+2 design that was the previous barracks design standard. The comparison showed significant cost differences among the designs. For example, the estimated cost to construct a single barracks space using the 1+1 design standard for a member in pay grades E-1 through E-4 was about $63,000. The comparable construction cost using the 2+0 design standard was about $41,000. Using the 2+2 design standard, the comparable cost was about $38,000 for each barracks space. The designs have different costs primarily because of differences in each design’s maximum building area per occupant. For example, the maximum gross building area for each junior member occupant is 355, 229, and 213 square feet for the 1+1, 2+0, and 2+2 designs, respectively. Table 2 shows the cost per occupant for each of the designs. Costs are higher for members in pay grades E-5 and above because barracks assignment policies normally provide these members with double the space provided to junior members.

We also estimated the total additional cost for the services to fully implement each of the three design standards. Specifically, using the cost estimates for each design and the services’ estimates of barracks requirements and configuration after the completion of projects funded through fiscal year 1999, we estimated the additional funds required to provide all planned barracks occupants with spaces that comply with each of the standards. Table 3 summarizes our estimates. We included the Marine Corps in our calculations, even though its current plan is to implement the 2+0 standard in lieu of the 1+1 standard. The total additional cost to fully implement the 1+1 standard in the Army, the Navy, and the Air Force and the 2+0 standard in the Marine Corps, as currently planned, is about $10.9 billion. In comparison, if all services used the 2+0 design standard, they would need about $3.1 billion to fully implement the standard—or about $7.8 billion less than the current plan; and if all services used the 2+2 standard, they would need about $1.7 billion to fully implement the standard—or about $9.2 billion less than the current plan.
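To make the arithmetic behind these comparisons concrete, the following sketch works through the per-space cost computation. It is a minimal illustration, not our estimating model: the junior-member per-space costs are the figures reported above, but the assumption that spaces for members in pay grades E-5 and above cost roughly double (because those members receive double the space) and the space counts used in the example are hypothetical stand-ins rather than service data.

```python
# Illustrative sketch of the per-space cost comparison discussed above.
# Junior-member (E-1 through E-4) costs per barracks space are from the
# report; everything else is a hypothetical input.

COST_PER_JUNIOR_SPACE = {"1+1": 63_000, "2+0": 41_000, "2+2": 38_000}

def additional_cost(design, junior_spaces, senior_spaces):
    """Estimate the funds needed to build the remaining spaces under one design."""
    junior_cost = COST_PER_JUNIOR_SPACE[design]
    # Assumption: E-5 and above get double the space, so roughly double the cost.
    senior_cost = 2 * junior_cost
    return junior_cost * junior_spaces + senior_cost * senior_spaces

# Example with made-up space requirements. (The report's service-wide
# estimates, which also account for the existing compliant inventory, put
# the current plan at about $10.9 billion versus about $3.1 billion if all
# services used 2+0 and about $1.7 billion if all used 2+2.)
for design in COST_PER_JUNIOR_SPACE:
    total = additional_cost(design, junior_spaces=100_000, senior_spaces=20_000)
    print(design, f"${total:,}")
```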
Although DOD officials agreed that costs associated with the 1+1 design are significantly higher, they stated that the less costly designs do not relieve their concerns about improving quality of life. Army, Navy, and Air Force officials stated that the reasons for initially adopting the 1+1 design—to improve quality of life and provide more equity in housing for single and married members—continue to be valid. In addition, they noted that a considerable investment, about $1.5 billion, has already been made in implementing the 1+1 standard and that changing the standard would result in inequities in the barracks inventory. Further, the officials expressed concern that abandoning the 1+1 design and its improvements could be perceived by members as a promise not kept and consequently have an adverse impact on morale.

Marine Corps officials stated that the higher cost of the 1+1 design was a concern to them. For 2 years, the Marine Corps obtained a waiver allowing use of the 2+0 design on the basis that it could improve barracks conditions faster by using the less costly design. The Marine Corps also sees an additional drawback to the 1+1 standard. Specifically, because of the increased isolation provided in private sleeping rooms, the Marine Corps believes that the 1+1 standard does not allow for the unit cohesion and team building needed to reinforce Corps values and develop a stronger bond among junior Marines. It was for this reason that the Marine Corps obtained a permanent waiver from using the 1+1 design for Marines in pay grades E-1 through E-3.

Army, Navy, and Air Force officials stated that they do not see any negative aspects to the 1+1 standard from an individual isolation or team-building perspective. They stated that the standard is used only for permanent party personnel, not for recruits or initial trainees; whenever possible, members of the same unit are assigned to the same barracks or area so that unit integrity is maintained; and barracks occupants continue to have adequate interaction with other occupants. These officials also noted that the Marine Corps’ first-term retention goals are significantly lower than the goals of the other services. As a result, they believed that the potential benefits from improved quality of life provided by private sleeping rooms outweighed any potential drawbacks from increased isolation in private rooms.

Although the 1+1 barracks standard improves the quality of life for single servicemembers and to some degree addresses housing differences between single and married members, DOD has no quantifiable evidence that barracks improvements result in improved readiness and retention. Implementing the 2+0 or 2+2 design standard in lieu of the 1+1 standard would be significantly less costly to the military; however, the less costly designs do not alleviate DOD’s concerns about improving servicemembers’ quality of life. Whether the 1+1 standard has drawbacks from an individual isolation or team-building standpoint appears to be a matter of military judgment that varies depending on each service’s culture, mission, and goals. Ultimately, the barracks design standard decision is a qualitative policy decision.

In written comments on a draft of this report, DOD affirmed its commitment to providing quality housing for single members, stating that improved quality of life is a critical component to attracting and retaining high quality personnel.
While recognizing our assessment that measuring the impact of improved barracks on individual quality of life, retention, and readiness is inherently difficult, DOD maintained that providing more privacy and amenities in the barracks is important in order to address concerns raised by single servicemembers. DOD stated it has no precise measures linking barracks improvements to retention and readiness because (1) few 1+1 barracks have been completed, which limits the availability of data for analysis, and (2) the quality of home life is just one of many factors affecting individuals’ quality of life, and individuals’ quality of life is just one of many factors affecting readiness.

DOD commented that in discussing the reasons that DOD adopted the 1+1 standard, we should have mentioned a May 1995 Air Force quality-of-life survey. This survey reported that barracks occupants cited privacy as their number one concern. We have added to our report a reference to the Air Force survey. We had considered this survey during our review but did not originally mention it because (1) its key barracks-related finding of privacy was the same as the key finding from the 1992 triservice survey, which we do discuss, and (2) DOD officials more frequently cited the 1992 triservice survey results as documentation of servicemembers’ dissatisfaction with their barracks.

DOD commented that although the 1992 triservice survey found that the majority of the survey participants preferred to live off base, on base housing is needed to maintain good order and discipline. Our point, as stated in the report, is that the preference to live off base may continue regardless of the type or quality of barracks that are provided. Unfortunately, reliable, quantitative data are not available to show what impact improved barracks will have on members’ perceptions of their quality of life and ultimately on members’ decisions to stay in the military.

DOD questioned our analysis of costs that would be incurred if the Marine Corps’ 2+0 barracks standard were adopted by all services. DOD stated that we failed to consider the costs of additional baths that would be required if existing 2+2 barracks were converted to 2+0 use. DOD’s contention is not accurate. In our analysis, we assumed that existing 2+2 barracks would be downloaded by assigning only one member to each of the two bedrooms that share a bath. With this configuration, more net square footage would be provided to each member than required under the 2+0 standard and no additional baths would be required.

DOD commented that some of our cost estimates were misleading because we did not consider the cost of modernizing and renovating existing barracks if a barracks standard other than the 1+1 standard were adopted. We disagree. Regardless of which barracks design standard is used, barracks wear out and eventually require repair, modernization, and renovation. For this reason, our analysis considered only costs to fully implement the three barracks design standards.

Finally, DOD commented that our analysis of costs for full implementation of the 1+1 barracks design is not based on any DOD or service plan. As such, DOD stated that our analysis failed to consider that the services plan to replace existing barracks only after they reach the end of their useful life. In describing the services’ plans, our report notes that new barracks will be constructed, when required, to replace barracks at the end of their economic life.
We did not intend to suggest that existing barracks should be abandoned and new 1+1 barracks should be immediately constructed. Rather, our analysis is intended to estimate the costs for the Army, the Air Force, and the Navy to fully implement the 1+1 standard over time, which represents the current plans of these services. DOD also provided some technical comments, which we have incorporated as appropriate.

We are sending copies of this report to Senator Robert C. Byrd, Senator Carl Levin, Senator Ted Stevens, Senator John W. Warner, and to Representative David R. Obey, Representative Ike Skelton, Representative Floyd D. Spence, and Representative C.W. Bill Young, in their capacities as Chair or Ranking Minority Member of Senate and House Committees. We are also sending copies of this report to the Honorable William Cohen, Secretary of Defense; the Honorable Louis Caldera, Secretary of the Army; the Honorable Richard Danzig, Secretary of the Navy; the Commandant of the Marine Corps, General Charles C. Krulak; and the Honorable F.W. Peters, Acting Secretary of the Air Force. Copies will also be made available to others upon request.

Please contact me at (202) 512-5140 if you or your staff have any questions on this report. Major contributors to this report are listed in appendix IV.

1+1 design: a module with 2 private sleeping rooms, 2 closets, 1 bath, and 1 kitchenette. Assignment: E1-E4 (E1-E3 Marines), 1 member per sleeping room with 2 members sharing a bath; E5-E6 (E4-E5 Marines), 1 member per module (2 sleeping rooms). [The source tables also give the maximum building area per module and per room and the net living area in the sleeping room, total and per member.]

2+2 design: a module with 2 sleeping rooms and 2 baths, normally no kitchenette; each room has 2 closets and 1 bath. Assignment: E1-E4 (E1-E3 Marines), 2 members per sleeping room with 2 members sharing a bath; E5-E6 (E4-E5 Marines), 1 member per sleeping room. [The source tables give the same area measures for this design.]

As requested, we reviewed the Department of Defense’s (DOD) barracks program in the United States to (1) determine the status of the services’ implementation of the 1+1 barracks design standard; (2) document DOD’s rationale for adopting the standard; (3) determine the costs of alternatives to the 1+1 standard; and (4) obtain service views of the impact of the standard from a team-building, individual isolation, or similar perspective. Our review focused on military barracks used to house permanent party enlisted personnel in the United States. We performed our work at the Office of the Secretary of Defense and the headquarters of each military service. We interviewed responsible agency personnel and reviewed applicable policies, procedures, and documents. We also visited one installation of each service to observe barracks designs and conditions and to talk with barracks managers and occupants. We visited the following installations, as recommended by the respective service headquarters: Fort Lewis, Washington; Cheatham Annex Fleet Industrial Supply Center, Virginia; Edwards Air Force Base, California; and Marine Corps Air Station, Beaufort, South Carolina.

To determine the status of each service’s barracks program, we obtained and reviewed information on barracks policies, requirements, inventory, and condition of the inventory.
We also reviewed each service’s plans and cost estimates for improving the barracks, including plans for implementing the 1+1 design standard. We reviewed the status of military construction barracks projects for fiscal years 1996-99, and for all 1+1 projects, we summarized the costs incurred and number of barracks spaces provided.

To document DOD’s rationale for adopting the 1+1 barracks design standard, we reviewed (1) changes to barracks design standards since 1970, (2) DOD and service documentation describing the process that resulted in adoption of the 1+1 design standard, (3) previous DOD reports discussing the rationale for the 1+1 design, and (4) the results from the 1992 triservice survey of barracks occupants. We also obtained and reviewed available information on servicemembers’ quality of life and reviewed retention statistics since fiscal year 1996.

To determine the costs of alternatives to the 1+1 standard, we analyzed the services’ cost information on constructing military barracks using the 1+1, 2+0, and 2+2 design standards. We used this information to develop estimates of the cost to construct a barracks space in accordance with each of these standards. Using these cost estimates, data on the existing barracks inventory and approved barracks construction projects, and service estimates of barracks requirements, we also estimated and compared the costs for each service to fully implement each of the three design standards. In addition, we obtained the views of service representatives on the use of barracks designs other than the 1+1 design.

To obtain service views of the impact of the standard from an individual isolation, team-building, or similar perspective, we (1) reviewed documentation describing the process resulting in adoption of the 1+1 standard to determine whether any negative aspects of the design had been identified and evaluated, (2) reviewed the justifications supporting all service requests for waivers from using the 1+1 design standard, and (3) obtained opinions on the matter from service representatives. We conducted our review between July 1998 and January 1999 in accordance with generally accepted government auditing standards.

Major contributors: Gary Phillips, Evaluator in Charge; James Ellis, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) barracks program in the United States, focusing on: (1) the status of the services' implementation of the 1 plus 1 barracks design standard, which calls for more space and increased privacy in new barracks; (2) DOD's rationale for adopting the standard; (3) the costs of alternatives to the 1 plus 1 standard; and (4) service views of the impact of the standard from a team-building, individual isolation, or similar perspective. GAO noted that: (1) except for the Marine Corps, the services embraced the 1 plus 1 barracks design standard and in fiscal year (FY) 1996 began building new and renovating older barracks to conform to the new standard; (2) in fiscal years 1996-99, about $1.5 billion in funding was approved for 124 military construction projects designed to provide over 29,000 barracks spaces meeting the 1 plus 1 design standard; (3) also, to provide increased privacy in existing barracks over a phased time period, the Army, the Navy, and the Air Force plan to assign one member to existing rooms designed for two members and two members to existing rooms designed for three members; (4) when required, the barracks capacity lost through this practice will be regained through construction of new 1 plus 1 barracks; (5) in lieu of the 1 plus 1 design, the Marine Corps is building new barracks with two-person sleeping rooms for junior Marines; (6) DOD justified the adoption of the 1 plus 1 standard primarily as an investment in quality of life aimed at improving military readiness and retention; (7) although barracks improvements do enhance individuals' quality of life, to what degree is unknown because quality of life is inherently difficult to quantify; (8) DOD has not developed any direct, quantitative evidence showing that barracks improvements, as distinct from other factors, result in improved readiness and retention; (9) even with existing barracks conditions, the services have achieved their first-term retention goals for the past 3 fiscal years with only one exception; (10) in FY 1998, the Air Force missed its first-term retention goal by one percentage point; (11) information collected from members who do not reenlist has shown that many factors other than housing, such as pay and promotion opportunities, are usually cited as the reasons for leaving the military; (12) GAO's comparison of barracks construction costs associated with alternative design standards showed significant differences in the amount of funds that would be required over and above what has already been funded; (13) because of the isolation provided in private rooms, the Marine Corps believes the 1 plus 1 standard does not allow for the unit cohesion and team building needed to reinforce Marine Corps values and develop a stronger bond among junior Marines; and (14) the other services believe that the 1 plus 1 standard does not include these negative aspects because the standard applies only to permanent party personnel, not to recruits or initial trainees.
SAFETEA-LU authorized over $52 billion for federal transit programs, including the New Starts and JARC programs, from fiscal year 2005 through fiscal year 2009. SAFETEA-LU authorized $7.9 billion for the New Starts program and $727 million for the JARC program. Both of these programs are managed by FTA.

The New Starts program is a discretionary grant program for investments in new fixed-guideway projects. Under the statutorily defined evaluation process for the New Starts program, FTA identifies and selects fixed-guideway transit projects—including heavy, light, and commuter rail; ferry; and busway projects—for funding. FTA generally funds New Starts projects through full funding grant agreements (FFGA), which establish the terms and conditions for federal participation in a New Starts project and also define a project’s scope, including the length of the system and the number of stations; the project’s schedule, including the date when the system is expected to open for service; and the project’s cost. To obtain an FFGA, a project must progress through a local or regional review of alternatives and meet a number of federal requirements, including providing information for the New Starts evaluation and rating process.

As required by SAFETEA-LU, New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different corridor-level options, such as rail lines or bus routes. The alternatives analysis phase results in the selection of a locally preferred alternative—which is intended to be the New Starts project that FTA evaluates, as required by statute. After a locally preferred alternative is selected, project sponsors seek FTA’s approval for entry into the preliminary engineering phase. Following completion of preliminary engineering and federal environmental requirements—and assuming New Starts requirements continue to be met—FTA may approve the project’s advancement into final design, after which FTA may approve the project for an FFGA and proceed to construction, as provided for in statute. FTA oversees grantees’ management of projects from the preliminary engineering phase through construction and evaluates the projects for advancement into each phase of the process, as well as annually for the New Starts report to Congress.

To help inform administration and congressional decisions about which projects should receive federal funds, FTA assigns ratings based on a variety of financial and project justification criteria, and then assigns an overall rating. For the fiscal year 2007 evaluation cycle, FTA used the financial and project justification criteria identified in TEA-21. These criteria reflect a broad range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system (see fig. 1). FTA assigns the proposed project a rating for each criterion, then assigns a summary rating for local financial commitment and project justification. Finally, FTA develops an overall project rating.
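To make the structure of this rating rollup concrete, the sketch below shows one way such a process could be computed. It is illustrative only: FTA's actual rating scale, weights, and combination rules are not specified in this statement, so the five-point scale, the simple averaging, the take-the-weaker-summary rule, and the criterion names are all hypothetical stand-ins.

```python
# Hypothetical sketch of a criterion-to-overall rating rollup; FTA's real
# scale, weights, and rollup rules may differ.

RATING_SCALE = {"low": 1, "medium-low": 2, "medium": 3, "medium-high": 4, "high": 5}

def summary_rating(criterion_ratings):
    """Average individual criterion ratings into one summary score."""
    scores = [RATING_SCALE[r] for r in criterion_ratings.values()]
    return sum(scores) / len(scores)

def overall_rating(project_justification, local_financial_commitment):
    """Assume a project must do reasonably well on both summary ratings,
    so take the weaker of the two as the overall score."""
    return min(summary_rating(project_justification),
               summary_rating(local_financial_commitment))

# Example with made-up criterion ratings for a hypothetical project.
justification = {"mobility improvements": "medium-high",
                 "cost-effectiveness": "medium",
                 "land use": "medium"}
finance = {"capital funding plan": "high",
           "operating funding plan": "medium-high"}
print(overall_rating(justification, finance))  # about 3.33, i.e., roughly "medium"
```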
Projects are rated at several points during the New Starts process—as part of the evaluation for entry into preliminary engineering and final design, and yearly for inclusion in the New Starts annual report that is submitted to Congress.

A more recent program than New Starts, JARC was created in 1998 in order to support the nation’s welfare reform goals. Without adequate transportation, welfare recipients face significant barriers in moving from welfare to work. In 1998, we reported that three-fourths of welfare recipients live in central cities or rural areas, while two-thirds of new entry-level jobs are located in suburbs. Public transportation facilities often offer limited or no access to many of these jobs. JARC, which is administered by FTA, was designed to fill these gaps in transportation services for low-income individuals. JARC is intended to increase collaboration among transit agencies, local human service agencies, nonprofit organizations, and others and to improve the mobility of low-income individuals seeking work. Programs selected to receive grants—including the expansion of public transportation routes, ridesharing activities, and promotion of transit voucher programs—are designed to assist low-income individuals in accessing employment opportunities and related services, such as child care and training.

SAFETEA-LU made changes to the New Starts program that range from identifying new evaluation criteria to establishing the Small Starts program. FTA has taken some initial steps in implementing these changes, including issuing an ANPRM for the Small Starts program and guidance for the New Starts program, both in January 2006. The Small Starts program is a new component of the New Starts program and is intended to expedite and streamline the application and review process for small projects. The transit community, however, questioned whether the program, as outlined in the ANPRM, would streamline the process. In its January 2006 guidance, FTA also identified and sought public input on possible changes to the New Starts program that would affect traditional New Starts projects, or large starts, such as revising the evaluation process to incorporate the new evaluation criteria identified by SAFETEA-LU. FTA also identified possible implementation challenges, including how to distinguish between land use and economic development criteria in the evaluation framework.

SAFETEA-LU introduced eight changes to the New Starts program, codified an existing practice, and clarified federal funding requirements. These changes range from the creation of the Small Starts program to the introduction of new evaluation criteria. For example, SAFETEA-LU added economic development to the list of criteria that FTA must use in the New Starts evaluation process. In addition, SAFETEA-LU codified FTA’s requirement that project sponsors conduct before and after studies for all completed projects. SAFETEA-LU also clarified the federal share requirements for New Starts projects. In particular, SAFETEA-LU states that the federal share for a New Starts project may be up to 80 percent of the project’s net capital project cost, unless the project sponsor requests a lower amount. SAFETEA-LU also prohibits the Secretary of Transportation from requiring a nonfederal share of more than 20 percent of the project’s total net capital cost. This language addresses FTA’s policy of favoring projects that seek a federal New Starts share of no more than 60 percent of the total cost.
FTA instituted this policy beginning with the fiscal year 2004 evaluation cycle in response to language contained in appropriation committee reports. Table 1 describes SAFETEA-LU provisions for the New Starts program and compares them to TEA-21’s requirements.

FTA has taken some initial steps in implementing SAFETEA-LU changes. For example, in January 2006, FTA published the New Starts policy guidance and, as will be discussed later, the ANPRM for the Small Starts program. FTA will continue to implement the changes outlined in SAFETEA-LU through the rulemaking process over the next year and a half. Specifically, in response to SAFETEA-LU changes, FTA is developing a Notice of Proposed Rulemaking (NPRM) for the New Starts and Small Starts programs. FTA plans to issue the NPRM in January 2007, with the goal of implementing the final rule in January 2008. Figure 2 shows a time line of FTA’s actual and planned implementation of SAFETEA-LU changes.

A significant SAFETEA-LU change was the creation of the Small Starts program. The Small Starts program is a discretionary grant program for public transportation capital projects that (1) are corridor-based, (2) have a total cost of less than $250 million, and (3) are seeking less than $75 million in federal Small Starts program funding. The Small Starts program is a component of the existing New Starts program but, according to the conference reports accompanying SAFETEA-LU, is intended to provide project sponsors with an expedited and streamlined evaluation and rating process. Table 2 compares New Starts and Small Starts program requirements.

In January 2006, FTA published an ANPRM to give interested parties an opportunity to comment on the characteristics of and requirements for the Small Starts program. In its ANPRM, FTA suggests that the planning and project development process for proposed Small Starts projects could be simplified by allowing analyses of fewer alternatives for small projects, allowing the development of evaluation measures for mobility and cost-effectiveness without the use of complicated travel demand modeling procedures in some cases, and possibly defining some classes of pre-approved low-cost improvements as effective and cost-effective in certain contexts. FTA also sought the transit community’s input on three key issues in its ANPRM: eligibility, the rating and evaluation process, and the project development process. For each of these issues, FTA outlined different options for how to proceed and then posed a series of questions for public comment, including the following questions on the rating and evaluation process: How should the evaluation framework for New Starts be changed or adapted for Small Starts projects? How might FTA evaluate economic development and land use as distinct and separate measures? How might FTA incorporate risk and uncertainty into project evaluations for Small Starts? What weights should FTA apply to each measure?

FTA’s ANPRM for Small Starts generated a significant volume of public comment. While members of the transit community were supportive of some proposals for the Small Starts program, they also had a number of concerns. In particular, the transit community questioned whether FTA’s proposals would, as intended, provide smaller projects with a more streamlined evaluation and rating process. As a result, some commenters recommended that FTA simplify some of its original proposals in the final NPRM to reflect the smaller scope of these projects.
For example, several project sponsors and industry representatives thought that FTA should redefine the baseline alternative as the “no-build” option and make the before and after study optional for Small Starts projects to limit the time and cost of project development. In addition, others were concerned that FTA’s proposals minimized the importance of the new land use and economic development evaluation criteria introduced by SAFETEA-LU, and they recommended that the measures for land use and economic development be revised.

Since FTA does not plan to issue its final rule for the New Starts and Small Starts programs until early 2008, FTA issued proposed interim guidance for the Small Starts program in June 2006 to ensure that project sponsors would have an opportunity to apply for Small Starts funding and be evaluated in the upcoming cycle (i.e., the fiscal year 2008 evaluation cycle, which begins in August 2006). The proposed interim guidance describes the process that FTA will use to evaluate proposed Small Starts projects to support the decision to approve or disapprove their advancement to project development and the decision to recommend projects for funding, including whether proposed projects are part of a broader strategy to reduce congestion in particular regions.

In addition, although not required by SAFETEA-LU, FTA introduced a separate eligibility category within the Small Starts program for “Very Small Starts” projects in the proposed interim guidance. Small Starts projects that qualify as Very Small Starts are projects that (1) do not include the construction of a new fixed guideway; (2) are in corridors with existing riders who will benefit from the proposed project and number more than 3,000 on an average weekday, including at least 1,000 riders who board at the terminal stations; and (3) have a total capital cost of less than $50 million and less than $3 million per mile (excluding rolling stock).

According to the proposed interim guidance on the Small Starts program, FTA intends to scale the planning and project development process to the size and complexity of the proposed projects. Therefore, Very Small Starts projects will undergo a very simple and streamlined evaluation and rating process. Small Starts projects that do not meet all three criteria for Very Small Starts projects will be evaluated and rated using a framework similar to that used for traditional, or large starts, New Starts projects. However, FTA officials have indicated that this evaluation and rating framework would be modified, for example, to include only those criteria listed in the statute.

FTA is seeking public input on the Small Starts proposals contained in the proposed interim guidance through July 9, 2006. FTA plans to review the comments received and issue its final interim guidance for the Small Starts program by August 2006. This guidance will govern the program until the final rule is issued.
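Since the Small Starts and Very Small Starts definitions reduce to a handful of numeric thresholds, a short sketch can restate them compactly. The cutoffs below are those described above (from the statute and the proposed interim guidance); the function and parameter names are hypothetical, and this summarizes the thresholds only, not FTA's evaluation logic.

```python
# Eligibility thresholds for Small Starts and the Very Small Starts tier,
# as summarized in this statement. Names are hypothetical.

def is_small_start(corridor_based, total_cost, federal_share_requested):
    """Small Starts: corridor-based, under $250 million total cost, and
    seeking under $75 million in federal Small Starts funding."""
    return (corridor_based
            and total_cost < 250_000_000
            and federal_share_requested < 75_000_000)

def is_very_small_start(builds_new_fixed_guideway, weekday_riders,
                        terminal_station_riders, total_cost, cost_per_mile):
    """Very Small Starts tier within Small Starts, per the proposed
    interim guidance (cost per mile excludes rolling stock)."""
    return (not builds_new_fixed_guideway
            and weekday_riders > 3_000
            and terminal_station_riders >= 1_000
            and total_cost < 50_000_000
            and cost_per_mile < 3_000_000)
```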
In response to SAFETEA-LU, FTA identified possible changes to the New Starts program that would affect traditional New Starts projects, or large starts, in its January 2006 guidance. According to FTA, some of SAFETEA-LU’s provisions could lead to changes in the definition of eligibility, the evaluation and rating process, and the project development process. In the guidance, FTA outlines changes it is considering and solicits public input, through a series of questions, on the potential changes.

For example, FTA identified two options for revising the evaluation and rating process to reflect SAFETEA-LU’s changes to the evaluation criteria. The first option would extend the current process to include economic development impacts and the reliability of cost and ridership forecasts. Specifically, FTA suggested that economic development impacts and the reliability of forecasts simply be added to the list of criteria considered in developing the project justification rating. The second option would be to develop a broader process to include the evaluation criteria identified by SAFETEA-LU and to organize the measures to support a more analytical discussion of the project and its merits. According to FTA, the second option would broaden the evaluation process beyond a computation of overall ratings based on individual evaluation measures and develop better insights into the merit of a project than are possible from using the quantified evaluation measures alone. (See app. I for a description of the different changes FTA is considering.)

FTA also identified potential challenges in implementing some of SAFETEA-LU’s changes in its guidance. In particular, FTA described the challenges of incorporating and distinguishing between two measures of indirect benefits in the New Starts evaluation process—land use and economic development impacts. For example, FTA noted that its current land use measures (e.g., land use plans and policies) indicate the transit-friendliness of a project corridor both now and in the future, but they do not measure the benefits generated by the proposed project. Rather, they describe the degree to which the project corridor provides an environment in which the proposed project can succeed. According to FTA’s guidance, FTA’s evaluation of land use does not include economic development benefits because FTA has not been able to find reliable methods of predicting these benefits. FTA further stated that because SAFETEA-LU introduces a separate economic development criterion, the potential role for land use as a measure of development benefits becomes even less clear given its potential overlap with the economic development criterion. In addition, FTA noted that many economic development benefits result from direct benefits (e.g., travel time savings), and therefore, including them in the evaluation could lead to double counting the benefits FTA already measures and uses to evaluate projects. Furthermore, FTA noted that some economic development impacts may represent transfers between regions rather than a net benefit for the nation, raising questions as to whether these impacts are useful for a national comparison of projects. To address some of the challenges, FTA suggested that an appropriate strategy might be to combine land use and economic development into a single measure.

We have also reported on many of the same challenges of measuring and forecasting indirect benefits, such as economic development and land use impacts. For example, we noted that it is challenging to predict changes in land use because current transportation demand models are unable to predict the effect of a transportation investment on land use patterns and development, since these models use land use forecasts as inputs. In addition, we noted that certain benefits are often double counted when evaluating transportation projects.
In particular, indirect benefits, such as economic development, may be more correctly considered transfers of direct user benefits or economic activity from one area to another. Therefore, estimating and adding such benefits to direct benefits could constitute double counting and lead to overestimating a project’s benefits. Despite these challenges, experts told us that evaluating land use and economic development impacts is important, since they often drive local transportation investment choices.

FTA received a large number of written comments on its online docket in response to its proposed changes. (See app. I for common comments submitted for each proposed change.) While members of the transit community were supportive of some proposals, they expressed concerns about a number of FTA’s proposed changes. For example, a number of commenters expressed concerns about FTA’s options for revising the evaluation process, noting that both proposals deemphasized the importance of economic development and land use. Some commenters also noted that land use and economic development should not be combined into a single measure and that they should receive the same weight as cost-effectiveness in the evaluation and rating process.

SAFETEA-LU made a number of changes to the JARC program, the most notable of which was the creation of a formula to distribute JARC funds. Whereas funds for JARC projects were congressionally designated in recent years, SAFETEA-LU’s formula distributes funds to states and large urbanized areas. This is a significant change because some states and urbanized areas will receive substantially more funds than under the discretionary program, while others will receive substantially less. In addition, the formula program will result in some areas receiving JARC funds that had not received them in the past. Other JARC changes resulting from SAFETEA-LU include the ability to use a portion of JARC funds for planning activities and the removal of a restriction on the JARC funding available for reverse commute projects, which are designed to help individuals in urban areas access suburban employment opportunities. FTA has worked to develop guidance to help JARC recipients implement these changes by soliciting comments and input through program notices and listening sessions beginning in November 2005. FTA issued interim JARC guidance in March 2006 and is currently working to develop draft final guidance for the program. Final guidance for JARC is expected later this year. Two potential challenges for FTA as it moves forward will be to issue final JARC guidance in a timely manner and to determine its plan for oversight of the JARC program.

A key SAFETEA-LU change to the JARC program was the creation of a formula to distribute JARC funds. Under TEA-21, JARC was a discretionary grant program for which FTA competitively selected JARC projects and, more recently, awarded funds for congressionally designated projects. Under SAFETEA-LU, states and large urbanized areas have been apportioned funding for JARC projects through a formula based on the number of low-income individuals and welfare recipients in each area. This is a significant change because some states and urbanized areas will receive substantially more funds than under the discretionary program, while others will receive substantially less. In addition, the formula program will result in some areas receiving JARC funds that had not received them in the past.
Forty percent of JARC funds each year are required to be apportioned among states for projects in small urbanized and other than urbanized areas, and the remaining 60 percent are required to be apportioned among urbanized areas with a population of 200,000 or more. The governor of each state must designate a recipient for JARC funds at the state level to competitively select and award funds for projects in small urbanized and other than urbanized areas within the state. In large urbanized areas, the recipient must be designated by the governor, local officials, and publicly owned operators of public transportation.
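A small sketch may help show how this 40/60 split could play out. The 40 percent/60 percent division is as described above; the assumption that each pool is divided in proportion to an area's count of low-income individuals and welfare recipients is a simplification for illustration, since the statute's exact formula details are not reproduced in this statement, and all names and counts below are made up.

```python
# Hedged sketch of the JARC formula apportionment: 40% of annual funds to
# states (for small urbanized and other than urbanized areas), 60% to large
# urbanized areas (population 200,000+). Proportional division by counts of
# low-income individuals and welfare recipients is an illustrative assumption.

def apportion(total_funds, state_counts, large_urban_counts):
    """Return (state apportionments, large urbanized area apportionments)."""
    state_pool, urban_pool = 0.40 * total_funds, 0.60 * total_funds

    def share(pool, counts):
        total = sum(counts.values())
        return {name: pool * n / total for name, n in counts.items()}

    return share(state_pool, state_counts), share(urban_pool, large_urban_counts)

# Example with made-up counts.
states, urban_areas = apportion(
    100_000_000,
    state_counts={"State A": 120_000, "State B": 80_000},
    large_urban_counts={"Urbanized Area X": 300_000, "Urbanized Area Y": 100_000})
print(states)       # State A gets 24M of the 40M state pool; State B gets 16M
print(urban_areas)  # Area X gets 45M of the 60M urban pool; Area Y gets 15M
```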
In addition to creating a formula for distributing JARC funds, SAFETEA-LU also extended a JARC requirement related to coordinated planning to additional FTA programs and made a number of other changes to key aspects of the JARC program. In the past, JARC projects were required to be part of a coordinated public transit-human services transportation plan; a similar requirement is included in SAFETEA-LU. However, this requirement will apply in fiscal year 2007 to two other FTA programs that provide funding for transportation-disadvantaged populations. In addition, recipients in states and urbanized areas that select JARC projects must now certify that their selections were based on this plan. Another change resulting from SAFETEA-LU is the ability of a recipient to use up to 10 percent of its JARC allocation for administration, planning, and technical assistance, and the expansion of the definition of eligible activities to include planning as well as capital and operating activities. SAFETEA-LU also removed a restriction on the amount of funding available for reverse commute projects to help individuals in urban areas gain access to suburban employment opportunities. Table 3 compares key JARC provisions under SAFETEA-LU and TEA-21.

Some of these changes address issues that we have raised in past reports on JARC and the coordination of transportation services for transportation-disadvantaged populations. For example, in 2004 we reported that a majority of the JARC grantees we spoke with supported a proposal to use grant funds for administrative, planning, and technical assistance activities, because these activities could increase coordination with potential partners. In 2003, we also reported that some federal and state officials believed that providing financial incentives or mandates for coordination was one way to improve the coordination of transportation services among federal programs. In addition, officials of one metropolitan planning organization that we spoke to about changes to the JARC program also noted that the change to a formula program may better facilitate cooperation between organizations. They explained that the required coordinated plans for JARC projects became irrelevant in the past when JARC funds were congressionally designated.

FTA has been working to develop guidance to help JARC recipients implement changes to the program. In November 2005, FTA published a notice of changes for FTA programs, including JARC. This notice provided information on the JARC program and solicited public comment on aspects of the program such as technical assistance needs and the coordinated planning process. FTA also held five public listening sessions across the country in December 2005 on a number of programs, including JARC, to obtain comments and input on the questions and issues that should be included in future guidance.

In March 2006, drawing on the information received through comments and the listening sessions, FTA released interim JARC guidance for fiscal year 2006 and proposed strategies for fiscal year 2007, and sought comments to assist in the development of program guidance. FTA received more than 200 comments on this notice, and the comments addressed a variety of issues, including the coordinated planning requirement for JARC and other programs and the selection of designated recipients. For example, several private operators of transportation services have requested that FTA include language requiring that private transportation operators be involved in the coordinated planning process. A number of comments have also addressed whether there would be a potential conflict of interest in having a provider of transportation services also serve as the designated recipient that will select JARC projects for funding. FTA officials have indicated that they plan to address many of the issues raised in the comments in draft final guidance for JARC that they plan to release later this summer. FTA plans to solicit comments on the draft final guidance and issue final guidance for JARC later this year. Figure 3 presents a time line for FTA’s implementation of changes to the JARC program.

Through our preliminary work, we have identified two challenges that FTA may encounter as it moves forward in its implementation of changes to JARC. One potential challenge for FTA will be to ensure that it develops JARC guidance in a timely manner so that JARC recipients can implement the program. Officials from one metropolitan planning organization we spoke with about JARC changes noted that the guidance will be important because it will address questions that JARC recipients have raised about the program’s implementation and to which they have received conflicting answers from FTA headquarters and regional staff. A publicly available schedule of FTA deliverables related to SAFETEA-LU’s implementation stated that draft final guidance for JARC was anticipated between May and July 2006. However, FTA officials told us that they now expect to issue the draft final guidance in late July or early August. This change reflects FTA’s extension of the comment period for the March 2006 notice by 1 month to receive additional comments, and the submission of more than 100 comments on or after the last day of the comment period. The additional comments raised a number of issues for FTA to consider, according to FTA officials. While FTA has stated that criteria in the final guidance will not apply retroactively to issued grants so that areas can proceed with JARC projects, FTA officials as well as officials from an association that represents metropolitan planning organizations have told us that some recipients of JARC funds will likely wait for final program guidance before proceeding. In addition, few states and urbanized areas have taken formal steps to apply for fiscal year 2006 funds. As of late May, 5 states had notified FTA of their designated recipient for JARC funding, and 1 of the 152 urbanized areas that receive a JARC apportionment had obligated fiscal year 2006 JARC funds, according to FTA officials.

Another potential challenge for FTA in moving forward will be to determine its plan for overseeing the JARC program. FTA officials have told us that they are still developing this plan, and that at a minimum they expect to use routine grant management tools—such as progress reports and site visits—to oversee JARC recipients.
In its interim guidance, FTA also indicates that it intends to use existing oversight mechanisms from the federal urbanized area and nonurbanized area formula programs, such as triennial reviews and state management reviews. However, FTA officials acknowledge they need to determine how to incorporate JARC grant recipients into these oversight processes.

Our past work suggests that transparency, communication, and accountability issues will be important as FTA moves forward in implementing SAFETEA-LU changes to the New Starts and JARC programs. Like SAFETEA-LU, TEA-21 required GAO to regularly review the New Starts and JARC programs. Since 1998, we have issued numerous reports on these programs, and many of the reports contained recommendations to FTA on ways to improve the implementation of these programs. SAFETEA-LU addressed some of these issues, and FTA has also taken steps to resolve some of them. Nevertheless, given the number of changes that are being made to both programs, continued focus on improving transparency, communication, and accountability will be important.

In our recent reports on the New Starts program, we noted several cases in which FTA could have improved the program’s transparency. Typically, these cases dealt with FTA’s decisions not to seek public input on proposed policy changes before they were implemented. In our 2005 report, we found that FTA had made 16 changes to the New Starts process since fiscal year 2001, but had not published information about the changes in the Federal Register or instituted a rulemaking process for 9 of the changes; moreover, for 6 of the 9 changes, FTA did not provide any avenues for public review and comment. For example, during the fiscal year 2004 cycle, FTA instituted a preference policy in its ratings process favoring current and future projects that do not request more than a 60 percent federal funding share. However, FTA did not amend its regulations to reflect this change in policy or its existing procedures, and the public did not have an opportunity to comment on the impact of the change prior to its adoption.

SAFETEA-LU addressed our past concerns about the transparency of the New Starts program by requiring FTA to publish for notice and comment any proposals that make significant changes to the New Starts program. FTA has already implemented this requirement. For instance, earlier this year, FTA gave the transit community an opportunity to review and comment on proposed procedural changes (i.e., nonregulatory changes) to the New Starts process as well as possible changes FTA was considering for the New Starts program in the future. Although members of the transit community expressed concerns about some of FTA’s proposed changes in their comments, project sponsors and industry representatives repeatedly told us that they appreciated the opportunity to review and comment on the proposals. FTA officials have also stated that they have been pleased with the review and comment process, noting that it helps to ensure that FTA’s guidance is more complete, more responsive to stakeholders’ needs, and more likely to take into account on-the-ground realities.

We have also previously reported shortfalls in FTA’s communication of New Starts program changes to project sponsors that, in several cases, have resulted in implementation problems.
For example, in our 2003 report, we noted that a number of project sponsors were unable to calculate a valid Transportation System User Benefits (TSUB) value, and as a result, their projects received a “not rated” rating for the cost-effectiveness criterion. Project sponsors commented that they would have benefited from additional guidance and technical support on how to generate the required data for the TSUB measure. Similarly, during the fiscal year 2005 evaluation cycle, FTA introduced a requirement for project sponsors to submit a “make the case” document to articulate the benefits of a proposed New Starts project. FTA officials intended to use the document to help interpret data produced by the local travel forecasting models, but FTA did not prepare any written guidance on what information to include or provide report templates. Without such information, project sponsors stated that they did not understand what should be included in the document or how it would be used, and FTA officials later acknowledged that many of the submissions did not meet their expectations.

SAFETEA-LU addressed these communication problems by requiring that FTA routinely publish policy guidance. Specifically, SAFETEA-LU requires that FTA publish policy guidance for comment and response no later than 120 days after the enactment of SAFETEA-LU, each time significant changes are made, and at least every 2 years. FTA responded to this requirement by publishing policy guidance for the New Starts program in January 2006 and soliciting public comments on the proposed changes outlined in the guidance. Furthermore, in its January guidance, FTA included possible long-term changes to the large starts component of the New Starts program that FTA is considering. FTA stated that it hoped to use the policy guidance as a forum for discussing possible changes with the transit community so that FTA could take the community’s comments into account when developing the NPRM for the New Starts program. In addition, FTA held multiple listening sessions across the country, during which officials told project sponsors about proposed changes to the New Starts program and their rationale for implementing these changes. Most of the project sponsors and industry representatives we interviewed told us that they appreciated FTA’s efforts to solicit their feedback and to encourage an open discussion about the proposed changes.

Finally, we have identified steps for increasing the accountability of the New Starts and JARC programs. For example, we previously reported that outcome evaluations of completed transit and highway projects were not usually conducted to determine whether proposed outcomes were achieved. We noted that because outcome evaluations are not usually completed, agencies miss an opportunity to learn from the successes and shortcomings of past projects to better inform future planning and decision making and increase accountability for results. FTA also identified such evaluations as an opportunity to hold agencies accountable for results and identify lessons learned, and therefore, starting in fiscal year 2003, FTA required project sponsors to complete before and after studies for completed New Starts projects.
SAFETEA-LU codified the requirement for before-and-after studies and required that these studies (1) describe and analyze the impacts of the new fixed guideway capital project on transit services and transit ridership, (2) evaluate the consistency of predicted and actual project characteristics and performance, and (3) identify sources of differences between predicted and actual outcomes. In addition, SAFETEA-LU included several provisions, including the following, that emphasize the accuracy and consistency of project cost and ridership estimates in the New Starts process:

SAFETEA-LU requires the Secretary of Transportation to consider the reliability of the forecasting methods used by New Starts project sponsors and their contractors to estimate costs and ridership as part of the New Starts evaluation process.

SAFETEA-LU allows the Secretary of Transportation to provide a higher grant percentage than requested by the project sponsor if the net cost of the project is not more than 10 percent higher than the net cost estimated at the time the project was approved for advancement into preliminary engineering, and the ridership estimated for the project is not less than 90 percent of the ridership estimated at that time. (This two-part eligibility test is illustrated in the sketch following this passage.)

SAFETEA-LU requires the Secretary of Transportation to submit an annual report to congressional committees analyzing the consistency and accuracy of cost and ridership estimates made by contractors to public transportation agencies developing new projects.

Likewise, we have raised issues associated with FTA's measurement of the JARC program's results and made recommendations for improvement. In April 2002, we testified that FTA had not yet completed its evaluation of the JARC program or reported to Congress, as TEA-21 required. We also expressed concerns about FTA's plan to evaluate the program using one performance measure—the number of accessible employment sites—because it would not allow FTA to fully address key aspects of the program or criteria for selecting grantees. We reiterated these concerns in our December 2002 report and recommended that FTA report to Congress on the results of its evaluation of JARC, as required by law, and consider, as part of that evaluation, the effectiveness of the JARC program in meeting both of its goals.

Our most recent review of the JARC program concluded that the data used in FTA's 2003 evaluation of the program lacked the consistent, generalizable, and complete information needed to draw any definitive conclusions about the program as a whole. According to FTA, it has faced obstacles in evaluating the JARC program primarily because grantees have had difficulty collecting and reporting information on their programs. SAFETEA-LU requires the Secretary of Transportation to evaluate the JARC program and submit a report describing the results of this study to Congress by August 2008. Specifically, the Secretary must conduct a study to evaluate the effectiveness of the grant program and the effectiveness of recipients making grants to subrecipients. FTA has already begun to take some steps to meet its evaluation requirements, even prior to issuing its final program guidance. These steps may also address some of the concerns we previously raised about FTA's evaluation of the JARC program.
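To make the statutory incentive thresholds concrete, the following minimal sketch (written in Python) expresses the two conditions as a simple eligibility test. The function name and inputs are illustrative only; they are not part of SAFETEA-LU or of any FTA guidance, which state only the two percentage conditions themselves.

def eligible_for_higher_grant_share(estimated_cost, actual_cost,
                                    estimated_ridership, actual_ridership):
    """Illustrative test of the two SAFETEA-LU incentive thresholds.

    A sponsor may be considered for a higher-than-requested grant
    percentage only if (1) net cost is no more than 10 percent above
    the estimate made when the project entered preliminary engineering
    and (2) ridership is at least 90 percent of that estimate.
    """
    cost_ok = actual_cost <= 1.10 * estimated_cost
    ridership_ok = actual_ridership >= 0.90 * estimated_ridership
    return cost_ok and ridership_ok

# Hypothetical example: a project 8 percent over its cost estimate that
# delivers 95 percent of forecast ridership stays within both thresholds.
print(eligible_for_higher_grant_share(100.0, 108.0, 10_000, 9_500))  # True

Both conditions must hold; a project that overruns costs by more than 10 percent, or falls below 90 percent of forecast ridership, fails the test regardless of how well it performs on the other measure.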
For example, FTA has identified new performance measures and goals, developed a preliminary performance evaluation framework to guide its data collection efforts, and is currently researching options for simplifying its data collection system and reducing the reporting requirements for grantees.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time.

For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Vidhya Ananthakrishnan, Nikki Clowers, John Finedore, Lauren Heft, Daniel Hoy, Jessica Lucas-Judy, Nancy Lueke, and Kimanh Nguyen.

In its January 2006 guidance, the Federal Transit Administration (FTA) identified possible changes to the New Starts program in response to the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). According to FTA, some of SAFETEA-LU's provisions may lead to changes in the program's definition of eligibility, evaluation and rating process, and project development process. The following summarizes the changes FTA has proposed in these three areas, FTA's rationale for each proposed change, and the transit community's response.

Definition of eligibility

Definition of a fixed guideway: FTA asks whether a Bus Rapid Transit project is a "fixed guideway" project and whether it should fund high-occupancy vehicle (HOV) projects to the degree that they provide benefits to public transit riders. FTA's rationale: a fixed guideway has not been specifically defined in the statute. Transit community response: the current definition of fixed guideway works well, so FTA should make no changes; a minimum percentage of the guideway (e.g., 30-75 percent) should be dedicated in order for a project to get funding; and HOV projects should be funded by the Federal Highway Administration.

Project evaluation and ratings process

Evaluation framework: FTA proposes two options for revising the evaluation framework. Option 1 would extend the current framework to include economic development impacts and the reliability of forecasting methods for costs and ridership. Option 2 would be a broader framework that incorporates the new evaluation factors specified by SAFETEA-LU and, according to FTA, organizes the measures to support a more informative, analytical discussion of the project and its merits for New Starts funding. FTA's rationale: the current evaluation framework might be improved upon. Transit community response: both options raise concerns because they continue to define cost-effectiveness only in terms of mobility, and neither gives enough weight to land use and economic development.

Nature of the problem or opportunity evaluation measure: FTA asks whether measures that represent the nature of the problem or the opportunity the proposed projects are designed to address should be included in the evaluation framework, and how FTA should evaluate or rate projects that address significant transportation problems compared with projects that take advantage of opportunities to improve service. FTA's rationale: New Starts projects are intended to solve specific transportation problems, take advantage of opportunities to improve transportation services, or support economic development. Transit community response: funding should be available for projects seeking to shape economic development or to provide a solution to mobility problems.
Economic development impacts measure: FTA identifies two options for characterizing economic development benefits: (1) regional economic benefits and (2) station area development impacts. FTA sought comment on whether there was a preference for either option, as well as on how to evaluate economic development and land use as distinct and separate measures. FTA's rationale: SAFETEA-LU identified economic impacts as a new evaluation criterion. Transit community response: station area development impacts would better isolate the effect of the transit project, since there are too many other variables associated with regional economic benefits; FTA should use both regional and station area economic benefits; land use and economic development should be separate measures and carry as much weight as cost-effectiveness; and differentiating between land use and economic development is difficult.

Mobility benefits measure: FTA proposes to measure mobility by using a combination of user benefits per passenger mile and project ridership. FTA also asked whether other measures of mobility benefits could be used. FTA's rationale: the measure of mobility benefits ought to capture as many benefits as possible. Transit community response: FTA should continue to work toward capturing transportation benefits to highway users in a project corridor, and FTA should analyze the impact of non-home-based trips, trips generated by special events, and automobile trips not taken because of enhanced pedestrian activity established in a project corridor.

Mobility for transit dependents measure: FTA proposes to measure mobility for transit dependents by the share of user benefits accruing to passengers in the lowest income stratum compared with the regional share of the lowest income stratum. FTA asked whether this proposed measure would cause any implementation difficulties and whether there were other measures FTA should consider. FTA's rationale: because low-income populations and households without access to automobiles depend critically on the public transportation system for basic mobility and access to jobs, health care, and other critical services, projects that improve transit services for these populations have special merit. Transit community response: an implementation difficulty would be the inconsistencies in regional travel demand models—that is, some models are based on income, others on automobile ownership, and some on both; in addition, FTA's previous measure—the percentage of low-income households in the project corridor—is somewhat imprecise.

Environmental benefits measure: FTA proposes to continue using the same environmental benefits measure, which uses the projected change in regional vehicle miles traveled to estimate the change in various harmful types of vehicle emissions and energy consumption. FTA's rationale: SAFETEA-LU maintained environmental benefits as an evaluation criterion. Transit community response: FTA should retain its current measure of environmental benefits.

Operating efficiency measure: FTA proposes removing this measure as a separate evaluation criterion, relying instead on an evaluation of cost-effectiveness to address the statutory criterion. FTA's rationale: the impact of the project on operating and maintenance costs is captured in the calculation of cost-effectiveness, and the current measure—projected systemwide change in operating cost per passenger mile—does not distinguish among proposed projects. Transit community response: FTA should use the cost-effectiveness evaluation measure to address the operating efficiency criterion.
Cost-effectiveness measure: FTA proposes to broaden the current cost-effectiveness measure to include nontransportation benefits, such as economic development benefits, land use impacts, and mobility benefits to transit dependents. FTA also suggests using two cost-effectiveness measures—one for the forecast year, as is done today, and a second calculated for the year the project opens. FTA's rationale: the current measure of cost-effectiveness does not capture nontransportation benefits. Transit community response: broadening the measure would increase the time and cost of project development, and FTA should use the consumer price index, not the gross domestic product index, to adjust the dollar value of the cost-effectiveness threshold.

Financial capabilities measure: FTA proposes changing the way the financial rating factors related to uncertainty are incorporated into the evaluation process. Specifically, FTA suggests using the project sponsor's ability to absorb funding shortfalls and cost overruns as an explicit measure of financial risk. FTA's rationale: SAFETEA-LU identifies the following factors that FTA must use in evaluating financial capability: (1) the reliability of forecasting methods for costs and ridership, (2) existing grant commitments, (3) the degree to which funding sources are dedicated, (4) debt obligations of the project sponsor, and (5) the non-New Starts funding share. Transit community response: it is unclear from the guidance who is responsible for assessing the reliability of financial forecasts, and the emphasis placed on the reliability of the financial forecast should correlate to the stage of project development.

Reliability of forecasts measures: FTA proposes to assess the risk and uncertainty inherent in project evaluation. Specifically, FTA plans to evaluate the uncertainty associated with the nature and severity of the problem, as well as with individual measures of project merit and cost-effectiveness. FTA's rationale: SAFETEA-LU requires that the reliability of the forecasting methods used to estimate costs be considered in the evaluation of New Starts projects. Transit community response: the proposal is confusing; experience with such assessments suggests that the proposal would require substantial effort with little reduction in uncertainty; and FTA should place significant weight on the project sponsor's ability to enhance the reliability of forecasts through proven quality control methods.

Development of project ratings: Currently, FTA develops separate ratings for project justification and local financial commitment and then derives an overall project rating from these component ratings using decision rules. FTA proposes to use a similar process for rating projects. However, FTA states that the reliability of forecasts needs to be incorporated into the ratings process and suggests different options for accomplishing this, such as using probability weightings or using uncertainty indicators to decide the outcome for ratings at the margins. FTA also seeks input about the weights that should be assigned to each measure. FTA's rationale: SAFETEA-LU requires that the reliability of the forecasting methods used to estimate costs be considered in the evaluation of New Starts projects. Transit community response: economic development and land use should receive the same weight as cost-effectiveness.

Local endorsement of the financial plan: FTA proposes to require that project sponsors specify all proposed sources of funding in the financial plan and that the sponsoring agency provide a letter endorsing the proposed financial strategies and amounts of planned funding by those agencies identified as funding sources.
FTA's rationale: SAFETEA-LU requires that FTA ensure that proposed New Starts projects are supported by an acceptable degree of local financial commitment and resources, and FTA has experienced situations in which a project's financial plans state that local agencies will provide funding but those local agencies do not in fact support the project plan. Transit community response: securing an endorsement will be overly burdensome and will delay project development; an endorsement could help ensure that sponsors receive financial commitments; and it is hard to fully secure funding commitments during preliminary engineering and final design.

Project development process

Approval of the baseline alternative: FTA proposes to maintain the current approval process and definition of the baseline alternative. However, FTA asks whether the baseline can be more clearly defined and whether there is a way to report on the benefits of the project, including the benefits attributable to the difference between the no-build and the baseline alternatives. FTA's rationale: there has been significant confusion over the definition of the baseline alternative. Transit community response: more clarity is needed on how FTA defines the baseline alternative, and the definition of the baseline alternative should not be driven by FTA.

On-board transit survey: FTA is considering requiring that a recent survey of transit riders be used to inform the technical work completed during alternatives analysis. FTA suggests that "recent" could be defined as within the 5 years preceding a request to enter preliminary engineering. FTA's rationale: data on current ridership patterns are essential to the development of reliable forecasts. Transit community response: surveys are expensive and may be unnecessary in some areas, and FTA should consider other means of collecting data on ridership, such as electronic fare collection data and small sample surveys.

Preliminary engineering purpose and exit criteria: FTA is considering defining the preliminary engineering phase as the process of finalizing the project's scope, cost, and financial plan such that (1) all environmental impacts are identified and adequate provisions are made for their mitigation in accordance with the National Environmental Policy Act (NEPA), (2) all major or critical project elements are designed to the level that no significant unknown impacts relative to their costs will result, and (3) all cost estimating is complete to a level of confidence necessary for the sponsor to implement the financing strategy. FTA's rationale: since the completion of preliminary engineering represents the completion of nearly all the steps needed to make a final decision on the actual implementation of the proposed project, the information for making that final decision must be reliable. Transit community response: a clearer definition of the preliminary engineering phase is needed to help project sponsors target resources, and design costs will be frontloaded, thereby increasing the costs of preliminary engineering.

Project reaffirmation by the metropolitan planning organization (MPO): FTA is considering requiring that the sponsoring agencies reaffirm their adoption of the project, in its final configuration and costs, into the MPO's long-range transportation plan as part of the application to advance the project to final design. FTA's rationale: before a project is approved for advancement into preliminary engineering, the project must be adopted by the MPO into its long-range transportation plan; however, a project's scope and costs may change during the preliminary engineering phase, so this requirement would ensure that a revised project still conforms to the MPO's transportation plans and financial investment strategies. Transit community response: the requirement creates another step that will increase the time and cost of project development.
It also duplicates sponsors' ongoing work with the MPO and provides no added certainty, will likely have limited impact on local financial endorsement, and is inconsistent with Federal Highway Administration regulations.

New Starts funding share incentives: FTA asks how it should implement the provision in SAFETEA-LU that would give FTA discretion to provide a higher percentage of New Starts funding than that requested by the project sponsor as an incentive to produce reliable ridership and cost estimates. FTA's rationale: SAFETEA-LU allows the Secretary to provide a higher grant percentage than requested by the project sponsor if (1) the net cost of the project is not more than 10 percent higher than the net cost estimated at the time the project was approved for advancement into preliminary engineering and (2) the ridership estimated for the project is not less than 90 percent of the ridership estimated at that time. Transit community response: incentive money should be invested back into the New Starts program, and the incentive should focus on the project's outcomes, such as project impacts.

Related GAO Products

Mass Transit: Opportunities Exist to Improve the Communication and Transparency of Changes Made to the New Starts Program. GAO-05-674. Washington, D.C.: June 28, 2005.

Mass Transit: FTA Needs to Better Define and Assess Impact of Certain Policies on New Starts Program. GAO-04-748. Washington, D.C.: June 25, 2004.

Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003.

Mass Transit: Status of New Starts Program and Potential for Bus Rapid Transit Projects. GAO-02-840T. Washington, D.C.: June 20, 2002.

Mass Transit: FTA's New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002.

Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001.

Mass Transit: Implementation of FTA's New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000.

Mass Transit: Status of New Starts Transit Projects With Full Funding Grant Agreements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.

Mass Transit: FTA's Progress in Developing and Implementing a New Starts Evaluation Process. GAO/RCED-99-113. Washington, D.C.: April 26, 1999.

Job Access and Reverse Commute: Program Status and Potential Effects of Proposed Legislative Changes. GAO-04-934R. Washington, D.C.: August 20, 2004.

Welfare Reform: Job Access Program Improves Local Service Coordination, but Evaluation Should Be Completed. GAO-03-204. Washington, D.C.: December 6, 2002.

Welfare Reform: DOT Has Made Progress in Implementing the Job Access Program but Has Not Evaluated Impact. GAO-02-640T. Washington, D.C.: April 17, 2002.

Welfare Reform: Competitive Grant Selection Requirement for DOT's Job Access Program Was Not Followed. GAO-02-213. Washington, D.C.: December 7, 2001.

Welfare Reform: GAO's Recent and Ongoing Work on DOT's Access to Jobs Program. GAO-01-996R. Washington, D.C.: August 17, 2001.

Welfare Reform: DOT Is Making Progress in Implementing the Job Access Program. GAO-01-133. Washington, D.C.: December 4, 2000.

Welfare Reform: Implementing DOT's Access to Jobs Program in Its First Year. GAO/RCED-00-14. Washington, D.C.: November 26, 1999.

Welfare Reform: Implementing DOT's Access to Jobs Program. GAO/RCED-99-36. Washington, D.C.: December 8, 1998.
Welfare Reform: Transportation’s Role in Moving from Welfare to Work. GAO/RCED-98-161. Washington, D.C.: May 29, 1998. Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005. Transportation Disadvantaged Populations: Some Coordination Efforts Among Programs Providing Transportation Services, but Obstacles Persist. GAO-03-697. Washington, D.C.: June 30, 2003. Transit Labor Arrangements: Most Transit Agencies Report Impacts Are Minimal. GAO-02-78. Washington, D.C.: November 19, 2001. Mass Transit: Many Management Successes at WMATA, but Capital Planning Could Be Enhanced. GAO-01-744. Washington, D.C.: July 3, 2001. Transit Grants: Need for Improved Predictability, Data, and Monitoring in Application Processing. GAO/RCED-00-260. Washington, D.C.: August 30, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized a significant level of investment, over $52 billion, for federal transit programs. SAFETEA-LU also added new transit programs and made changes to existing programs, including the New Starts and Job Access and Reverse Commute (JARC) programs. The New Starts program is a discretionary grant program for public transportation capital projects. The JARC program is intended to improve the mobility of low-income individuals seeking work. SAFETEA-LU authorized $8.6 billion for these two programs. The Federal Transit Administration (FTA) manages both of these programs. This testimony discusses GAO's preliminary findings on the (1) changes SAFETEA-LU made to the New Starts program, (2) changes SAFETEA-LU made to the JARC program, and (3) issues that may be important as FTA moves forward with implementing the act. To address these objectives, GAO interviewed FTA officials, sponsors of New Starts projects, and representatives from industry associations and reviewed FTA's guidance on the New Starts and JARC programs and federal statutes, among other things.

The changes SAFETEA-LU made to the New Starts program range from establishing the Small Starts program to introducing new evaluation criteria. FTA has taken some initial steps in implementing SAFETEA-LU changes, including issuing an Advanced Notice of Proposed Rule Making (ANPRM) for the Small Starts program and guidance for the New Starts program in January 2006. The Small Starts program is intended to offer small projects an expedited and streamlined application and review process; however, the transit community has questioned whether the Small Starts program, as outlined in the ANPRM, would provide such a process. FTA's guidance for the New Starts program identified and sought public input on possible changes to the program that would affect traditional New Starts projects, or large starts, such as revising the evaluation process to incorporate the new criteria identified by SAFETEA-LU.

SAFETEA-LU also made a number of changes to the JARC program. One key change was to convert JARC from a discretionary to a formula-based program, which provides funds to states and large urbanized areas for JARC projects. Other SAFETEA-LU changes include allowing JARC recipients to use a portion of funds for planning activities and removing a limit on the amount of funds available for reverse commute projects. To implement these changes, FTA solicited comments and input through public listening sessions and program notices. FTA has released interim guidance for fiscal year 2006, is currently developing draft final guidance for the JARC program, and plans to issue final guidance later this year.

GAO's past work suggests that transparency, communication, and accountability issues will be important as FTA moves forward in implementing SAFETEA-LU changes to the New Starts and JARC programs. Since 1998, GAO has issued numerous reports on these programs, and many of the reports contained recommendations to FTA on ways to improve the implementation of these programs. For example, GAO has reported that FTA could increase the transparency of the New Starts program by obtaining public input on proposed policy changes before they are implemented. SAFETEA-LU addressed some of these issues, and FTA has also taken steps to resolve some of them.
For example, SAFETEA-LU requires FTA to publish for notice and comment any proposals that make significant changes to the New Starts program. Nevertheless, given the number of changes that are being made to both programs, continued focus on efforts to improve transparency, communication, and accountability will be important. FTA officials provided technical comments on a draft of this testimony, which were incorporated where appropriate.
Most Americans rely on employer-sponsored health plans or government programs like Medicaid and Medicare to help them select and finance their family's health insurance coverage. But about 10.5 million Americans rely exclusively on their own resources to select and pay for their family's coverage. These participants in the market for individual health insurance must make important decisions affecting their family's health and welfare without the same supports provided to the majority of Americans who obtain their health coverage through employer-sponsored or government plans.

Most participants in the individual market do not currently have access to an employer-sponsored plan or a government insurance program. Those under 65 who may participate in the individual market include self-employed people; people whose employers do not choose to offer health insurance coverage to workers and their families; part-time, temporary, or contract workers who are not eligible for health insurance coverage through their employers; early retirees without employer-sponsored coverage who are not yet eligible for Medicare; people not in the labor force, including people with disabilities, who are not eligible for Medicare or Medicaid coverage; college students who are no longer eligible for coverage under their parents' health plans; unemployed people who are not eligible for Medicaid; people between jobs who have exhausted or are ineligible for continuation of their employer-sponsored coverage; and children, spouses, and other dependents ineligible for coverage or too costly to cover under an employer-sponsored plan. Some individuals falling into these categories can rely on spouses or other family members to include them under the family coverage options of their employer-sponsored plans. Many others, however, do not have this alternative.

The individual market often provides a short-term source of health insurance coverage for people during transition points in their lives. Many people initially confront the individual market while they are in college or at an entry-level job and discover that they are no longer eligible for coverage under their parents' employer-sponsored health insurance plan. They may have the option to obtain individual coverage through plans marketed through their schools or training programs, or they may obtain policies through insurance plans or health maintenance organizations (HMO) that operate in their home or school communities. Transitional employment in part-time or temporary jobs and periods of unemployment between jobs are other cases in which the individual market is used. In many entry-level jobs, employers do not provide health insurance, requiring those who wish to obtain coverage to access the individual market. For some, lower paying entry-level jobs become their permanent source of employment, transforming the individual market into their permanent source of coverage.

For self-employed people, the individual market is often the only viable source of coverage throughout their careers. For example, family farmers and those in other professions in which self-employment is common often rely on the individual market as a long-term source of health insurance coverage. Early retirees may rely on the individual market for transitional coverage until they are eligible for Medicare. Of course, many early retirees benefit from continuation of coverage under their former employers' plans.
A growing number of employers, however, have increased retirees' contributions toward premiums, increased their deductibles and copayments, or, in some cases, entirely phased out their financial support for health benefit plans for current and future retirees. Indeed, a recent study by the Employee Benefit Research Institute suggests that the availability of a retiree health benefit may become an increasingly important factor in an employee's decision to retire early.

For the typical person with employer-sponsored coverage, health insurance premium payments are shared by the employer and the worker. The typical employer pays about 80 percent of premiums (70 percent for family coverage). Participants in the individual market must pay their entire premiums out of pocket. Thus, an individual's ability to pay for coverage largely determines which type of insurance product is purchased or whether the individual can purchase coverage at all.

Those in employer-sponsored plans also benefit from the tax treatment of these plans. While health benefits are generally not considered income to the employee, employers may deduct the expense of providing such benefits to their workers. Employers, who often pay 70 to 80 percent of the cost of their employees' health plans, typically may deduct all of that contribution. In contrast, participants in the individual market generally cannot. Self-employed individuals may deduct a percentage of their expenses, ranging from 40 percent in 1997 to 80 percent in 2006 and thereafter. (These premium and tax differences are illustrated in the sketch at the end of this section.)

Employers and benefit managers often provide participants in employer-sponsored plans with help in identification, selection, assessment, and enrollment in plans as well as with the negotiation of benefits and premiums. In contrast, individual market participants must access the market on their own. To help guide them through the broad range of insurance offerings available to eligible individuals in most states, individuals often enlist the assistance of professional insurance agents and brokers. In some states, Blue Cross and Blue Shield plans and other carriers serve as direct writers of insurance. Other individuals turn to organizations such as trade associations, professional associations, or farm cooperatives as access points to the health insurance market.

In most states, a wide variety of carriers operates in the individual market, offering a broad range of products. Indeed, most healthy individuals have a broader choice of offerings than those in employer-sponsored plans. But all consumers may not be fully aware of their choices or of the avenues to access the market. To many consumers, insurance terms and options are easily misunderstood. In response, some states have issued consumer guides to help consumers better understand the market.

While most individuals have a broad range of individual insurance options available, a significant minority have few if any affordable options. An individual's health status can lead to sharply higher premiums or result in outright rejection under many plans. Medical underwriting—through which preexisting health conditions or an individual's health status may result in denial of coverage, permanent exclusion from coverage of a preexisting condition, or higher premiums—is still fairly common in the individual markets of many states. Several states have attempted to deal with the effects of medical underwriting by creating special insurance pools for high-risk individuals or through state individual market reforms (see ch. 5).
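To illustrate the premium-sharing and tax-treatment differences described above, the following minimal sketch (in Python) compares annual out-of-pocket premium burdens. The 80 percent employer share and the 40-to-80 percent self-employed deduction phase-in come from this report; the premium amount and marginal tax rate are hypothetical inputs chosen only for illustration.

def employee_premium_cost(total_premium, employer_share=0.80):
    """Out-of-pocket premium for a worker in a typical employer plan.

    The report notes the typical employer pays about 80 percent of the
    premium for single coverage; the worker pays the remainder.
    """
    return total_premium * (1 - employer_share)

def self_employed_premium_cost(total_premium, deduction_share, marginal_rate):
    """After-tax premium for a self-employed individual purchaser.

    Self-employed people may deduct a share of premiums (40 percent in
    1997, phasing up to 80 percent by 2006); the deduction's value
    depends on the filer's marginal tax rate, a hypothetical input here.
    """
    tax_savings = total_premium * deduction_share * marginal_rate
    return total_premium - tax_savings

annual_premium = 2_400  # hypothetical single-coverage premium
print(employee_premium_cost(annual_premium))                   # 480.0
print(self_employed_premium_cost(annual_premium, 0.40, 0.28))  # 2131.2

Even with the deduction fully phased in at 80 percent, a self-employed purchaser in this hypothetical example would bear several times the premium burden of a comparably insured employee, consistent with the affordability concerns discussed throughout this chapter.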
At the federal level, the Health Insurance Portability and Accountability Act of 1996, recently passed by the Congress, may reduce the potential effects of medical underwriting and preexisting condition exclusions for those making the transition from an employer-sponsored plan to the individual market. States have been cautious or reluctant to extend many of the protections incorporated into their small business reforms to the individual health insurance market. Extension of insurance portability to the individual market was one of the most controversial issues debated in recently passed insurance reforms at both the state and federal levels. In large measure, the continuing debate reflects the paucity of reliable information on the individual health insurance market.

The interaction between the goals of improved access and affordability of insurance takes on a magnified importance in the individual market. On the one hand, the individual market serves a significant share of older people who are not yet eligible for Medicare and individuals with poor or declining health who are most concerned about access to health insurance without medical preconditions. On the other hand, the individual market is also an important source of coverage for a significant number of younger and often healthier individuals just entering the labor force or in lower wage jobs that often do not provide employer-sponsored coverage. For most of them, premium costs are an important barrier to health insurance coverage. Yet some initiatives that improve access for the older and sicker group might result in higher premiums for the younger and healthier group, thus potentially pricing them out of the market. The interaction between expanding access and improving affordability varies among states and depends largely on the structure and relative size of the insurance market, characteristics of its participants, and its regulatory structure. Numerous states and the federal government have already introduced incremental reforms in the individual health insurance market, but many legislators and other observers believe that further adjustments may be needed.

The Chairman of the Senate Committee on Labor and Human Resources asked us to report on the size of the individual health insurance market, recent trends, and the demographic characteristics of its participants; the market structure, including how individuals access the market, the prices and other characteristics of health plans offered, and the number of individual carriers offering plans; and the insurance reforms and other measures states have taken to increase individuals' access to health insurance.

Our review included both national and state-specific data. Our estimates of the size and demographic characteristics of individual market enrollees were based on nationally projectable data sets, as were data concerning individual market insurance reforms, high-risk pools, and insurers of last resort. Because other aspects of individual insurance markets can vary significantly among states, we relied on case studies of the individual insurance markets in seven states. Although findings from these states cannot be projected to the nation at large, we believe they are reasonably representative of the range of individual insurance market dynamics across the country.
Our confidence is based on the criteria we used to select the seven states as well as our contact with representatives of large, national insurance carriers, trade groups, and regulatory bodies (discussed further under methodology). Finally, our report focused on comprehensive major medical expense and HMO plans. Therefore, references to individual market products do not include more limited benefit products unless specifically noted.

To determine the size and demographic characteristics of individual insurance market participants nationwide, we analyzed data from the Bureau of the Census' March 1995 Current Population Survey (CPS), a national random survey of about 57,000 households. We also analyzed the 1993 National Health Interview Survey (NHIS) conducted by the Bureau of the Census for the National Center for Health Statistics. The findings of these two surveys were generally similar. Unless otherwise noted, we report CPS findings because the results were available for a more recent year, the number of individuals surveyed was greater, and state-level data were available. Appendix I contains more details on the methodology we used in our analyses.

To understand the structure and dynamics of the individual insurance market, we visited seven states—Arizona, Colorado, Illinois, New Jersey, New York, North Dakota, and Vermont. We selected these states judgmentally on the basis of variations in their populations, urban/rural compositions, and the extent of individual insurance market reforms implemented. In each state, we interviewed and obtained data from representatives of the state insurance department and at least one of the largest individual market carriers. From insurance department representatives, we obtained information concerning the regulation and, where applicable, reform of the individual insurance market and the number and market share of individual market carriers in the state. From carriers, we obtained information concerning products offered, including their benefit structure, cost-sharing alternatives, eligibility, and prices. In some states, we also interviewed health department officials, insurance agents, and representatives of insurance industry trade associations, consumer groups, and insurance purchasing cooperatives.

To supplement state-specific data, we interviewed representatives or obtained information from national insurance carriers and trade and industry groups, including the American Academy of Actuaries, American Chambers Life Insurance Company, the Blue Cross and Blue Shield Association, the Health Insurance Association of America, Mutual of Omaha Companies, Time Insurance Company, and Wellpoint Health Networks, Inc. We also reviewed published literature on the individual insurance market.

To identify states that passed, from 1990 through 1995, individual insurance reforms or, as of year-end 1995, other measures designed to expand access to coverage in the individual market, we obtained summaries compiled by various industry and trade groups, including the Blue Cross and Blue Shield Association and the Health Insurance Association of America. We then obtained and reviewed each state's individual insurance reform legislation and, when necessary, supplemented this review with telephone interviews of state officials to clarify certain provisions.

Our work was performed between February and September 1996 in accordance with generally accepted government auditing standards.
Although most Americans obtain their health insurance through employment-based health plans, individual insurance provides coverage for many Americans who may not have access to employment-based coverage. We estimate that about 10.5 million Americans under 65 had individual insurance as their only source of health coverage during 1994, with another 8.6 million having individual insurance as well as some other type of health insurance. While those with individual insurance as their only coverage represent a relatively small share of the nonelderly population—4.5 percent in 1994—individual insurance is a more prominent source of health coverage in the Plains and Mountain states and among self-employed people, agricultural workers, and early retirees.

On the basis of our analysis of the March 1995 CPS, we estimate that about 10.5 million Americans under 65 years of age (4.5 percent of the nonelderly population) received health coverage through individual health insurance as their only source of health coverage during 1994. That is, the health plan was purchased directly by an individual, not through a current or past employer or union. An additional 8.6 million Americans (3.7 percent) had individual health insurance in addition to employment-based coverage, Medicare, Medicaid, or coverage through the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) at some time in 1994.

Many people purchase individual health insurance for only a short period, such as when they are between jobs and without group insurance coverage. For example, a representative of one carrier told us that 30 percent of enrollees maintain individual insurance for less than 1 year. Thus, the 8.6 million people who had individual insurance coverage and another type of health insurance during 1994 could either have (1) had individual health insurance for part of 1994 and another type of health insurance for the remainder of the year or (2) had both individual health insurance and another type of coverage—employment-based or government-sponsored—at the same time for part or all of the year. In the latter case, it is possible that the other type of health insurance would have been the primary source of health coverage with the individual insurance being a supplemental policy. It is not possible, however, to identify how many people would be in either of these groups. For this reason, we focused our analysis on the 10.5 million nonelderly Americans who had private individual insurance as their only source of health coverage at any time in 1994.

While 4.5 percent of the U.S. nonelderly population had individual health insurance as their only source of health coverage in 1994, the importance of the individual insurance market varied considerably among states. (See fig. 2.1.) In some Mountain and Plains states, individual insurance is relied on much more as a source of coverage. For example, we estimate that about one of every seven people under 65 in North Dakota has individual health insurance as his or her only source of health coverage. North Dakota is the only state where our estimates of the number of participants in the individual health insurance market exceed the estimated uninsured population. Iowa, Montana, Nebraska, and South Dakota also have estimated participation rates in the individual insurance market that are at least twice the national rate. Appendix II presents rates of individual health insurance enrollment by state.
Overall, individual insurance enrollment tends to be slightly lower in metropolitan areas than in nonmetropolitan areas. (See table 2.1.) In particular, individual health insurance is common among people living on farms. Nearly 30 percent of people indicating that their residence was a farm had individual health insurance in 1993, according to our analysis of the National Health Interview Survey. The pattern of higher enrollment in rural areas is not uniform throughout the country. The Southern region, for instance, has a relatively large nonurban population, but the proportions of the populations that had individual health insurance were lower than the national average in 12 of its 17 states. Florida is an exception; the large number of retirees under age 65 there may help explain the fact that a relatively large proportion of Florida's nonelderly population has individual insurance (6.4 percent). In Hawaii, the only state with mandated employer-sponsored health insurance, only 1.8 percent of the nonelderly population had individual health insurance as the sole source of coverage in 1994. In several other states—Alaska, Arizona, Delaware, Kentucky, Nevada, New Mexico, Virginia, West Virginia, and Wisconsin—less than 3 percent of the population relied on individual insurance as the only source of coverage.

The individual insurance market is an important source of health coverage for people in their fifties and early sixties, particularly early retirees and people who have been widowed. The relative importance of the individual insurance market to people of different ages is illustrated in table 2.2. Those in the 60 to 64 age group are more than two-and-a-half times as likely to be covered by individual insurance than those in their twenties (9.6 percent versus 3.4 percent). The median age of people with individual insurance is 35, compared with 32 for people with employment-based coverage and 28 for uninsured people. The individual insurance market is becoming increasingly important for early retirees because fewer employers are providing health coverage for them. In 1994, nearly 10 percent of retirees aged 64 or younger had individual health insurance as the sole source of health coverage. A disproportionate share of people who had been widowed (9.2 percent) also had individual insurance as the only source of health coverage.

The likelihood of having individual health insurance also varies widely by race and ethnicity. Whites are more than twice as likely to have individual health insurance as are blacks or Hispanics. Blacks and Hispanics are also less likely to have employment-based coverage and are more likely to be uninsured. (See table 2.3.) The higher median income of whites makes the potentially high cost of individual health insurance more affordable for this group.

The individual market is not a viable option for many of the nation's low-income families. As shown in table 2.4, those with income below the federal poverty level are much more likely to be uninsured and slightly less likely to purchase individual insurance. For this group, the cost is an important deterrent to purchasing health insurance. Moreover, Medicaid and other government programs are potential alternatives for these lowest income households. Above the poverty level, the individual market becomes a more important health insurance alternative. Participation in the individual insurance market exceeds the national average for families with incomes between about $10,000 and $40,000. (See fig. 2.2.)
Participation dips below the national average as income rises above about $40,000, perhaps reflecting greater availability of employment-based insurance. For those with incomes above about $90,000, participation is again at or above the national average. Overall, people with individual health insurance have a lower median family income ($34,422) than people with employment-based coverage ($48,015) but higher than people who are uninsured ($20,014).

About three-quarters of those aged 18 to 64 with individual health insurance are employed, and some parts of the labor force depend more extensively on the individual insurance alternative. For example, self-employed and contingent workers, including part-time and temporary employees, are more likely to have individual health insurance. (See table 2.5.) These groups are often ineligible for employer-sponsored health plans. Furthermore, as shown in figure 2.3, individual insurance is more prevalent the smaller the employee's firm is. Employees in smaller firms are also less likely to have employment-based coverage.

The inverse relationship between individual and employment-based coverage is particularly evident for selected industries. (See table 2.6.) In particular, farm workers (17 percent), personal services workers (8 percent), and construction workers (7 percent) are more likely to have individual insurance than the national average and are less likely to have employment-based coverage. Among people employed in industries in which large firms predominate, including manufacturing, government, and transportation, individual insurance is not very common. Agricultural, personal services, and construction industries tend to be dominated by smaller firms, and individual insurance plays a more important role in these workers' health coverage. Self-employment is also particularly common among agricultural workers, contributing to the high share of these workers who have individual health insurance.

Most participants in the individual market (75 percent) rated their health condition as excellent or very good. Only about 6 percent rated their health as fair or poor. This pattern is nearly identical to the self-reported health status of those with employment-based health coverage. Individuals who report poor health status are disproportionately enrolled in government-funded health insurance programs or are uninsured. While 5.1 percent of those who assess their health as excellent have individual insurance coverage, only 2.5 percent of those who believe they are in poor health have individual health insurance. (See table 2.7.) Reflecting the pattern for people reporting poor health, individuals who are unable to work because of disabilities are less likely to be covered only by individual insurance. This low rate reflects this group's greater reliance on government-sponsored health insurance programs and may reflect their higher cost for private coverage and more tenuous attachment to the labor force. Medical underwriting and preexisting condition limitations are also more common with individual insurance policies, making them unappealing for those with disabilities.

Fundamental structural differences exist between the individual health insurance market and the employer-sponsored group insurance market. These differences can have significant implications for consumers. Individuals without employer-sponsored coverage usually access the health insurance market on their own and face a variety of ways of doing so.
Individuals must choose from among a multitude of complex products that are often difficult to compare. Once a product is chosen, individuals must select from a wide range of cost-sharing arrangements and pay the full price of coverage. In contrast, employees eligible for group health coverage do not face the task of accessing the insurance market—this is done for them by the employer. And because employers typically offer only one or a few health plans, the task of identifying and comparing products is greatly simplified or eliminated. Finally, the burden of selecting cost-sharing options and paying for the products is significantly eased by employer contributions and payroll deductions.

One common approach consumers take is to purchase insurance through an agent. Agents may sell products from only one insurance carrier or offer products from several competing carriers and assist consumers in identifying the product that best meets their needs. Agents may also assist consumers in the application process.

Consumers may also purchase insurance by contacting carriers directly. In many states, dominant carriers have high name recognition and may focus marketing activities directly on individual consumers. Representatives from several Blue Cross and Blue Shield plans and large HMOs we visited, such as Kaiser and FHP, told us they regularly use television, radio, or print advertising to target individual consumers. Consequently, most of the individual market business for these carriers is generated through direct contact with applicants. Indemnity carriers, like Mutual of Omaha and Time Insurance, rely on agents to generate most of their individual market business.

Another important access route for individual consumers is through a business or social organization. Organizations such as chambers of commerce, trade associations, unions, alumni associations, and religious organizations may offer insurance coverage to their members. Through the pooled purchasing power of many individuals or small employers, associations can negotiate with carriers for competitively priced products that they then offer their members. For example, a small-employer health care purchasing group in Arizona offers its products to the self-employed. Through this program, self-employed people have access to coverage on a guaranteed-issue, community-rated basis with premium adjustments permitted only for age and geography. Other arrangements make use of individuals' common affiliation to increase access to health insurance. For example, Blue Cross and Blue Shield of North Dakota has made arrangements with essentially all the banks in the state to allow depositors to obtain coverage by having their premiums deducted directly from their bank accounts. In operation since the 1960s, this bank depositors plan covers about 76 percent of the carrier's individual enrollees in the state.

Individuals leaving most employer-sponsored group plans have access to two different types of coverage. First, federal law requires carriers to offer individuals leaving group coverage the option of continuing to purchase that coverage at no more than 102 percent of the total policy cost for up to 18 months. Required by the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA), this provision applies to employer groups of 20 or more. Some state laws extend similar requirements to groups of fewer than 20.
Second, several states require carriers to offer individuals leaving a group plan a comparable product on a guaranteed-issue basis, known as conversion coverage. Conversion coverage tends to be very expensive, however. Because those who elect to purchase conversion coverage tend to be in poorer health than those who do not (a situation known as adverse selection), the premium prices are generally higher than for comparable individual market products. Finally, those determined by carriers to be uninsurable in the insurance market may be able to purchase coverage through a state high-risk program. Many states offer high-risk programs that provide subsidized coverage to uninsurable individuals at rates generally about 50 percent higher than what a healthy individual would pay in the private market. These programs cover a very small percentage of the insured population and are sometimes limited by the availability of public funding.

Purchasing insurance through the individual market can be a complex process for even the most informed consumer. In addition to the multiple ways consumers can access the market, consumers are confronted with products offered by dozens and sometimes a hundred or more different carriers. Once a carrier and product are chosen, consumers must then select among a wide range of deductibles and other cost-sharing options.

In each of the seven states we visited, individuals could choose among products offered by multiple carriers. Consumers could choose from plans offered by no fewer than 7 to over 100 carriers, depending on the state. Generally, HMO coverage was available in addition to traditional fee-for-service indemnity plans or preferred provider arrangements. Table 3.1 shows estimates of the number of carriers in each state's individual market. Unless otherwise noted, carrier estimates include only carriers that offer comprehensive coverage. While some states have fewer carriers than others, it is important to note that fewer carriers do not necessarily equate to fewer choices for consumers. For example, although 145 carriers in Illinois may offer individual products, these products are not available to all consumers in the state because of medical underwriting. In addition, some of these carriers may not actively market their products or may sell only limited benefit products. In contrast, New Jersey has 26 carriers offering one or more comprehensive products to which every individual market consumer in the state has guaranteed access.

The mix of carriers participating in the individual market also differs from that of group insurance markets with respect to the role of Blue Cross and Blue Shield (Blues) plans, the extent of HMO penetration, and the size of carriers. Blues plans continue to be relatively important in the individual markets of many states. In six of the seven states we visited, the Blues were the largest single carrier in the individual market. In North Dakota and Vermont, the Blues had a 76 and 58 percent share, respectively, of the market for comprehensive individual market products. Nationally, about a quarter to a third of all individual enrollees obtained their coverage from a Blues plan in 1993.

The HMO share nationally in the individual market is about half of what it is in the employer-sponsored group market, although it is increasing. In New York, for example, the HMO share of the individual health insurance market has increased from about 7 percent in 1992 to 40 percent in 1996.
Partly in response to insurance reforms enacted there, at least one large individual market carrier withdrew its indemnity products altogether and replaced them with an HMO product, according to a New York trade association official. The trend in New York is expected to continue in response to recent state measures designed to encourage HMO participation in the individual market. In Illinois, a representative of one of the largest individual market carriers told us the carrier soon expects to introduce its first individual HMO product. In Colorado, an HMO plan is now the most popular product sold in the individual market. Finally, whereas the group market is dominated by large, national carriers such as Aetna and Prudential, carriers in the individual insurance market tend to be smaller or regional in focus. Blues plans are typically a dominant force in state individual markets. Also, few of the largest individual market carriers in the states we visited were among the 100 largest U.S. life and health insurance carriers. In contrast to employment-based group insurance, individuals may choose from multiple cost-sharing arrangements and are generally subject to relatively high out-of-pocket costs. Under employer-sponsored coverage, the range of available deductibles is narrower, and total out-of-pocket costs are capped at a lower level than under most individual market products. For plans offered by medium and large employers, annual deductibles are most commonly between $100 and $300, while limits on total out-of-pocket expenses are $1,500 or less for most employees. In the individual market, annual deductibles are commonly between $250 and $2,500, while limits on total out-of-pocket costs typically start at $1,200 and may exceed $6,000 annually. Insurance contracts require policyholders to contribute to the cost of benefits received. Under traditional, major medical expense plans, consumers must pay annual deductibles and coinsurance up to a specified total limit on out-of-pocket expenses. HMOs typically require consumers to make copayments for each service rendered until an annual maximum is reached. The cost-sharing arrangement selected by the consumer is a key determinant of the price of an individual insurance product. The more potential out-of-pocket expenses the consumer could incur, the lower the premium will be. To illustrate, table 3.2 shows how premiums for a comprehensive major medical expense policy offered by one Colorado carrier decrease as annual deductibles increase. Premiums shown are for a healthy 30-year-old, nonsmoking male living in a major metropolitan area of the state. Products offered in the states we visited typically included a wide range of cost-sharing alternatives. Most commonly selected by consumers were deductibles from $250 to $2,500, although deductibles of $5,000, $10,000, $50,000, and even $100,000 were also available. (Under the recently enacted national Health Insurance Portability and Accountability Act of 1996, high-deductible plans to be used in conjunction with medical savings accounts are defined as those with deductibles of between $1,500 and $2,250 for individuals.) HMO copayment requirements were typically $10 or $15 for a physician office visit and $100 to $500 per hospital admission. Total annual limits on out-of-pocket costs were most commonly between $1,500 and $6,000. Table 3.3 illustrates examples of cost-sharing options available for selected commonly sold comprehensive products. 
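The premium-for-deductible tradeoff shown in table 3.2 can be expressed as simple arithmetic: a consumer's total annual cost is the year's premiums plus whatever out-of-pocket spending the plan's deductible, coinsurance, and out-of-pocket limit leave to the consumer. The following sketch illustrates this mechanism with hypothetical plan parameters; the premiums, deductibles, and limits below are illustrative assumptions, not figures from table 3.2.

    # Illustrative only: premiums, deductibles, coinsurance rates, and
    # out-of-pocket limits are hypothetical, not figures from table 3.2.
    def total_annual_cost(monthly_premium, deductible, coinsurance,
                          oop_limit, medical_bills):
        # The consumer pays the deductible first, then a coinsurance share
        # of remaining bills, capped at the plan's out-of-pocket limit.
        if medical_bills <= deductible:
            out_of_pocket = medical_bills
        else:
            out_of_pocket = deductible + coinsurance * (medical_bills - deductible)
        out_of_pocket = min(out_of_pocket, oop_limit)
        return 12 * monthly_premium + out_of_pocket

    for bills in (0, 1000, 10000):
        low = total_annual_cost(110, 250, 0.20, 1500, bills)    # low-deductible plan
        high = total_annual_cost(60, 2500, 0.20, 6000, bills)   # high-deductible plan
        print(f"${bills} in bills: low-deductible ${low:,.0f}, "
              f"high-deductible ${high:,.0f}")

Under these assumed parameters, the high-deductible plan costs less in a healthy year ($720 versus $1,320) but considerably more in a high-cost year ($4,720 versus $2,820), which is precisely the tradeoff consumers weigh in selecting a cost-sharing option.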
Because consumers pay the entire cost of coverage, affordability is often of paramount concern. Consequently, consumers who perceive their risk of needing medical care to be minimal but want coverage in case of an accident or catastrophic illness may choose very high cost-sharing provisions to obtain the lowest possible premium. Other consumers, regardless of their health status, may only be able to afford insurance with very high cost-sharing provisions. Consumers who anticipate a greater likelihood of requiring medical care may be willing to pay higher premiums to protect themselves from large out-of-pocket expenses for coinsurance, deductibles, or copayments.

Carrier and insurance department representatives with whom we spoke suggested that the level of consumer cost-sharing has been increasing in recent years, reflecting consumers' goal of keeping premiums affordable. One national carrier representative said that deductibles seem to be increasing every year. Among the carrier's new enrollees in 1995, 40 percent chose $500 deductibles, 50 percent chose $1,000 deductibles, and the remaining 10 percent chose deductibles from $2,500 to $10,000. A representative of another national carrier said that the premiums for its $250- and $500-deductible products had become too expensive and that these products were thus no longer offered.

State regulation also influences the range of cost-sharing options available to consumers. For example, under individual market reforms enacted in New Jersey, carriers are limited to offering only standard plans with prescribed ranges of cost-sharing options. All individual market products sold in the state are limited to deductibles of $150, $250, $500, or $1,000 for an individual enrollee. In contrast, one carrier in Arizona, where cost-sharing arrangements are not subject to state regulation, offers deductibles ranging from $1,000 to $100,000.

Comprehensive individual coverage includes major medical expense plans—traditional fee-for-service plans and preferred provider organization (PPO) arrangements—and standard HMO plans. While our study focused on comprehensive individual insurance market products, it should be noted that a wide range of less comprehensive, or limited benefit, products are also sold in the individual market. These products, which are sometimes confused with comprehensive products, are discussed in figure 3.1. Under most major medical expense plans, a wide range of benefits is covered, including in- and outpatient hospital, physician, and diagnostic services; specialty services, such as physical therapy and radiology; and prescription drugs. Standard HMO plans typically cover an equally or more comprehensive range of benefits and are also more likely to offer a broad range of preventive care, such as periodic examinations, immunizations, and health education. Moreover, these benefits were generally comparable to benefits covered under employer-sponsored group plans.

We reviewed the benefit structure of commonly sold comprehensive products in the states we visited. These products included traditional indemnity or fee-for-service, PPO, and HMO plans. Most of the plans covered a wide range of benefits, as shown in figure 3.2. Five benefits—hospice care, substance abuse treatment, maternity services, preventive care for adults, and well baby/child care—were less consistently covered. The latter three benefits were covered by each of the HMOs. Among plans that did not offer maternity coverage, half offered it as an additional rider.
Beyond characteristics such as how consumers access the market, the number and types of health plans available, and the multiple cost-sharing options, other aspects of the individual market also distinguish it from the employer-sponsored group market. Restrictions on who may qualify for coverage and the premium prices charged can have direct implications for consumers seeking to purchase coverage, and their effects are often exacerbated by the fact that individuals must absorb the entire cost of their health coverage, whereas employers usually pay for a substantial portion of their employees' coverage. A consumer may find affordable coverage or may find coverage only at prohibitive rates. A consumer may find coverage available only if conditioned upon the permanent exclusion of an existing health condition or may be locked out of the private health insurance market entirely. Consumers may be forced to turn to state high-risk programs or an insurer of last resort for coverage—at a significantly higher premium—or go without any health insurance coverage whatsoever.

Unlike the employer-sponsored market, where the price for group coverage is based on the risk characteristics of the entire group, prices in the individual markets of most states are based on the characteristics of each applicant. Characteristics commonly considered to determine premium rates in both markets include age, gender, geographic area, tobacco use, and family size. For example, on the basis of past experience, carriers anticipate that the likelihood of requiring medical care increases with age. Consequently, a 55-year-old in the individual market pays more than a 25-year-old for the same coverage. Similarly, females in this market may be charged a higher premium than males of the same age group because of the costs associated with pregnancy and the treatment of other female health conditions. If these individuals were in the group market, however, they would usually pay the same amount as the other members of their group, regardless of their specific age or gender. Premiums may also vary geographically. In some states, premium prices are higher in urban areas than in rural areas because of higher medical costs. Likewise, smokers are expected to incur greater medical expenses than nonsmokers and are thus often charged higher premiums in the individual market. Finally, family composition is also factored into premium price, as a larger family would be expected to incur higher medical expenses than a smaller family. Treatment of this last factor is generally similar in the individual and group markets.

Carriers establish standard rates for each combination of demographic characteristics. Table 4.1 provides examples of the range in monthly premium rates some carriers we visited charge individuals, depending on their age, gender, or geographic location, in states that do not strictly regulate carrier rating practices. The low end of the range generally represents the premium price charged to a male about age 25 who does not live in a metropolitan area. In contrast, the high end usually represents the most expensive insured in this market, a male aged 60 to 64 who lives in a metropolitan area. Absent state restrictions, carriers also evaluate the health status of each applicant to determine whether an applicant's health status will result in an increase to the standard premium rate, the exclusion of a body part or an existing health condition, or the denial of the applicant altogether.
This process is called medical underwriting. Under medical underwriting, carriers evaluate an applicant's health status on the basis of responses to a detailed health questionnaire. On the questionnaire, applicants must indicate whether they or any family member to be included on the policy have received medical advice or treatment of any kind within their lifetime or within a more limited time frame, such as the previous 5 to 10 years, and whether they have experienced a broad range of specifically identified symptoms, conditions, and disorders. Applicants must also indicate whether they have any pending treatments or surgery, are taking any prescription medication, or have ever been refused or canceled from another health or life insurance policy. On the basis of these responses, carriers may request additional information—typically medical records—or require an applicant to undergo a physical examination. Some carriers require physical examinations regardless of applicants' responses to their questionnaires. The information obtained through this process is used by carriers to determine whether to charge a higher than standard premium rate, exclude from coverage a body part or an existing health condition, or deny the applicant coverage altogether. The criteria used to make these determinations vary among carriers and are considered proprietary. Certain conditions are commonly treated by carriers in the same manner, however. Table 4.2 lists examples of some carriers' treatment of certain health conditions in states that do not prohibit medical underwriting.

The carriers we visited generally accepted the majority of applicants for coverage at the standard premium rate. Where state mandates did not exist, however, these carriers denied coverage to a significant minority of applicants. Denial rates ranged from zero for carriers in states such as New Jersey, New York, and Vermont, where the law guarantees coverage, to about 33 percent; carriers in states that do not prohibit medical underwriting typically denied coverage to about 18 percent of all applicants. Individuals with acquired immunodeficiency syndrome (AIDS) or other serious conditions, such as heart disease and leukemia, are virtually always denied coverage. We also found examples in which individuals with less severe conditions, such as attention deficit disorder and chronic back pain, could be denied coverage by some of the carriers. Furthermore, at least two HMOs we visited almost always deny coverage to any applicant who smokes. Table 4.3 lists the estimated declination rates for some of the largest carriers we visited.

Some officials suggested that these declination rates could be understated for at least two reasons. First, insurance agents are usually aware of which carriers medically underwrite and have a sense as to whether applicants will be accepted or denied coverage. Consequently, agents will often deter individuals with a health condition from even applying for coverage from certain carriers. In fact, officials from one carrier in Arizona told us that since agents discourage those who would not qualify for coverage from applying, their declination rate is not an accurate indicator of the proportion of potential applicants who are ineligible for coverage. Second, the declination rates do not take into account carriers that attach riders to policies to exclude certain health conditions or carriers that charge unhealthy applicants a higher, nonstandard rate for the same coverage.
Thus, although a carrier may have a comparatively low declination rate, it may attach such riders and charge higher, nonstandard premiums to a substantial number of applicants. In fact, a national survey of insurers showed that 20 percent of all applicants were offered a policy with an exclusion rider, a rated-up premium, or both. The majority of the indemnity insurers we visited will add riders to policies that exclude certain conditions either temporarily or permanently. For example, knee injuries related to skiing accidents may be explicitly excluded from coverage, as may a more chronic condition such as asthma. Also, a person who suffers from chronic back pain may have all costs associated with treatment of that part of the body excluded from coverage. Similarly, some carriers we visited will accept an applicant with certain health conditions but will charge him or her a significantly higher premium to cover the higher expected costs. For example, an Illinois carrier charges 2 to 3 percent of its enrollees a nonstandard rate; these enrollees pay approximately double the standard rate. Also, at least one carrier we visited charges individuals, depending on their medical history, a standard or nonstandard rate for its HMO product. The nonstandard rate is approximately 15 percent higher.

Individual consumers may be affected differently by the varying methods carriers use in determining eligibility and price. A consumer may find affordable coverage, may only find coverage that explicitly excludes an existing health condition, or may find coverage only at prohibitive rates. Many consumers may be locked out of the private health insurance market entirely. Tables 4.4 and 4.5 provide examples of what individuals may face, given particular demographic characteristics and health conditions, when attempting to purchase individual insurance from carriers in the states we visited. In addition to demographic characteristics and health status, the extent to which the state regulates the individual insurance market also influences eligibility and premium price decisions. Premium price comparisons among states can be misleading, however, because prices also reflect regional and state-specific factors, such as differences in cost of living and health care utilization.

As discussed, carriers, absent regulation that prohibits the practice, generally base standard premium rates on the demographic characteristics of each applicant. Such demographic characteristics may include age, gender, geographic area, and family composition. Table 4.4 shows this price variation. Using the monthly premium charged to a healthy, 25-year-old male as a baseline, it compares the differences in prices certain carriers will charge to other healthy individuals on the basis of their age and gender. Carriers anticipate that the likelihood of needing medical care increases with age. In the states we visited, all the carriers, except those prohibited by law from doing so, charged higher premiums to older applicants. For example, an Arizona PPO plan cost a 25-year-old male $57 a month and a 55-year-old male $191 a month for the same coverage, a difference of $134. Similarly, a 55-year-old male would have paid $243 more than a 25-year-old male for a PPO product from one Illinois carrier. The carriers we visited were not as consistent in their treatment of gender.
Several carriers charged females a higher premium than males of the same age group because of the costs associated with the female reproductive system and pregnancy. For example, 25-year-old females in Illinois and Arizona paid $31 more each month than males of the same age for the same PPO coverage and $36 more each month for a fee-for-service plan in Colorado. All applicants to a Colorado HMO and a North Dakota fee-for-service plan, however, paid the same monthly premium, regardless of gender. Premium prices also varied depending on the geographic area where the applicant resides. For example, the monthly premium for the standard HMO product in New York may cost as much as $289 in metropolitan New York City or as little as $145 in more rural areas of the state.

As the table indicates, all applicants in New Jersey, New York, and Vermont, regardless of age or gender, would pay exactly the same amount for the same insurance coverage from the same carrier. In these states, the individual insurance reform legislation requires community rating, a system in which the cost of insuring an entire community is spread equally among all members of the community, regardless of their demographic characteristics or health status. Reform legislation in New York does allow for limited adjustments by geographic region. In New Jersey's individual market, the premium price of the sample product ranges from $155 to $565 among the state's carriers. Although this is a fairly wide price range, all applicants are eligible for and may select from among any of these plans.

The prices listed in table 4.4 generally are carriers' standard rates charged to individuals with the specified demographic characteristics. Absent state restrictions, most carriers will also evaluate the health status of each applicant to determine whether to charge an increase over the standard premium rate, to exclude a body part or existing health condition from coverage, or to deny the applicant coverage altogether. Some carriers also regard smoking as a risk characteristic and consider it when they determine an applicant's eligibility and premium price. Table 4.5 provides examples of what a 25-year-old male with varying habits or health conditions might experience in terms of availability and affordability of coverage in the individual insurance market in the states we visited. Again, the baseline is the monthly premium price charged to a healthy, 25-year-old male. Three of the 11 carriers shown in table 4.5 charge smokers $7 to $27 more each month for the same coverage, and one HMO automatically denies coverage to all smokers. At least two of the carriers will attach a rider to a policy that explicitly excludes coverage of a preexisting knee condition and will not cover any costs associated with treatment of that part of the body. While three of the carriers automatically deny an applicant with preexisting diabetes, one will accept the applicant but will charge him or her a significantly higher premium to cover the higher expected costs. Finally, an applicant who had cancer within the past 3 years would almost always be denied coverage by all carriers except those in the guaranteed-issue states of New Jersey, New York, and Vermont. Individuals in these states, regardless of their health condition, will generally pay the same amount as healthy individuals for similar coverage.
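Where rating is unregulated, the pattern shown in tables 4.4 and 4.5 can be thought of as a base rate multiplied by a factor for each rating characteristic. The sketch below illustrates the mechanics; all factor values are hypothetical assumptions for illustration and are not drawn from any carrier's actual rate manual.

    # Illustrative demographic rating: a base rate multiplied by factors.
    # Every factor value here is hypothetical, not any carrier's actual table.
    BASE_MONTHLY_RATE = 57.00   # healthy 25-year-old male, nonmetropolitan area

    AGE_FACTOR = {25: 1.0, 40: 1.6, 55: 3.3}
    GENDER_FACTOR = {"male": 1.0, "female": 1.5}
    AREA_FACTOR = {"rural": 1.0, "metro": 1.3}
    SMOKER_FACTOR = {False: 1.0, True: 1.15}

    def standard_rate(age, gender, area, smoker):
        return (BASE_MONTHLY_RATE * AGE_FACTOR[age] * GENDER_FACTOR[gender]
                * AREA_FACTOR[area] * SMOKER_FACTOR[smoker])

    print(standard_rate(25, "male", "rural", False))   # 57.0, the baseline
    print(standard_rate(55, "male", "metro", True))    # about 281, the costliest profile

Under community rating, as in New Jersey, New York, and Vermont, every factor table would collapse to 1.0 (with, in New York, a limited geographic exception), so every applicant would pay the base rate.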
In non-guaranteed-issue states, applicants who have a history of cancer or other chronic health conditions are likely to have a difficult time obtaining coverage. In many of these states, high-risk insurance pools have been created to act as a safety net to ensure that these otherwise uninsurable individuals can obtain coverage, although at a cost that is generally 50 percent higher than the average or standard rate charged in the individual insurance market for a comparable plan. Individuals in Colorado, Illinois, and North Dakota who are denied coverage by one or more carriers can obtain insurance through the high-risk pool for $52 to $122 more each month. Arizona is the only state we visited that had neither guaranteed issue nor a high-risk pool. Unhealthy individuals in this state who are most in need of coverage are not guaranteed access to any insurance product and will most likely be uninsured.

Several state insurance regulators and a representative of the National Association of Insurance Commissioners (NAIC) expressed concern that some carriers may use closed block durational rating, a carrier rating practice used in the individual health insurance markets of many states. Under this practice, carriers offer a guaranteed renewable product at an artificially low rate to attract large numbers of new enrollees and increase their market share. These carriers eventually increase premium rates to more adequate levels and close the block of business by no longer accepting any new applicants. Because insurance pools rely on a steady influx of new, healthy applicants to keep rates stable, the rates in the closed block rise even faster. Healthy members of the block tend to migrate—and are sometimes actively solicited by the carriers—to lower priced products that are not similarly available to the unhealthy members of the block. The unhealthy members must either remain in the closed block with its spiral of poorer risks and increasing rates or leave the carrier and face the uncertain prospect of obtaining coverage from another carrier on the open market. Consequently, this practice allows carriers to shed poorer risks and retain favorable risks. Though the practice is legal in most states, some regulators strongly object to it. They suggest it penalizes those individuals who have dutifully purchased and maintained their health coverage but eventually become unhealthy. Some states, through guaranteed-issue requirements and premium rate restrictions, have prohibited this practice.

Although medical underwriting results in the exclusion of individuals from the private health insurance market, many carrier representatives and analysts suggest that it plays a role in keeping insurance premiums more affordable for most individuals. They contend that coverage of uninsurable individuals is a public policy concern and should be addressed through public initiatives such as high-risk pools, not through the private sector insurance market. Insurance industry representatives explain that where states prohibit carriers from using medical underwriting, individuals are essentially guaranteed access to insurance regardless of their health status. They suggest that guaranteed access to coverage can result in adverse selection. Adverse selection refers to the tendency of some individuals to refrain from purchasing insurance coverage while they are younger or healthier because they know it will be available to them in the future should their health status decline.
If a significant number of younger, healthier individuals decide to forgo coverage, the average health status of those remaining in the insured pool diminishes. Higher claims costs for this less healthy group will result in higher premium prices, which, in turn, could force additional healthy individuals to forgo coverage. The resulting spiral of poorer risks and higher premiums could make insurance less affordable for everyone. Many state insurance regulators and analysts disagree with this premise or suggest that its impact is overstated by the insurance industry. They present data to support their position, as do insurance industry representatives to support theirs. The appropriate degree of regulatory intervention in private insurance markets will continue to be a subject of debate, underscoring the importance of thorough, ongoing evaluation of the impact of various state insurance reforms.

A wide range of initiatives to increase access to various segments of the health insurance market has been undertaken by states and, more recently, the federal government. While almost all of the states have enacted insurance reforms designed to, among other things, improve portability, limit waiting periods for coverage of preexisting conditions, and restrict rating practices for the small employer health insurance market, they have been slower to introduce similar reforms to the individual market. From 1990 through 1995, a number of states passed similar insurance reforms in the individual market, and by year-end 1995, about 25 states had created high-risk insurance pools to provide a safety net for otherwise uninsurable individuals. Eight states and the District of Columbia have Blue Cross and Blue Shield plans that provide all individuals a product on an open enrollment basis. At least seven states have no insurance rating restrictions, operational high-risk pool, or insurer of last resort. Table 5.1 catalogs state initiatives to increase individuals' access to health insurance. Recent legislative efforts at the federal level also attempt to increase individuals' access to this health insurance market.

To improve the availability and affordability of health insurance coverage for individual consumers, a number of states have passed legislation in recent years to modify the terms and conditions under which health insurance is offered to this market. These reforms may seek to restrict carriers' efforts to limit eligibility and charge higher premiums because of an individual's health history or demographic characteristics. We identified 25 states that from 1990 through 1995 had passed one or more reforms in an effort to improve individuals' access to this market. We found substantial variations in the ways states approached reform, although commonly passed reforms included guaranteed issue, guaranteed renewal, limitations on preexisting condition exclusions, portability, and premium rate restrictions. More states may soon enact reforms because NAIC recently recommended two model laws for individual insurance market reform. An explanation of the reforms follows. Table 5.2 catalogs the reforms passed by each state.

Guaranteed issue requires all carriers that participate in the individual market to offer at least one plan to all individuals and accept all applicants, regardless of their demographic characteristics or health status.
We found that 11 states required all carriers participating in the individual market to guarantee issue one or more health plans to all applicants. This provision, however, did not necessarily guarantee coverage to all individuals on demand. To limit adverse selection, carriers in most states did not have to accept individuals who qualified for employer- or government-sponsored insurance. Also, some states only required carriers to accept all applicants during a specified and usually limited open enrollment period. States also varied in the number of plans they required carriers to guarantee issue. In states such as Idaho, the legislation explicitly defined a basic and standard benefits plan that each carrier must offer all individuals. Other states, like Maine and New Hampshire, required carriers to guarantee issue all health plans they sold in the individual market. New Jersey explicitly defined and limited the number and type of plans carriers offered in the market.

Guaranteed-renewal provisions prohibit carriers from declining to renew coverage for plan participants because of their health status or claims experience. Exceptions to guaranteed renewal include cases of fraud or failure to pay premiums. A carrier may choose not to renew all of its individual policies by exiting a state's market but is then prohibited from reentering the market for at least 5 years.

Twenty-two states limited the period of time coverage could be excluded for a preexisting condition. States typically defined a preexisting condition as a condition that would have caused an ordinarily prudent person to seek medical advice, diagnosis, care, or treatment during the 12 months immediately preceding the effective date of coverage; a condition for which medical advice, diagnosis, care, or treatment was recommended or received during the 12 months immediately preceding the effective date of coverage; or a pregnancy existing on the effective date of coverage. Most reform states allowed carriers to exclude coverage for a preexisting condition for up to 12 months. Some states, however, such as Oregon and Washington, limited this exclusionary period to 6 or 3 months.

Portability provisions require carriers to waive any preexisting condition limitations for covered services if comparable services were previously covered under another policy, and this previous policy was continuous to a date not more than a specified number of days before the new coverage went into effect. Among states that had passed portability reforms, the specified number of days ranged from 0 to 90. Six states had enacted portability provisions of 30 days, the most common duration among reform states.

Eighteen of the 25 states included provisions in their legislation that in some way attempted to limit the amount by which carriers could vary premium rates or the characteristics that could be used to vary these rates. Among the seven states we visited, New Jersey, New York, and Vermont restricted carriers' rating practices and generally required all carriers to community rate their individual products with limited or no qualifications. Under community rating, carriers must set premiums at the same level for all plan participants. That is, all participants are generally charged the same price for similar coverage regardless of age, gender, health status, or any other factor. North Dakota had limited rating restrictions, and Arizona, Colorado, and Illinois essentially had no rate limitations in place.
Most of the 18 states with restrictions, however, allowed carriers to vary, or modify, the premium rates charged to individuals within a specified range according to differences in certain demographic characteristics, such as age, gender, industry, geographic area, and smoking status. For example, New Hampshire allowed carriers to modify premium rates only for differences in age, while South Carolina allowed carriers to set premium rates using differences in age, gender, geographic area, industry, smoking status, occupational or avocational factors, and any additional characteristics not explicitly specified. Most of these states also limited the range over which carriers could vary rates among individual consumers. Carriers usually establish an index, or base, rate, and all premium prices must fall within a given range of this rate. For example, in Idaho premium rates were permitted to vary by no more than +/-25 percent from the applicable index rate and only for differences in age and gender. Carriers in Louisiana were allowed to vary premium rates more liberally. The state's legislation allowed carriers to vary premium rates by +/-10 percent on the basis of health status and allowed unlimited variation for specified demographic characteristics and other factors approved by the Department of Insurance.

In addition, about 25 states have created high-risk insurance programs that act as a safety net to ensure that individuals who need coverage can obtain it, although at a cost that is generally 50 percent higher than the average or standard rate charged in the individual insurance market for a comparable plan. To qualify for the high-risk pool, applicants generally have to demonstrate they have been rejected by at least one carrier for health reasons or have one of a number of specified health conditions. Officials from at least two of the state insurance departments we visited suggested that their states' high-risk pools ensure the availability of health insurance to all who need it and prove that no access problem exists—provided the individual can afford the higher priced coverage. Although high-risk pools exist as a safety net for otherwise uninsurable individuals, they enroll relatively few people. In fact, in at least 22 of these 25 states, less than 5 percent of those under 65 with individual insurance obtain coverage through the high-risk pool. Only in Minnesota does the pool's enrollment exceed 10 percent of the individually insured population. The low enrollment in these high-risk pools may be due in part to limited funding, lack of public awareness, and their relative expense. Some states limit enrollment and may have waiting lists. For example, California has an annual, capped appropriation to subsidize the cost of enrollees' medical care and curtails enrollment in the program to ensure that it remains within its budget. Also, insurance department officials in each of the states we visited with high-risk pools recognized that the public is often unaware that these pools exist, even though carriers are often required by law to notify rejected applicants of them. Officials in two of these three states were generally unaware of the extent to which carriers complied with this requirement. And finally, although these programs provide insurance to individuals who are otherwise uninsurable, they remain relatively expensive, and many people are simply unable to afford this higher priced coverage.
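To make the rate-band restrictions described above concrete: under an Idaho-style band, every premium a carrier charges must fall within +/-25 percent of its index rate, so a carrier with a $200 index rate could charge no less than $150 and no more than $250. A minimal sketch of such a compliance check, using a hypothetical index rate and hypothetical proposed premiums:

    # Checks a proposed premium against a state rate band, e.g., Idaho's
    # +/-25 percent around the carrier's index rate. Figures are hypothetical.
    def within_rate_band(index_rate, proposed_rate, band=0.25):
        low = index_rate * (1 - band)
        high = index_rate * (1 + band)
        return low <= proposed_rate <= high

    INDEX_RATE = 200.00                          # carrier's index (base) rate
    print(within_rate_band(INDEX_RATE, 240.00))  # True: within the $150-$250 band
    print(within_rate_band(INDEX_RATE, 275.00))  # False: exceeds the +25 percent ceiling

A Louisiana-style rule would be modeled differently: a narrow +/-10 percent band for health status combined with unlimited variation for approved demographic factors.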
In addition to the 11 states that require all carriers to guarantee issue at least one health plan to all individuals, the Blue Cross and Blue Shield plans in 8 states and the District of Columbia voluntarily offer at least one product to individuals during an annual open enrollment period, which usually lasts 30 days. Although these plans accept all applicants during this open enrollment period, they are not limited in the premium price they can charge an individual applicant. Our analysis also shows that by the end of 1995, seven states neither had passed reforms that attempted to increase access to the individual insurance market nor had an operational high-risk pool or a Blues plan that acted as insurer of last resort. In these states, individuals who are unhealthy, and thus most likely to need insurance coverage, may be unable to obtain it. These states are Alabama, Arizona, Delaware, Hawaii, Nevada, South Dakota, and Texas.

In addition to state efforts, recently passed federal legislation also attempts to increase access to the individual health insurance market. The Health Insurance Portability and Accountability Act of 1996 will affect the individual market in several ways. It will, among other things, guarantee access to the individual market to consumers with previous qualifying group coverage, guarantee the renewal of individual coverage, authorize federally tax-exempt medical savings accounts (MSAs), and increase the tax deduction for health insurance for self-employed individuals. Under this act, individuals who have had at least 18 months of continuous coverage have guaranteed access to an individual market product and do not need to fulfill a new waiting period for preexisting conditions if they move from a group plan to an individual market plan. It is important to note that although this law guarantees portability, it in no way limits the premium price carriers may charge individuals for this coverage. Also, with some exceptions, the legislation requires all carriers that provide individual health insurance coverage to renew or continue in force such coverage at the option of the individual.

In addition, self-employed individuals who purchase health insurance will, beginning in 1997, have the option of establishing tax-deductible MSAs. An MSA is an account into which an individual deposits funds for later payment of unreimbursed medical expenses. To be eligible for the tax deduction, self-employed individuals must be covered under a high-deductible health plan (defined as a health plan with an annual deductible of $1,500 to $2,250 for an individual and $3,000 to $4,500 for family coverage) and have no other comprehensive coverage. As noted in chapter 3, many participants in the individual market already purchase high-deductible health coverage. An individual with an MSA can claim a tax deduction for 65 percent of his or her health plan's deductible for self-only coverage and 75 percent for family coverage.

Finally, the act increases the tax deductibility of health insurance for self-employed individuals, who constitute about one-fourth of individual market participants. Currently, self-employed individuals may deduct 30 percent of the amount they paid for health insurance for themselves as well as for their spouse and dependents. Beginning in 1997, these individuals may deduct 40 percent of this cost; 45 percent in 1998 through 2002; 50 percent in 2003; 60 percent in 2004; 70 percent in 2005; and 80 percent in 2006 and thereafter.
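The act's MSA and self-employed deduction provisions reduce to simple percentages of the figures described above. The sketch below works through that arithmetic; it is a plain reading of the percentages as described in this chapter, applied to a hypothetical $2,000-deductible plan, and is not a complete statement of the act's rules.

    # Arithmetic of the act's provisions as described above; the $2,000
    # plan deductible used in the example is hypothetical.
    MSA_DEDUCTION_SHARE = {"self": 0.65, "family": 0.75}

    def msa_deduction_limit(plan_deductible, coverage):
        return MSA_DEDUCTION_SHARE[coverage] * plan_deductible

    # Phase-in schedule for the self-employed health insurance deduction;
    # each entry gives the share first applicable in that year.
    PHASE_IN = {1997: 0.40, 1998: 0.45, 2003: 0.50, 2004: 0.60,
                2005: 0.70, 2006: 0.80}

    def self_employed_share(year):
        # The 1998 rate of 45 percent runs through 2002; the 2006 rate of
        # 80 percent applies thereafter. Years before 1997 are out of scope.
        return PHASE_IN[max(y for y in PHASE_IN if y <= year)]

    print(msa_deduction_limit(2000, "self"))   # 1300.0: 65 percent of $2,000
    print(self_employed_share(2000))           # 0.45 (the 1998-2002 rate)
    print(self_employed_share(2010))           # 0.8 (2006 and thereafter)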
While employer-sponsored group plans are still the dominant source of health insurance coverage for most Americans, millions depend on an accessible and affordable individual market outside the workplace. Many Americans, including family farmers, self-employed individuals, and those working for small firms that do not offer coverage, must rely on the individual market as their permanent source of health insurance coverage. Others rely on this market between jobs and during other periods of transition. Recent trends suggest a growing share of the U.S. population will probably turn to the individual market at some point in their lives. The days of rapid expansion of both private employer and government program coverage are probably behind us. Meanwhile, employer downsizing continues, job mobility increases, and the ranks of part-time and contract workers grow.

The individual insurance market is complex, and consumers, unlike those who have access to employer-sponsored plans, are largely on their own in obtaining and financing coverage. Consumers can access the market in a variety of ways; must choose among multiple, usually nonstandardized, products offered by multiple carriers; and must select one of many cost-sharing options, each of which will have a different impact on the amount of money consumers will ultimately pay. Further adding to the complexity of this market is its high geographic variability. Depending on the state or even on the markets within a state, consumers may face an entirely different set of choices.

Many consumers face barriers to coverage in the individual market. Absent state restrictions, carriers base coverage and pricing decisions on each individual's demographic characteristics and health status. Thus in most states, those who are older or in poor health may be charged significantly higher premiums or may be denied coverage altogether. Among those with coverage in the individual market, many may be underinsured. Very high deductible plans, which carry lower premiums but greater financial risk for consumers, are increasingly sold. Many consumers may purchase these plans because they cannot otherwise afford the premiums, suggesting that, unlike under medical savings account arrangements, they may have no reserve with which to pay the high deductibles. Some consumers can only obtain coverage that permanently excludes the very medical condition for which they are most likely to need care. And other consumers—intentionally or unintentionally—purchase limited benefit policies as their only source of coverage.

Twenty-five states have recently passed legislative reforms for their individual health insurance markets, and more are likely to follow. The reforms vary widely in scope, from limited measures, such as those intended only to limit the length of preexisting condition waiting periods a carrier may impose, to comprehensive reforms requiring carriers to provide coverage to all who apply and to use community rating to set premiums. Some states use other measures to increase individual market access or affordability, such as high-risk pools and insurers of last resort. At the federal level, the Health Insurance Portability and Accountability Act of 1996 is a recent example of federal legislation that will affect the individual health insurance market. The act guarantees access to the individual market to consumers with qualifying previous group coverage and guarantees the renewability of individual coverage.
For the self-employed, the act authorizes federally tax-deductible medical savings accounts and increases the tax deductibility of health insurance. The importance of the individual insurance market to millions of Americans is a factor to be considered in weighing any further incremental measures to improve the accessibility and affordability of private health insurance.
Pursuant to a congressional request, GAO provided information on the private individual health insurance market, focusing on the: (1) size of the market and characteristics of its participants; (2) structure of the market, including how individuals access the market, the prices, other characteristics of health plans offered, and the number of individual carriers offering plans; and (3) insurance reforms and other measures states have taken to increase individuals' access to health insurance. GAO found that: (1) about 10.5 million Americans under 65 years of age relied on private individual health insurance as their only source of health coverage during 1994; (2) when compared with those enrolled in employer-sponsored group coverage, individual health insurance enrollees are, on average, older and have lower income, but they are similar in their self-reported health status; (3) individual insurance is more prevalent among particular segments of the labor force, such as the self-employed and farm workers; (4) individuals must identify and evaluate multiple health insurance products and then obtain and finance the coverage on their own; (5) individuals in the states reviewed could select products from 7 to over 100 carriers, with deductibles ranging from $250 to $10,000; (6) in the majority of states, which permit medical underwriting, individuals may be excluded from the private insurance market, may only be able to obtain limited benefit coverage, or may pay premiums that are significantly higher than the standard rate for similar coverage; (7) carriers in these states determine premium price and eligibility on the basis of the risk indicated by each individual's demographic characteristics and health status; (8) carriers GAO visited declined coverage to up to 33 percent of applicants because of medical conditions, such as acquired immunodeficiency syndrome and heart disease; (9) if they do not decline coverage, carriers may permanently exclude from coverage certain conditions or body parts, or charge significantly higher premiums to those expected to incur large health care costs; (10) at least 43 states have sought to increase the health coverage options available to otherwise uninsurable individuals; and (11) a new federal law contains provisions intended to enhance access to the individual insurance market, particularly regarding portability and guaranteed renewal.
A federal grant is an award of financial assistance from a federal agency to an organization to carry out an agreed-upon public purpose. A Direct Payment for Specified Use (direct assistance) is an award of financial assistance from the federal government to individuals, private firms, and other private institutions to encourage or subsidize a particular activity by conditioning the receipt of the assistance on a particular performance by the recipient. As such, federal grants and direct assistance programs do not include solicited contracts for the procurement of goods and services for the federal government. Based on our analysis of fiscal years 2004 and 2005 data from the Federal Assistance Award Data System (FAADS), federal agencies collectively awarded grants and direct assistance of approximately $300 billion annually. Further analysis of the FAADS data indicates that approximately 80 percent of federal grants and direct assistance consist of federal funds provided to state and local governments, which, in turn, disburse funds to the ultimate recipients. Consequently, only about 20 percent of awarded funds are provided directly from the federal government to the organization that ultimately spends the money.

Governmentwide policies affecting the award and administration of grants to nongovernmental entities are covered in OMB Circular No. A-110, Uniform Administrative Requirements for Grants and Agreements with Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations. OMB Circular No. A-102, Grants and Cooperative Agreements with State and Local Governments, provides governmentwide guidance for administering grants provided to state and local governments and prescribes procedures similar to those included in Circular No. A-110. Direct assistance programs are not subject to the same governmentwide policies as grants; procedures governing the application and award processes for direct assistance are prescribed in guidance and regulations promulgated by the cognizant federal agency responsible for administering the program. Most grant applicants that apply directly to the federal government are required to complete an Application for Federal Assistance, Standard Form (SF) 424. The SF 424 provides federal agencies with entity information, such as name, employer identification number, address, and a descriptive title of the project for which the grant will be used. The applicant is required to certify that the information provided on the SF 424 is true and correct and to indicate whether the applicant is currently delinquent on any federal debt.

While most federal grant and direct assistance recipients pay their federal taxes, we identified tens of thousands of grant and direct assistance recipients that collectively owed about $790 million in federal taxes as of September 2006. These tax debts were owed by entities that received federal payments directly from federal payment systems during fiscal years 2005 and 2006 and by individuals who participated as landlords in HUD's Section 8 tenant-based housing program during the same period. We used IRS's September 2006 unpaid assessments file data to calculate the amount of taxes owed at or about the time the various grant recipients received their grant payments. Specifically, we found that 2,000 of about 32,000 recipients (about 6 percent) who received federal grant and direct assistance benefits directly from three of the largest federal payment systems had more than $270 million of unpaid federal taxes.
About $110 million of the $270 million in unpaid taxes represents unpaid payroll taxes. In addition, about 37,000 of over 1 million landlords (about 4 percent) participating in the HUD Section 8 program had about $520 million of unpaid federal taxes. Most of these tax debts were unpaid individual income taxes.

In our audit, we found that grant recipients had a substantial amount of unpaid payroll taxes. Employers may be subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee's wages, the employer is deemed to have a fiduciary responsibility to hold these funds "in trust" for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded, the employer is liable for these amounts, as well as the employer's matching Federal Insurance Contribution Act contributions for Social Security and Medicare. Individuals employed by the employer (e.g., owners or officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Willful failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of up to 5 years, while the failure to properly segregate payroll tax funds can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer's failure to remit payroll taxes, since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the federal government's general fund. Thus, personal income taxes, corporate income taxes, and other government revenues are used to pay for these shortfalls to the Social Security and Medicare trust funds.

Although grant and direct assistance recipients had about $790 million in unpaid federal taxes as of September 30, 2006, this amount likely understates the full extent of unpaid taxes for these or other organizations and individuals. For example, except for our case study involving HUD's Section 8 tenant-based housing program, our estimate of grant and direct assistance recipients was limited to data from three of the largest government payment systems and thus did not include all federal grant and direct assistance disbursements. Further, our analysis of the three payment systems did not include recipients who received their payments through state and local government entities. Based on our analysis of data from the FAADS, we estimated that payments paid by the federal government to final recipients account for only about 20 percent of the total grant and direct assistance funds awarded by the federal government. The remaining 80 percent of grant and direct assistance funds are provided to state and local governments, which, in turn, disburse them to the ultimate recipient. Further, to avoid overestimating the amount owed, we limited our scope to tax debts that were affirmed by either the taxable entity or a tax court for tax periods prior to 2006. We did not include the most current tax year because recently assessed tax debts that appear as unpaid taxes may involve matters that are routinely resolved between the taxpayer and IRS, with the taxes paid, abated, or both within a short period.
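The trust fund mechanics described above determine both how much an employer owes and how much of that debt can be assessed personally against its officers through a TFRP. A minimal sketch of the arithmetic for a single payroll, using a hypothetical wage and withholding amount and the combined 7.65 percent FICA rate (Social Security plus Medicare) in effect during the period we reviewed:

    # One month's payroll for an employer that withholds but never remits.
    # Wages and the income tax withholding are hypothetical figures.
    FICA_RATE = 0.0765            # Social Security (6.2%) plus Medicare (1.45%)

    wages = 4000.00
    income_tax_withheld = 600.00            # hypothetical withholding
    employee_fica = FICA_RATE * wages       # withheld from wages, held "in trust"
    employer_fica = FICA_RATE * wages       # employer's matching contribution

    trust_fund_portion = income_tax_withheld + employee_fica   # TFRP basis
    total_unremitted = trust_fund_portion + employer_fica

    print(f"Held in trust (potential TFRP basis): ${trust_fund_portion:,.2f}")
    print(f"Total unremitted for the month:       ${total_unremitted:,.2f}")

Each month of nonremittance adds another increment, and penalties and interest accrue on top, which is how the organizations described below accumulated payroll tax debts in the hundreds of thousands or millions of dollars.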
To exclude these routinely resolved debts, we focused on unpaid taxes for tax periods prior to calendar year 2006 and eliminated tax debts of $100 or less. The IRS tax database reflects only the amount of unpaid taxes either reported by the individual or organization on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by organizations and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. Further, our analysis did not attempt to account for organizations or individuals that purposely underreported income and were not specifically identified by IRS as owing the additional taxes. According to IRS, underreporting of income accounted for more than 80 percent of the estimated $345 billion annual gross tax gap. Consequently, the full extent of unpaid taxes for grant and direct assistance participants is not known.

We investigated 20 cases of grant and direct assistance recipients, selected on the basis of large tax debts and numerous delinquent tax periods, for examples of abusive and criminal activity related to the federal tax system; in all 20 cases, we found evidence indicating such activity. Of these 20 cases, 14 were not-for-profit organizations that received grant payments and also had unpaid payroll taxes, some dating as far back as the 1990s. For example, one educational institution failed to submit employee payroll withholding taxes several times within a 3-year period and accumulated an unpaid tax liability of almost $4 million. For the cases of payroll tax delinquencies we investigated, officials responsible for these organizations failed to fulfill their role as "trustees" of employees' payroll tax withholdings and forward this money to IRS as required by federal tax laws. Instead, these officials diverted the withholdings to fund the organizations' operations or for personal benefits, such as their own salaries or extravagant vacations. The other 6 cases involved individuals who owed individual income taxes and who also received government subsidy payments through HUD's Section 8 low-income housing program. In one of these cases, the landlord attempted to evict renters who had been instructed by IRS to pay their rent directly to IRS instead of to the landlord. In all 20 cases, we saw significant evidence of IRS collection activity, but in only a couple of cases did we see action related to investigating these entities and individuals for criminal violations of federal tax laws. Table 1 highlights 10 cases of grant recipients with unpaid taxes that we investigated. The other 10 cases are summarized in appendix II. The following provide illustrative detailed information on several of these cases:

Case 4: This landlord participating in the Section 8 tenant-based housing program was involved in a fraudulent real estate transaction in addition to owing over $3 million in delinquent income taxes. The landlord was indicted on mortgage fraud and racketeering charges for attempting to sell real estate at an inflated price using false appraisals. Previously, the landlord's family member was convicted of conspiracy to commit mail fraud, wire fraud, and money laundering in a scheme to sell fraudulent vacation club memberships. In addition, the landlord was charged with fraudulent conveyance of over $3 million worth of property to defraud creditors.
Case 7: This grant recipient organization, which provides medical care to low-income families, has experienced financial problems throughout most of the 2000s and has several hundred thousand dollars in delinquent payroll taxes. During these years, while failing to properly fund its employee pension plan, the grant recipient paid hundreds of thousands of dollars in consulting fees to a former employee. The grant recipient could not document to other auditors over a million dollars in expenses relating to grants and has not made required contributions totaling tens of thousands of dollars to its pension plan. In addition, IRS assessed a TFRP against key grant recipient officials.

Case 9: This not-for-profit grant recipient has been in operation since the 1980s to provide social services to disadvantaged individuals and families and has another closely related not-for-profit entity that also received federal grants. The recipient has over $1 million in unpaid payroll taxes dating back to the early 2000s, and IRS has placed several tax liens against the grantee's property. The grant recipient has had recurring financial problems, and an independent audit report raised concerns about the entity's ability to continue operating. The recipient has also been cited for commingling funds among related grant recipients and for not having a functioning Board of Directors as represented to the granting agency. The recipient has been recommended for debarment.

Case 10: This not-for-profit grant recipient stopped making payroll tax deposits for several years beginning in the mid-2000s, accumulating unpaid payroll taxes totaling several hundred thousand dollars. IRS filed liens against grantee assets and assessed the exempt organization payroll tax violation penalties and interest totaling tens of thousands of dollars. A key grant recipient officer had a prior conviction for tax evasion and was again investigated for improperly using grant funds to purchase expensive clothing, a luxury vehicle, and lavish vacations and to pay taxes assessed from the prior tax evasion conviction. A key grantee officer also had numerous individual income tax delinquencies.

Neither federal law nor current governmentwide policies for administering federal grants or direct assistance prohibit applicants with unpaid federal taxes from receiving grants and direct assistance from the federal government. Even if such requirements did exist, absent consent from the taxpayer, federal law generally prohibits IRS from disclosing taxpayer data; consequently, federal agencies have no access to tax data directly from IRS. Moreover, the federal agencies we reviewed do not prevent organizations and individuals with unpaid federal taxes from receiving grants or direct assistance under the specific programs they administer.

With regard to administering federal grants, federal law and current governmentwide policies, as reflected in OMB Circulars, do not prohibit individuals and organizations with unpaid taxes from receiving grants. The OMB Circulars provide only general guidance with regard to considering existing federal debt in awarding grants. Specifically, the Circulars state that if an applicant has a history of financial instability or other special conditions, the federal agency may impose additional award requirements to protect the government's interests.
However, the Circulars do not specifically require federal agencies to take into account an applicant's delinquent federal debt, including federal tax debt, when assessing applications. While the Circulars require grant applicants to self-certify in the standard government application (SF 424) whether they are currently delinquent on any federal debt, including federal taxes, they contain no provision instructing agencies to verify such certifications or describing how such verification should be done. Nor does OMB require agencies to assess applicants' tax debt on a sampling or other risk-based basis. Although current governmentwide policies do not require it, some federal agencies, such as HHS and the Department of Education, have policies against awarding grants to applicants that owe federal debts. These policies state that a grant may not be awarded until the debt is satisfied or arrangements are made with the agency to which the debt is owed. However, awarding agencies rely extensively on applicants' self-certifications that they are not delinquent on any federal debt, including tax debt. Certain agencies, such as HHS, stated that they check credit reports to see if the grant applicant has any outstanding tax liens prior to awarding the grant. While it is difficult to validate the agencies' assertions, none of these agencies could provide us examples where grant officials denied a grant based on self-disclosed tax delinquencies or required applicants to make repayment arrangements with the agency to which the debt was owed. Even if requirements to verify applicants' disclosures did exist, federal law poses a significant challenge to federal granting agencies in determining the accuracy of representations made by organizations applying for grants. Specifically, the law does not permit IRS to disclose taxpayer information, including tax debts, to federal agency officials unless the taxpayer consents. Thus, unless an applicant consents to IRS providing taxpayer information to federal agencies, tax debt information generally can be discovered only from public records when IRS files a tax lien against the property of a tax debtor. Further, representatives of one federal agency that attempted to develop an approved consent form discovered that IRS may not accept certain signed consent forms, because requiring an applicant to sign a consent form as a precondition to the agency's acceptance of the application may be considered a form of duress and thus raise a disclosure issue. Moreover, while information on filed tax liens is generally publicly available, IRS does not file tax liens against all tax debtors, nor does it maintain a central repository of tax liens to which grant-awarding agencies have access. Further, available information on tax liens may not be current or accurate, because other studies have shown that IRS has not always released tax liens from property when the tax debt has been satisfied. Of the 20 organizations and individuals that we selected for additional investigation of abuse and criminal activity, 14 were grant recipients that were required to submit an application in order to be awarded a grant. In our review of the grant applications for these 14 cases, we found that 11 applicants certified in their applications that they were not currently delinquent on any federal debt, even though IRS had current tax assessments on file for these entities at the time the applications were filed.
As a result, these 11 applicants appear to have violated the False Statements Act because they did not declare their existing tax debt in their applications even though they were required to do so. Direct assistance programs are generally not subject to the same governmentwide guidance as grants. Instead, the cognizant federal agencies implement the necessary regulations for administering each program. With regard to our case study of HUD's Section 8 tenant-based program, HUD regulations do not require local housing authorities to identify whether landlords who participate in HUD's housing assistance program and receive housing subsidies have outstanding federal tax delinquencies, nor do the regulations prohibit payments if such delinquencies are identified. HUD regulations do permit local housing authorities to deny program participation if a landlord has not paid state or local real estate taxes, fines, or assessments. HUD regulations, however, do not require local housing authorities to bar a landlord from participating in the HUD program if the landlord owes delinquent federal debts, including federal taxes. Because about 80 percent of all federal grants and direct assistance are administered and disbursed through state and local governments, the extent to which all final recipients of these federal payments owe taxes is not known. However, our limited audit demonstrated that tens of thousands of grant and direct assistance recipients have taken advantage of the opportunity to avoid paying $790 million in federal taxes. At the same time that they failed to pay their federal taxes, these individuals and organizations benefited by receiving billions of dollars in federal grants or direct assistance. With regard to grants, allowing individuals and organizations to receive federal grants while not paying their federal taxes is unfair to the vast majority of grant applicants that pay their fair share of taxes. This practice creates a disincentive for individuals and organizations to pay their fair share of taxes and could lead to further erosion in compliance with the nation's tax system. We recommend that the Director, Office of Management and Budget, assess the need to issue guidance requiring federal agencies that award certain grants and other direct assistance, where appropriate in relation to the potential adverse effect on potential applicants, to take the following two actions: Take steps that would help determine whether applicants have unpaid federal tax debt, including obtaining applicant consent for IRS to disclose tax debt status; this could be achieved through sampling or other risk-based assessments. Consider the results of those inquiries in award determinations. We also recommend that the Acting Commissioner of Internal Revenue evaluate the 20 referred cases detailed in this report for appropriate additional collection action or criminal investigation, as warranted. We received written comments from IRS and oral comments from OMB on a draft of this report. Both IRS and OMB agreed with the draft report's recommendations. OMB also provided technical comments on the draft report, which we incorporated as appropriate. We have reprinted IRS's written comments in their entirety in appendix III. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date.
At that time, we will send copies of this report to the Acting Commissioner of Internal Revenue, the Director of the Office of Management and Budget, interested congressional committees, and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6722 or [email protected] if you have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. To address our first objective, describing the magnitude of tax debt owed, we obtained and analyzed federal payment databases from the Department of the Treasury's Automated Standard Application Payment System (ASAP), the Department of Education's Grant Administration and Payment System (GAPS), and the Department of Health and Human Services' (HHS) Payment Management System (PMS) for fiscal years 2005 and 2006. These three agencies process grant and other assistance program payments on behalf of many federal agencies and, in fiscal years 2005 and 2006, processed over $460 billion in grant and direct assistance payments, excluding Medicare and Medicaid. Our analysis of these data, however, was limited because approximately 80 percent of federal grant and direct assistance payments are paid to state and local governments, which then disburse funds to final recipients. Because identifying information on the final recipients of payments provided at the state and local level is not available at the federal level, our analysis was limited to those payments provided directly by the federal government to final recipients. We estimated these direct payments to final recipients represented about 20 percent of total federal grant and direct assistance payments. Because identifying information for final recipients of payments provided at the state and local levels was not available at the federal level and not practical for us to obtain, we selected one major federal program that disburses funds at the local level for a case study analysis. For this case study, we selected the Department of Housing and Urban Development's (HUD) Section 8 tenant-based low-income housing program, a program classified as a direct assistance program, which provides rental assistance to low-income families by making supplemental rental payments directly to landlords participating in the program. We obtained and analyzed an extract from HUD's Public Housing Information Center (PIC) database, which HUD represented to be the most complete data source available on low-income assisted households in HUD's public housing or voucher programs, that contained identifying information on landlords who participated in HUD's program during fiscal years 2005 and 2006. To identify recipients with tax debt, we obtained IRS's unpaid assessments file as of September 30, 2006, and electronically matched it with the various grant and direct assistance recipients identified in the above databases using the taxpayer identification number. To avoid overstating the amount of unpaid taxes owed by grant recipients and to capture only significant tax debt, we excluded tax debts meeting specific criteria: tax debts IRS classified as compliance assessments or memo accounts for financial reporting, tax debts from calendar year 2006 tax periods, and grant recipients with total unpaid taxes of $100 or less.
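In essence, this methodology is a join on taxpayer identification number (TIN) followed by three exclusion filters. The following is a minimal sketch of that logic in Python; the field names (tin, assessment_type, tax_period_year, unpaid_balance) are hypothetical stand-ins, not the actual IRS or payment-system schemas.

```python
# Minimal sketch of the TIN match and tax-debt exclusion logic described
# above. Field names are hypothetical; the real schemas are not public.

EXCLUDED_TYPES = {"compliance_assessment", "memo_account"}

def significant_tax_debts(unpaid_assessments, payment_recipients):
    """Match IRS unpaid assessments to payment recipients by TIN,
    applying the report's three exclusions."""
    recipient_tins = {r["tin"] for r in payment_recipients}
    totals = {}
    for debt in unpaid_assessments:
        if debt["tin"] not in recipient_tins:
            continue  # not a grant or direct assistance recipient
        if debt["assessment_type"] in EXCLUDED_TYPES:
            continue  # possibly disputed, duplicative, or invalid
        if debt["tax_period_year"] >= 2006:
            continue  # recent periods are often routinely resolved
        totals[debt["tin"]] = totals.get(debt["tin"], 0.0) + debt["unpaid_balance"]
    # Drop recipients whose total unpaid taxes are $100 or less.
    return {tin: total for tin, total in totals.items() if total > 100.0}

recipients = [{"tin": "11-111"}, {"tin": "22-222"}]
debts = [{"tin": "11-111", "assessment_type": "agreed",
          "tax_period_year": 2004, "unpaid_balance": 5_000.0}]
print(significant_tax_debts(debts, recipients))  # {'11-111': 5000.0}
```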
These criteria were used to exclude tax debts that might be under dispute or that are generally duplicative or invalid, as well as tax debts incurred after the dates the entities received grant payments. Compliance assessments and memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by a court, or because these taxes could be invalid or duplicative of other taxes already reported. We excluded tax debts from calendar year 2006 tax periods to eliminate tax debt involving matters that are routinely resolved between taxpayers and IRS, with the taxes paid or abated within a short period. We also excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of taxes owed by grant recipients. To provide examples of abusive or potentially criminal activity, we performed additional investigative work on the 20 selected cases, including searching criminal, financial, and public records for assets and income sources (e.g., personal bank data, entities established to hide assets, etc.) that the recipient individuals or entities, including key officials, own or receive. To determine the actions federal agencies take to prevent individuals and organizations with significant unpaid federal taxes from either being approved for or receiving grant or direct assistance payments, we reviewed governmentwide and agency-specific policies and procedures for awarding grants and benefits from the respective programs. We also interviewed officials from the Office of Management and Budget and the Departments of Agriculture, Education, Homeland Security, HHS, and HUD on whether tax debts are considered in their decisions to approve award applications. In addition, to test whether grant applicants properly disclosed their current tax delinquencies when submitting applications, we requested and reviewed the grant applications for the 14 cases that applied for grants. We conducted our audit work from January 2007 through August 2007 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency. For IRS unpaid assessments data, we relied on the work we performed during our annual audits of IRS's financial statements. While our financial statement audits have identified some data reliability problems associated with the coding of some of the fields in IRS's tax records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address our report's objectives. Our financial audit procedures, including the reconciliation of the value of unpaid taxes recorded in IRS's master file to IRS's general ledger, identified no material differences. For HUD's PIC data, we reviewed how HUD uses the information for reporting on program performance under the Government Performance and Results Act. We held discussions with HUD's database administrators on input controls used to maintain data integrity for the data elements we used for our analysis and with HUD programmers to discuss the programming code HUD used to extract the data we used. We also performed electronic testing of the data elements we used and performed limited verification tests. Based on our discussions with agency officials, our review of agency documents, and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes. Table 1 provides data on 10 detailed case studies. Table 2 below provides details on the remaining 10 organizations and individuals we selected as case studies.
As with the 10 cases discussed in the body of this report, we found evidence of abusive and potentially criminal activity related to the federal tax system during our audit and investigations of these 10 case studies. In addition to the individual named above, Erika Axelson, James Berry, Ray Bush, Bill Cordrey, Kenneth Hill, Aaron Holling, Wil Holloway, Mitchell Karpman, John Kelly, Rick Kusman, Tram Le, John Ledford, Barbara Lewis, Andrew McIntosh, Eduvina Rodriguez, John Ryan, Steve Sebastian, Robert Sharpe, Barry Shillito, Pat Tobo, and Matthew Valenta made key contributions to this report.

Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-1090T. Washington, D.C.: July 24, 2007.
Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-563. Washington, D.C.: June 29, 2007.
Tax Compliance: Thousands of Federal Contractors Abuse the Federal Tax System. GAO-07-742T. Washington, D.C.: April 19, 2007.
Medicare: Thousands of Medicare Part B Providers Abuse the Federal Tax System. GAO-07-587T. Washington, D.C.: March 20, 2007.
Internal Revenue Service: Procedural Changes Could Enhance Tax Collections. GAO-07-26. Washington, D.C.: November 15, 2006.
Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-887. Washington, D.C.: July 28, 2006.
Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-755T. Washington, D.C.: May 25, 2006.
Financial Management: Thousands of GSA Contractors Abuse the Federal Tax System. GAO-06-492T. Washington, D.C.: March 14, 2006.
Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-683T. Washington, D.C.: June 16, 2005.
Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-637. Washington, D.C.: June 16, 2005.
Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-414T. Washington, D.C.: February 12, 2004.
Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-95. Washington, D.C.: February 12, 2004.
Since February 2004, GAO has reported on weaknesses in federal programs and controls that allowed thousands of federal contractors, tax-exempt entities, and Medicare providers to receive government money while owing taxes. GAO was asked to determine whether these problems exist for entities that receive federal grants or direct assistance and to (1) describe the magnitude of taxes owed, (2) provide examples of grant recipients involved in abusive and potentially criminal activity, and (3) assess efforts to prevent delinquent taxpayers from participating in such programs. To perform this work, GAO analyzed data from the Internal Revenue Service (IRS); three of the largest grant and direct assistance payment systems, representing over $460 billion in payments in fiscal years 2005 and 2006; and the Department of Housing and Urban Development's (HUD) Section 8 tenant-based housing program. GAO investigated 20 cases to provide examples of grant recipients involved in abusive activity. While most recipients of payments from federal grant and direct assistance programs pay their federal taxes, tens of thousands of recipients collectively owed $790 million in federal taxes as of September 30, 2006. This included over 2,000 individuals and organizations that received $124 billion of payments directly from the federal government and owed more than $270 million in unpaid taxes (almost 6 percent of such recipients), and about 37,000 landlords participating in HUD's Section 8 tenant-based housing program who owed an estimated $520 million in unpaid taxes (almost 4 percent of such landlords). The $790 million estimate is likely substantially understated because GAO's analysis excluded the 80 percent of federal grants that are given directly to state and local governments, which, in turn, disburse the grants to the ultimate recipients. GAO selected 20 grant and direct assistance recipients with high tax debt for a more in-depth investigation of the extent and nature of abuse and criminal activity. For all 20 cases, GAO found abusive and potentially criminal activity related to the federal tax system, including failure to remit individual income taxes and/or payroll taxes to IRS. Rather than fulfill their role as "trustees" of payroll tax money and forward it to IRS, these grant recipients diverted the money for other purposes. Willful failure to remit payroll taxes is a felony under U.S. law. Individuals associated with some of these recipients diverted the payroll tax money for their own benefit or to help fund their businesses. GAO referred these 20 cases to IRS for additional collection and investigation action, as appropriate. Federal law and current governmentwide policies do not prohibit individuals and organizations with unpaid taxes from receiving grants or direct assistance. Several federal agencies have established policies against awarding grants to tax-delinquent applicants; however, federal agencies do not verify applicants' certifications that they do not owe taxes. Further, federal law generally prohibits the disclosure of taxpayer data to federal agencies. Eleven grant recipients that GAO investigated appear to have made false statements by not disclosing their tax debt as required. Further, agencies are not required to inquire about recipients' tax debt status prior to providing direct assistance payments.
Energy commodities are bought and sold on both the physical and financial markets. The physical market includes the spot market, where products such as crude oil or gasoline are bought and sold for immediate or near-term delivery by producers, wholesalers, and retailers. Spot transactions take place between commercial participants for a particular energy product for immediate delivery at a specific location. For example, the U.S. spot market for West Texas Intermediate crude oil is the pipeline hub near Cushing, Oklahoma, while a major spot market for natural gas operates at the Henry Hub near Erath, Louisiana. The prices set in these spot markets provide a reference point that buyers and sellers use to set the price for other types of the commodity traded at other locations. In addition to the spot markets, derivatives based on energy commodities are traded in financial markets. The value of a derivative contract depends on the performance of the underlying asset, for example, crude oil or natural gas. Derivatives include futures, options, and swaps. Energy futures are standardized exchange-traded contracts for future delivery of a specific crude oil, heating oil, natural gas, or gasoline product at a particular spot market location; an exchange designated by CFTC as a contract market standardizes the contracts. The owner of an energy futures contract is obligated to buy or sell the commodity at a specified price and future date. However, the contractual obligation may be removed at any time before the contract expiration date if the owner sells or purchases other contracts with terms that offset the original contract. In practice, most futures contracts on NYMEX are liquidated via offset, so that physical delivery of the underlying commodity is relatively rare. Market participants use futures markets to offset the risk caused by changes in prices, to discover commodity prices, and to speculate on price changes. Some buyers and sellers of energy commodities in the physical markets trade in futures contracts to offset or "hedge" the risks they face from price changes in the physical market. Exempt commercial markets and OTC derivatives are also used to hedge this risk. The ability to reduce price risk is an important concern for buyers and sellers of energy commodities, because wide fluctuations in cash market prices introduce uncertainty for producers, distributors, and consumers of commodities and make investment planning, budgeting, and forecasting more difficult. To manage price risk, market participants may shift it to others more willing to assume the risk or to those having different risk situations. For example, if a petroleum refiner wants to lower its risk of losing money because of price volatility, it could lock in a price by selling futures contracts to deliver gasoline in 6 months at a guaranteed price (the sketch following this section works through the arithmetic). Without futures contracts to manage risk, producers, refiners, and others would likely face greater uncertainty. By establishing prices for future delivery, the futures market also helps buyers and sellers determine or "discover" the price of commodities in the physical markets, thus linking the two markets together. Markets are best able to perform price discovery when (1) participants have current information about the fundamental market forces of supply and demand, (2) large numbers of participants are active in the market, and (3) the market is transparent.
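Returning to the refiner example above: a short futures hedge locks in revenue because gains or losses on the futures position offset opposite moves in the physical sale. A minimal sketch in Python, using purely illustrative prices and the standard 42,000-gallon NYMEX gasoline contract size:

```python
def short_hedge_revenue(futures_price, spot_at_expiry, gallons):
    """Total revenue under a short futures hedge: the physical sale at the
    spot price plus the futures gain or loss (sold at futures_price, bought
    back at spot) nets to the locked-in futures price."""
    physical_sale = spot_at_expiry * gallons
    futures_pnl = (futures_price - spot_at_expiry) * gallons
    return physical_sale + futures_pnl  # always equals futures_price * gallons

# Whether spot ends low, flat, or high, effective revenue is the same.
for spot in (1.50, 2.00, 2.50):  # hypothetical $/gallon outcomes
    print(spot, short_hedge_revenue(2.00, spot, 42_000))
```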
Market participants monitor and analyze a myriad of information on the factors that currently affect, and that they expect to affect, the supply of and demand for energy commodities. With that information, participants buy or sell an energy commodity contract at the price they believe the commodity will sell for on the delivery date. The futures market, in effect, distills the diverse views of market participants into a single price. In turn, buyers and sellers of physical commodities may consider those predictions about future prices, among other factors, when setting prices on the spot and retail markets. Other participants, such as investment banks and hedge funds, which do not have a commercial interest in the underlying commodities, generally use the futures market for profit. These speculators provide liquidity to the market but also take on risks that other participants, such as hedgers, seek to avoid. In addition, arbitrageurs attempt to make a profit by simultaneously entering into several transactions in multiple markets in an effort to benefit from price discrepancies across these markets. The physical markets for energy commodities underwent change and turmoil from 2002 through 2006, which affected prices in the spot and futures markets. We reported that numerous changes in both the physical and futures markets may have affected energy prices; however, because these changes occurred simultaneously, identifying the specific effect of any one change on energy prices is difficult. Like many others, we found that a number of fundamental supply and demand conditions can affect prices. According to the Energy Information Administration (EIA), world oil demand has grown from a low of about 59 million barrels per day in 1983 to more than 85 million barrels per day in 2006 (fig. 1). While the United States accounts for about a quarter of this demand, rapid economic growth in Asia also has stimulated a strong demand for energy commodities. For example, EIA data show that during this time frame, China's average daily demand for crude oil increased almost fourfold. Growth in demand does not, by itself, lead to higher prices for crude oil or any other energy commodity; if the growth in demand were exceeded by a growth in supply, prices would fall, other things remaining constant. However, according to EIA, the growth in demand outpaced the growth in supply, even with spare production capacity included in supply. Spare production capacity is surplus oil that can be produced and brought to the market relatively quickly to rebalance the market if there is a supply disruption anywhere in the world oil market. As shown in figure 2, EIA estimates that global spare production capacity in 2006 was about 1.3 million barrels per day, compared with about 10 million barrels per day in the mid-1980s and about 5.6 million barrels per day as recently as 2002. Major weather and political events also can lead to supply disruptions and higher prices. In its analysis, EIA has cited the following examples: Hurricanes Katrina and Rita removed about 450,000 barrels per day from the world oil market from June 2005 to June 2006.
Instability in major oil-producing countries of the Organization of Petroleum Exporting Countries (OPEC), such as Iran, Iraq, and Nigeria, has lowered production in some cases and increased the risk of future production shortfalls in others. Oil production in Russia, a major driver of non-OPEC supply growth during the early 2000s, was adversely affected by a worsened investment climate as the government raised export and extraction taxes. The supply of crude oil affects the supply of gasoline and heating oil, and just as production capacity affects the supply of crude oil, refining capacity affects the supply of those products distilled from crude oil. As we have reported, refining capacity in the United States has not expanded at the same pace as the demand for gasoline. Inventory, another factor affecting supplies and therefore prices, is particularly crucial to the supply and demand balance because it can provide a cushion against price spikes if, for example, production is temporarily disrupted by a refinery outage or other event. Trends toward lower levels of inventory may reduce the costs of producing gasoline, but such trends also may cause prices to be more volatile. That is, when a supply disruption occurs or demand increases, there are fewer stocks of readily available gasoline to draw on, putting upward pressure on prices. Another consideration is that the value of the U.S. dollar on open currency markets could affect crude oil prices. For example, because crude oil is typically priced in U.S. dollars, the payments that oil-producing countries receive for their oil also are denominated in U.S. dollars. As a result, a weak U.S. dollar decreases the purchasing power of the revenue from oil sold at a given price, and oil-producing countries may wish to increase prices for their crude oil to maintain that purchasing power, to the extent they can. As you can see, conditions in the physical markets have undergone changes that can help explain at least some of the increases in both physical and derivatives commodity prices. As we have previously reported, futures prices typically reflect the effects of world events on the price of the underlying commodity such as crude oil. For example, political instability and terrorist acts in countries that supply oil create uncertainties about future supplies, which are reflected in futures prices. Conversely, news about a new oil discovery that would increase world oil supply could result in lower futures prices. In other words, changes in the physical markets influence futures prices. At the same time that physical markets were undergoing changes, we found that financial markets also were amid change and evolution. For example, the annual historical volatilities between 2000 and 2006, measured using the relative change in daily prices of energy futures, generally were above or near their long-term averages, although crude oil and heating oil volatility declined below the average and gasoline volatility declined slightly at the end of that period. We also found that the annual volatility of natural gas fluctuated more widely than that of the other three commodities and increased in 2006 even though prices largely declined from the levels reached in 2005. Although higher volatility is often equated with higher prices, this pattern illustrates that an increase in volatility does not necessarily mean that price levels will increase.
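The volatility statistic referred to above, annualized historical volatility computed from relative changes in daily prices, can be sketched as follows. This is a generic implementation of that standard measure, not GAO's exact calculation, and the prices in the example are made up:

```python
import math

def annualized_volatility(prices, trading_days=252):
    """Annualized historical volatility: the sample standard deviation of
    daily log returns, scaled by the square root of trading days per year."""
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(variance) * math.sqrt(trading_days)

# Example with made-up daily settlement prices ($/barrel):
print(annualized_volatility([60.0, 61.2, 59.8, 60.5, 62.1, 61.4]))
```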
That is, price volatility measures the variability of prices rather than the direction of price changes. Elsewhere in the futures market, we found an increase in the number of noncommercial traders, such as managed money traders. Attracted in part by the trends in prices and volatility, a growing number of traders sought opportunities to hedge against those changes or profit from them. Using CFTC's large trader data, we found that from July 2003 to December 2006, crude oil futures and options contracts experienced the most dramatic increase, with the average number of noncommercial traders more than doubling from about 125 to about 286. As shown in figure 3, while the growth was less dramatic for the other commodities, the average number of noncommercial traders also showed an upward trend for unleaded gasoline, heating oil, and natural gas. Not surprisingly, our work also revealed that as the number of traders increased, so did the trading volume on NYMEX for all energy futures contracts, particularly crude oil and natural gas. Average daily contract volume for crude oil increased by 90 percent from 2001 through 2006, and natural gas volume increased by just over 90 percent. Unleaded gasoline and heating oil experienced less dramatic growth in their trading volumes over this period. While much harder to quantify, another notable trend was the significant increase in the amount of energy derivatives traded outside exchanges. Trading in these markets is much less transparent, and comprehensive data are not available because these energy markets are not regulated. However, using Bank for International Settlements data as a rough proxy for trends in the trading volume of OTC energy derivatives, the face value or notional amounts outstanding of OTC commodity derivatives, excluding precious metals such as gold, grew by more than 850 percent from December 2001 to December 2005, to over $3.2 trillion. Further, while some market observers believed that managed money traders were exerting upward pressure on prices by predominantly buying futures contracts, CFTC data we analyzed revealed that from the middle of 2003 through the end of 2006, the trading activity of managed money participants became increasingly balanced between buying (those expecting prices to rise) and selling (those expecting prices to fall). Using CFTC large trader reporting data, we found that in July 2003, managed money traders' ratio of buying (long) to selling (short) open interest positions was 2.5:1, indicating that, on the whole, this category of participants was 2.5 times as likely to expect prices to rise as to fall, and prices did in fact rise over that period. However, as figure 4 illustrates, by 2006 this ratio had fallen to 1.2:1, suggesting that managed money traders as a whole were more evenly divided in their expectations about future prices. As you can see, managed money trading in unleaded gasoline, heating oil, and natural gas showed similar trends. Overall, we found that views were mixed about whether these trends put any upward pressure on prices. Some market participants and observers have concluded that large purchases of oil futures contracts by speculators could have created an additional demand for oil that could lead to higher prices. Conversely, some federal agencies and other market observers took the position that speculative trading activity did not have a significant impact on prices.
For example, an April 2005 CFTC study of the markets concluded that increased trading by speculative traders, including hedge funds, did not lead to higher energy prices or volatility. This study also argued that hedge funds provided increased liquidity to the market and dampened volatility. Still others told us that while speculative trading in the futures market could contribute to short-term price movements in the physical markets, they did not believe it was possible to sustain a speculative "bubble" over time, because the two markets were linked and both responded to information about changes in supply and demand caused by such factors as the weather or geopolitical events. In the view of these observers and market participants, speculation could not lead to artificially high or low prices over a long period. Under the CEA, CFTC's authority for protecting market users from fraudulent, manipulative, and abusive practices in energy derivatives trading is primarily focused on the operations of traditional futures exchanges, such as NYMEX, where energy futures are traded. Off-exchange markets, which are available only to eligible traders of certain commodities under specified conditions, are not regulated, although CFTC may enforce the antimanipulation and antifraud provisions of the CEA with respect to trading in those markets. The growth in off-exchange trading has raised questions about the sufficiency of CFTC's limited authority over these markets. These changes and innovations also have brought into question the methods CFTC uses to categorize participants in the data it publishes about futures trading and whether information about participants' activities in off-exchange markets would be useful to the public. CFTC is taking steps to better understand these issues. Most importantly, it is currently examining the relationship between trading in the regulated and exempt energy markets and the role this trading plays in the price discovery process. It is also examining the sufficiency of the scope of its authority over these markets, an issue that will warrant further examination as part of the CFTC reauthorization process. To help provide transparency in the markets, CFTC provides the public information on open interest in exchange-traded futures and options by commercial and noncommercial traders for various commodities in its weekly Commitments of Traders (COT) reports. As we reported, CFTC observed that the exchange-traded derivatives markets, as well as trading patterns and practices, have evolved. In 2006, CFTC initiated a comprehensive review of the COT reporting program out of concern that the reports in their present form might not accurately reflect the commercial or noncommercial nature of positions held by nontraditional hedgers, such as swaps dealers. A disconnect between the classifications and evolving trading activity could distort the accuracy and relevance of reported information, thereby limiting its usefulness to users and the public. In December 2006, CFTC announced a 2-year pilot program for publishing a supplemental COT report that includes positions of commodity index traders in a separate category. However, the pilot does not include any energy commodities. Although commodity index traders are active in energy markets, according to CFTC officials, currently available data would not permit an accurate breakout of index trading in these markets.
For example, some traders, such as commodity index pools, use the futures markets to hedge commodity index positions they hold in the OTC market. However, these traders also may have positions in the physical markets, which means the reports that CFTC receives on market activities, which do not include such off-exchange transactions, may not present an accurate picture of all positions in the marketplace for the commodity. In response to our recommendation to reexamine the COT classifications for energy markets, CFTC agreed to explore whether the classifications should be refined to improve their accuracy and relevance. Now let me address some of the larger policy issues associated with CFTC's oversight of these markets. As noted above, CFTC's authority under the CEA for protecting market users from fraudulent, manipulative, and abusive practices in energy derivatives trading is primarily focused on the operations of traditional futures exchanges, such as NYMEX. Currently, CFTC receives limited information on derivatives trading on exempt commercial markets, for example, records of allegations or complaints of suspected fraud or manipulation, and price, quantity, and other data on contracts that average five or more trades a day. The agency also may receive limited information, such as trading records, from OTC participants to help CFTC enforce the CEA's antifraud and antimanipulation provisions. The scope of CFTC's oversight authority has raised concerns among some members of Congress and others that activities on these markets are largely unregulated and that additional CFTC oversight is needed. While some observers have called for more oversight of OTC derivatives, most notably for CFTC to be given greater oversight authority over this market, others oppose any such action. Supporters of more CFTC oversight authority believe that regulation of OTC derivatives markets is necessary to protect the regulated markets and consumers from potential abuse and possible manipulation. One of their concerns is that, because complete information on the size of this market and the terms of the contracts is lacking, CFTC cannot be assured that trading on the OTC market is not adversely affecting the regulated markets and, ultimately, consumers. However, others, including the President's Working Group, have concluded that OTC derivatives generally are not subject to manipulation because contracts are settled in cash on the basis of a rate or price determined in a separate, highly liquid market that does not serve a significant price discovery function. The Working Group also noted that if electronic markets were to develop and serve a price discovery function, then consideration should be given to enacting a limited regulatory regime aimed at enhancing market transparency and efficiency through CFTC, as the regulator of exchange-traded derivatives. However, the lack of reported data about this market makes addressing concerns about its function and effect on regulated markets and entities challenging. In a June 2007 Federal Register release clarifying its large trader reporting authority, CFTC noted that having data about the off-exchange positions of traders with large positions on regulated futures exchanges could enhance the commission's ability to deter and prevent price manipulation or other disruptions to the integrity of the regulated futures markets.
According to CFTC officials, the commission has proposed amendments to clarify its authority under the CEA to collect information and bring fraud actions in principal-to-principal transactions in these markets, enhancing CFTC's ability to enforce the antifraud provisions of the CEA. Also, in September 2007, CFTC conducted a hearing to begin examining trading on regulated exchanges and exempt commercial markets more closely. The hearing focused on a number of issues, including the current tiered regulatory approach established by the Commodity Futures Modernization Act, which amended the CEA, and whether this model is beneficial; the similarities and differences between exempt commercial markets and regulated exchanges, and the associated regulatory risks of each market; and the types of regulatory or legislative changes that might be appropriate to address any identified risks. Questions remain about the similarity of products traded on the two types of markets, whether exempt markets play a role in the price discovery process, and whether existing reporting requirements are sufficient. We recommend that Congress take up these questions during the CFTC reauthorization process and consider their implications for the current regulatory structure in light of the changes that have occurred in this market. CFTC provides oversight of commodity futures markets by analyzing large trader reporting data, conducting routine surveillance, and investigating and taking enforcement actions against market participants and others. The commission uses information gathered from surveillance activities to identify unusual trading activity and possible market abuse. In particular, CFTC's large trader reporting system (LTRS) provides essential information on the majority of all trading activity on futures exchanges. CFTC staff said they routinely investigate traders with large open positions but do not routinely maintain information about such inquiries, making it difficult to determine the usefulness and extent of these activities. According to recent data provided by CFTC, about 10 percent of its enforcement actions involved energy-related commodities. However, as with other programs operating in regulatory environments where performance is not easily measurable, evaluating the effectiveness of CFTC's enforcement activities is challenging because the agency lacks effective outcome-based performance measures. CFTC conducts regular market surveillance and oversight of energy trading on NYMEX and other futures exchanges, focusing on detecting and preventing disruptive practices before they occur and keeping the CFTC commissioners informed of possible manipulation or abuse. According to CFTC staff, when a potential market problem has been identified, surveillance staff generally contact the exchange or traders for more information. To confirm positions and determine intent, staff may question exchange employees, brokers, or traders. According to the staff, CFTC's Division of Market Oversight may issue a warning letter or make a referral to the Division of Enforcement to conduct a nonpublic investigation into the trading activity. Markets where surveillance problems have not been resolved may be included in reports presented to the commission at weekly surveillance meetings. According to CFTC staff, they routinely make inquiries about traders with large open positions approaching expiration, but formal records of their findings are kept only in cases with evidence of improper trading.
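The kind of routine screen described here, flagging large open positions as contracts approach expiration, can be pictured as a simple filter over large trader records. The sketch below is purely illustrative: the field names, threshold, and expiry window are hypothetical, and it does not describe CFTC's actual LTRS processing.

```python
# Purely illustrative screen over hypothetical large-trader records;
# CFTC's actual surveillance rules and thresholds are not public.
def flag_expiring_positions(positions, deliverable_supply,
                            share_threshold=0.25, expiry_window_days=5):
    """Flag traders whose open position is a large share of deliverable
    supply as the contract approaches expiration."""
    flagged = []
    for p in positions:
        share = p["open_position"] / deliverable_supply
        if p["days_to_expiry"] <= expiry_window_days and share >= share_threshold:
            flagged.append((p["trader_id"], share))
    return flagged

positions = [
    {"trader_id": "A", "open_position": 3_000, "days_to_expiry": 3},
    {"trader_id": "B", "open_position": 200, "days_to_expiry": 3},
]
print(flag_expiring_positions(positions, deliverable_supply=10_000))
# [('A', 0.3)] -- trader A would prompt a staff inquiry in this sketch
```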
If LTRS data revealed that a trader had a large open market position that could disrupt markets if it were not closed before expiration, CFTC staff would contact the trader to determine why the trader had the position and what plans the trader had to close the position before expiration or to ensure that the trader was able to take delivery. If the trader provided a reasonable explanation for the position and a reasonable delivery or liquidation strategy, staff said no further action would be required. CFTC staff said they would document such contacts on the basis of their importance in either informal notes, e-mails to supervisors, or informal memorandums. According to one CFTC official, no formal record would be made unless some signal indicated improper trading activity. However, without such data, CFTC's measures of the effectiveness of its actions to combat fraud and manipulation in the markets would not reflect all surveillance activity, and CFTC management might miss opportunities to identify trends in activities or markets and better target its limited resources. In response to our recommendation, CFTC agreed to improve its documentation of its surveillance activities. CFTC's Division of Enforcement is charged with enforcing the antimanipulation sections of the CEA. The enforcement actions CFTC has taken in its energy-related cases generally have involved false public reporting as a method of attempting to manipulate prices on both the NYMEX futures market and the off-exchange markets. CFTC officials said that from October 2000 to September 2005, the agency initiated 287 enforcement cases, more than 30 of which involved energy trading. In the past several months, CFTC has taken a series of actions involving energy commodities, including allegations of false reporting, attempted manipulation of NYMEX natural gas futures prices, and attempted manipulation of physical natural gas prices. Although CFTC has undertaken enforcement actions and levied fines, measuring the effectiveness of these activities is an ongoing challenge. For example, the Office of Management and Budget's most recent Program Assessment Rating Tool (PART) assessment of the CFTC enforcement program, conducted in 2004, identified a number of limitations of CFTC's performance measures. As is the case with most enforcement programs, identifying outcome-oriented performance measures can be particularly challenging. However, as we point out in the report, there are a number of other ways to evaluate program effectiveness, such as using expert panel reviews, customer service surveys, and process and outcome evaluations. We have found with other programs that the form of the evaluations reflects differences in program structure and anticipated outcomes and that the evaluations are designed around the programs and what they aim to achieve. Without using these or other methods to evaluate program effectiveness, CFTC is unable to demonstrate whether its enforcement program is meeting its overall objectives. CFTC has agreed that this is a matter that should be examined; it has included the development of measures to evaluate its effectiveness in its strategic plan and has requested funding to study the feasibility of developing more meaningful measures. In closing, I would like to reemphasize the difficulty of attributing increased energy prices to any one of the numerous changes in the physical or derivatives markets.
As I have mentioned, our research shows that the physical and derivatives markets have both undergone substantial change and evolution, and market participant and regulatory views were mixed about the extent to which these developments exerted upward pressure on prices. Because of the importance of understanding the potential effects of such developments in these markets, ongoing review and analysis are warranted. As the scope of CFTC's authority is debated, additional information is needed to understand what may need to be done to best protect investors from fraudulent, manipulative, and abusive practices. Such information includes: how different or similar the characteristics and uses of exchange and off-exchange products are, and whether these differences continue to justify different regulatory treatment; the extent to which trading in off-exchange financial derivatives affects price discovery, and the regulatory and policy implications; how large an effect nontraditional market participants, such as commodity index funds, are having in these markets; and whether the changes in the energy markets are unique, or such concerns are also worth reviewing for other commodity markets. By answering questions such as these, CFTC and the Congress will be better positioned to determine what changes, if any, may be needed to oversee these markets. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee might have. For further information about this testimony, please contact Orice M. Williams at (202) 512-8678 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions include Cody Goebel (Assistant Director), John Forrester, Barbara Roesmann, and Paul Thompson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Energy prices for crude oil, heating oil, unleaded gasoline, and natural gas have risen substantially since 2002, generating questions about the role derivatives markets have played and the scope of the Commodity Futures Trading Commission's (CFTC) authority. This testimony focuses on (1) trends and patterns in the futures and physical energy markets and their effects on energy prices, (2) the scope of CFTC's regulatory authority, and (3) the effectiveness of CFTC's monitoring and detection of abuses in energy markets. The testimony is based on the GAO report Commodity Futures Trading Commission: Trends in Energy Derivatives Markets Raise Questions about CFTC's Oversight (GAO-08-25, October 19, 2007). For this work, GAO analyzed futures and large trader data and interviewed market participants, experts, and officials at six federal agencies. Various trends in both the physical and futures markets have affected energy prices. Specifically, tight supply and rising demand in the physical markets contributed to higher prices, as global demand for oil rose rapidly while spare production capacity fell after 2002. Moreover, increased political instability in some of the major oil-producing countries threatened the supply of oil. During this period, increasing numbers of noncommercial participants, including hedge funds, became active in the futures markets, and the volume of energy futures contracts traded also increased. Simultaneously, the volume of energy derivatives traded outside of traditional futures exchanges increased significantly. Because these developments took place concurrently, the effect of any individual trend or factor on energy prices is unclear. Under the authority granted by the Commodity Exchange Act (CEA), CFTC focuses its oversight primarily on the operations of traditional futures exchanges, such as the New York Mercantile Exchange, Inc. (NYMEX), where energy futures are traded. Increasing amounts of energy derivatives trading also occur on markets that are largely exempt from CFTC oversight. For example, exempt commercial markets conduct trading on electronic facilities between large, sophisticated participants. In addition, considerable trading occurs in over-the-counter (OTC) markets, in which eligible parties enter into contracts directly, without using an exchange. While CFTC can act to enforce the CEA's antimanipulation and antifraud provisions for activities that occur in exempt commercial and OTC markets, some market observers question whether CFTC needs broader authority to more routinely oversee these markets. CFTC is currently examining the effects of trading in the regulated and exempt energy markets on price discovery and the scope of its authority over these markets, an issue that will warrant further examination as part of the CFTC reauthorization process. CFTC conducts daily surveillance of trading on NYMEX that is designed to detect and deter fraudulent or abusive trading practices involving energy futures contracts. To detect abusive practices, such as potential manipulation, CFTC uses various information sources and relies heavily on trading activity data for large market participants. Using this information, CFTC staff may pursue alleged abuse or manipulation. However, because the agency does not maintain complete records of all such allegations, determining the usefulness and extent of these activities is difficult.
In addition, CFTC's performance measures for its enforcement program do not fully reflect the program's goals and purposes; this could be addressed by developing additional outcome-based performance measures that more fully reflect progress in meeting the program's overall goals. Moreover, because of changes and innovations in the market, the reports that CFTC receives on market activities may no longer be accurate, as they use categories that do not adequately distinguish trading done for different reasons by various market participants.
CPSC was created in 1972 under the Consumer Product Safety Act to regulate consumer products that pose an unreasonable risk of injury; to assist consumers in using products safely; and to promote research and investigation into product-related deaths, injuries, and illnesses. The Consumer Product Safety Act consolidated existing federal safety regulatory activity related to consumer products within CPSC. As a result, in addition to general responsibilities for protecting consumers against product hazards, the duties and functions under the following four statutes were transferred to CPSC: the Flammable Fabrics Act, which, among other things, authorizes CPSC to prescribe flammability standards for clothing, upholstery, and other fabrics; the Federal Hazardous Substances Act, which establishes the framework for the regulation of substances that are toxic, corrosive, combustible, or otherwise hazardous; the Poison Prevention Packaging Act of 1970, which authorizes CPSC to prescribe special packaging requirements to protect children from injury resulting from handling, using, or ingesting certain drugs and other household substances; and the Refrigerator Safety Act of 1956, which directs CPSC to prescribe safety standards for household refrigerators to ensure that their doors can be opened easily from the inside. CPSC has also subsequently been charged with administering the Virginia Graeme Baker Pool and Spa Safety Act, which establishes mandatory safety standards for swimming pool and spa drain covers, as well as a grant program to provide states with incentives to adopt pool and spa safety standards. In addition, CPSC has been charged with administering the Children's Gasoline Burn Prevention Act, which establishes safety standards for child-resistant closures on all portable gasoline containers. Thus, CPSC's jurisdiction is extremely broad, covering thousands of types of products. According to CPSC, this jurisdiction covers over 100,000 different manufacturers and generally includes all consumer products except food, drugs, and cosmetics, which are regulated by FDA; pesticides, which are regulated by the Environmental Protection Agency; automobiles and other on-road vehicles, which are regulated by the Department of Transportation; flotation devices, which are regulated by the Coast Guard; and firearms, tobacco, and alcohol, which are regulated by the Department of Justice. The Consumer Product Safety Act established CPSC as an independent regulatory commission. The rationale for establishing independent commissions such as CPSC includes these assumptions: (1) long-term appointment of commissioners would promote stability and develop expertise, (2) independent status would insulate the commissioners from undue economic and political pressures, and (3) commissioners with different political persuasions and interests would provide diverse viewpoints. The act provides for the appointment by the President of five commissioners for staggered 7-year terms. However, no more than three commissioners have served at one time since 1986, and the commission has been led by two commissioners since 2006. One of these commissioners is designated the Chairman, who directs all the executive and administrative functions of the agency. CPSC was designed as a complement to tort law, under which one may seek compensation for harm caused by another's wrongdoing. The threat of legal action under tort law plays an important role in assuring that companies produce safe products.
However, tort law is primarily a postinjury mechanism, and foreign manufacturers are usually outside of the U.S. tort law system. Therefore, CPSC has certain authorities intended to prevent unsafe consumer products from entering the market in the first place. Under several of the acts that it administers, CPSC primarily protects consumers from unreasonable risk of injury or death by issuing regulations that establish performance or labeling standards for consumer products. These standards are often referred to as “mandatory standards.” CPSC issued 38 mandatory standards between 1990 and 2007. If CPSC determines that there is no feasible standard that would adequately protect the public from danger, CPSC may issue regulations to ban the manufacture and distribution of the product. Many consumer products are subject to voluntary standards. These voluntary standards, which are often established by private standard-setting groups, do not have the force of law. However, many voluntary standards are established with input from consumer groups and industry and, as a result, are often referred to as “consensus standards.” In addition, the 1981 amendments to the Consumer Product Safety Act require CPSC to defer to a voluntary standard—rather than issuing a mandatory regulation—if CPSC determines that the voluntary standard adequately addresses the hazard and that there is likely to be substantial compliance with the voluntary standard. Between 1990 and 2007, CPSC worked with industry and others to develop 390 voluntary standards related to consumer products. CPSC’s policy on imported products states that the commission will seek to ensure that importers and foreign manufacturers, as well as domestic manufacturers, distributors, and retailers, carry out their obligations and responsibilities under the five acts. The commission will also seek to establish, to the maximum extent possible, uniform import procedures for products subject to the acts the commission administers. Two CPSC staff offices have primary responsibility for carrying out this policy: the Office of International Programs and Intergovernmental Affairs and the Office of Compliance and Field Operations. The Office of International Programs and Intergovernmental Affairs was created in 2004 to provide CPSC with a more comprehensive and coordinated effort at the international, federal, state, and local levels in developing and implementing consumer product safety standards. The office conducts activities and creates strategies aimed at ensuring greater import compliance with recognized American safety standards. A major emphasis of this program is encouraging foreign manufacturers to establish product safety systems as an integral part of the manufacturing process. The office is also involved in coordinating international consumer product safety efforts with such U.S. federal agencies as the Departments of Commerce and State. It also ensures that CPSC regulatory efforts are consistent with U.S. international trade obligations by coordinating with the United States Trade Representative. As of July 2009, the office was staffed by four full-time employees. The Import Surveillance Division within the Office of Compliance and Field Operations was created in March 2008 and has primary responsibility for CPSC’s product surveillance program at ports of entry.
CPSC, in cooperation with other appropriate federal agencies, is required to maintain a permanent product surveillance program for preventing the entry of unsafe consumer products into the commerce of the United States. Previously, CPSC operated the import surveillance program through product safety investigators staffed in multiple regions throughout the country who included among their investigative responsibilities ports of entry in their particular regions. Over the years, the numbers of CPSC regional offices and product safety investigators have been reduced. CPSC states that these product safety investigators continue to support the import surveillance program, operating in 48 locations throughout the country. The Import Surveillance Division marks the first permanent, full-time presence of CPSC investigators at key ports of entry, according to CPSC. As of July 2009, the division was staffed by 11 full-time employees—9 compliance investigators located at seven ports of entry and a Director and Supervisory Compliance Investigator located at CPSC headquarters in Bethesda, Maryland. The compliance investigators are supported by compliance officers, technical staff, attorneys, and other staff at CPSC headquarters. There are over 300 ports of entry in the United States. CBP notifies CPSC and other regulatory agencies with import safety responsibilities of the arrival of imported products and provides information about those products. Under several of the acts that CPSC administers, CPSC identifies potentially unsafe products and requests that CBP set them aside for CPSC examination. CPSC has implemented programs at some ports for CBP to target certain categories of products based on their Harmonized Tariff Schedule (HTS) codes. CBP has import specialists at major ports who specialize in certain commodities, including consumer products. They analyze manifest, entry, and other import data to identify shipments for CPSC review. In some instances, CBP will independently identify shipments for CPSC examination. Once samples are delivered to or taken by CPSC for examination, CPSC may detain the shipment pending further examination and testing, conditionally release the shipment to the importer’s premises pending examination and testing, or release the shipment to the importer outright. Compliance investigators examine the sample to determine whether it complies with the relevant mandatory standard(s); is accompanied by a certification of compliance with the relevant product safety standard that is supported by testing, in some instances by a third party; is or has been determined to be an imminently hazardous product; has a product defect that presents a substantial product hazard; or is produced by a manufacturer who failed to comply with CPSC inspection and recordkeeping requirements. If compliance investigators decide that further testing of a sample is necessary, they will send the sample to the CPSC Product Testing Laboratory or to a CBP laboratory. If the sample is found to violate any of the above criteria, CPSC is authorized to refuse admission of the shipment. Consumer products that are refused admission will be destroyed unless the Secretary of the Treasury allows the product to be exported. CPSC may instead instruct CBP to seize shipments upon finding a prohibited act, which according to CPSC is the most common outcome when a violation is discovered. The importer may be subject to civil or criminal penalties. See figure 1 for an overview of CBP and CPSC’s current process for conducting inspections at ports of entry.
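Viewed end to end, the examination sequence just described amounts to a small decision tree. The sketch below is purely illustrative: it is not an actual CPSC or CBP system, and every field name and criterion is a hypothetical stand-in for the checks described above. It simply shows how a sample’s disposition follows from the sequence summarized in figure 1.

```python
# Illustrative sketch only -- not an actual CPSC/CBP system. All field
# names and criteria are hypothetical stand-ins for the checks described
# in the text (mandatory standards, certification, hazards, inspections).

from dataclasses import dataclass

@dataclass
class Sample:
    meets_mandatory_standards: bool
    has_required_certification: bool
    imminently_hazardous: bool
    substantial_product_hazard: bool
    maker_complied_with_inspections: bool
    needs_lab_testing: bool

def disposition(sample: Sample) -> str:
    """Return the outcome for an examined sample, mirroring the
    sequence described in the text."""
    if sample.needs_lab_testing:
        return "send to CPSC Product Testing Laboratory or CBP laboratory"
    violation = (
        not sample.meets_mandatory_standards
        or not sample.has_required_certification
        or sample.imminently_hazardous
        or sample.substantial_product_hazard
        or not sample.maker_complied_with_inspections
    )
    if violation:
        # Refusal of admission is authorized; seizure is the most common
        # outcome when a prohibited act is found, according to CPSC.
        return "refuse admission or request CBP seizure"
    return "release shipment to importer"

# Example: a toy that lacks its certification of compliance.
print(disposition(Sample(True, False, False, False, True, False)))
# -> refuse admission or request CBP seizure
```

The single combined violation test mirrors the report’s description: failing any one of the listed criteria is sufficient grounds to refuse admission or request seizure.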
CPSC relies on CBP to carry out key import surveillance activities at ports of entry. In addition to its numerous antiterrorism and trade responsibilities, CBP faces pressure from the international trade community to quickly move compliant shipments into commerce. Factors such as the high volume of containers, financial incentives for longshoremen to unload ships quickly, and the limited amount of time CBP has to identify and examine cargo contribute to the challenges CBP faces in facilitating commerce. In addition, CBP enforces regulations for 45 other federal agencies. Importers place pressure on CBP to correctly identify violations because the cost of storing CBP-detained products at privately run container examination stations is high. CPSC surveillance activity with CBP at ports of entry has fluctuated in recent years. For example, as shown in figure 2, the number of samples that CPSC collected for examination dropped from 1,348 in fiscal year 1999 to 710 and 514 in fiscal years 2002 and 2003 and has still not reached the 1999 level, despite an increase in imports of products under CPSC jurisdiction of about 101 percent. Consensus exists that CPSC’s authorities have the potential to be effective in preventing the entry of unsafe products into the United States. Although CPSC has made limited progress in measuring the effectiveness of its authorities over imported products, the agency believes that new authorities granted in CPSIA should increase compliance with mandatory standards and enhance its ability to monitor compliance with voluntary standards at ports of entry. Private industry sources and others we interviewed generally said that CPSC’s authorities are potentially effective but that implementation is limited by competing priorities and resource and practical constraints. There is consensus among those we interviewed that CPSC has broad authority to prevent the entry of unsafe consumer products into the United States, particularly in light of new authorities that strengthen its ability to enforce mandatory standards and protect consumers from unsafe products subject to voluntary standards at ports of entry. As described above, CPSC primarily protects consumers from unreasonable risk of injury by promulgating mandatory standards and working with private standard-setting organizations to promulgate voluntary standards, and CPSC has broad authority to enforce those standards at ports of entry. In particular, CPSC and other product safety experts believe CPSC’s enforcement of mandatory standards at ports of entry will be strengthened because now all products subject to a mandatory standard under any law administered by CPSC must be accompanied by a certification of compliance that is supported by product testing. In addition, every manufacturer or private labeler of a product subject to a children’s product safety rule must have samples of the product tested by an accredited third-party laboratory for conformance with the applicable mandatory standard.
For many years, CPSC focused import surveillance activities on enforcement of certain mandatory standards for consumer products, primarily toys, fireworks, and lighters. The new testing requirement places a greater burden on industry to ensure that products comply with mandatory standards. If implemented properly, CPSC should be able to use the testing and certification requirements to strengthen surveillance of regulated products at ports of entry. Furthermore, CPSC believes its ability to monitor compliance with voluntary standards at ports of entry will be strengthened by new authority to create a “substantial product hazard list.” As described above, many consumer products are produced according to voluntary standards. In addition, many products are subject to no standards. CPSC primarily protects consumers from unsafe products subject to voluntary or no standards by declaring them “substantial product hazards” when the products have a defect that creates a substantial risk of injury. However, CPSC faces difficulty at ports of entry identifying defects in products subject to voluntary or no standards because defects are not always apparent until the product has been used by the public. With implementation of the substantial product hazard list, CPSC will be able to target new shipments and refuse admission of products subject to voluntary standards that it has already determined have a defect constituting a substantial risk of injury. Despite this broad authority, CPSC has made limited progress in measuring the effectiveness of its authorities to prevent the entry of unsafe consumer products. CPSC measures the performance of its import surveillance program by the number of product samples collected and by the number of samples ultimately found to be unsafe and therefore seized. CPSC is now considering altering this metric so that it will track all shipments that CPSC investigators examine, rather than just those samples collected and tested. Furthermore, CPSC measures the performance of its Office of International Programs and Intergovernmental Affairs by the number of outreach events conducted. These metrics provide measures of the output of program staff but do not necessarily provide accurate measures of the effectiveness of the programs. In the 1990s, CPSC used industry compliance with mandatory standards as an alternative basis for measuring the agency’s effectiveness, under what it termed the Comprehensive Plan. The plan was designed to examine the compliance of these products with mandatory standards on a periodic basis and then identify problem areas for focusing limited agency resources. CPSC did not continue the Comprehensive Plan after the mid-1990s because the data indicated that compliance was high, and CPSC believed that the plan did not help it address problems with noncompliant products. CPSC sought information from the public in 2008 to develop a new methodology that would replace the Comprehensive Plan. CPSC reported receiving two responses, but commission staff stated that they did not pursue further work because the responses did not address their needs for developing new performance measures. While CPSC recognizes the need for outcome-oriented performance measures and has taken steps to develop new ones, until such measures are in place the agency may not be able to determine how effective its authorities are for preventing the entry of unsafe products.
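The distinction between output and outcome measures can be made concrete with a small hypothetical example. In the sketch below, the counts of samples collected and seizures made are output measures of the kind described above, while a violation rate per shipment examined is one candidate outcome-style measure. The numbers and the metric itself are invented for illustration; CPSC has not adopted this specific measure.

```python
# Hypothetical illustration of output vs. outcome-style measures.
# All numbers are invented, and this is not a metric CPSC has adopted.

shipments_examined = 2500   # every shipment investigators looked at
samples_collected = 800     # an output measure: activity performed
samples_violative = 120     # samples found unsafe and seized

print("samples collected (output):", samples_collected)
print("samples seized (output):", samples_violative)

# Outcome-style measure: of everything examined, how much was caught?
violation_rate = samples_violative / shipments_examined
print(f"violations per shipment examined (outcome-style): {violation_rate:.1%}")
```

The output figures rise simply by doing more work; the rate only improves if surveillance actually gets better at finding unsafe products, which is closer to the program’s goal.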
While CPSC has broad authority to prevent the entry of unsafe consumer products into the United States, there have been delays in implementing new authorities CPSC received in CPSIA. According to CPSC, the agency has more than 40 rulemakings to conduct under CPSIA, including approximately 20 rulemakings to initiate or complete by August 2010, which has contributed to the delay in implementing the act. In particular, the two new authorities discussed above—certain testing and certification requirements and the substantial product hazard list—have not been implemented. CPSC issued a stay of enforcement of certain testing and certification requirements until February 10, 2010, delaying implementation of these standards and raising questions among manufacturers subject to this requirement. CPSC stated that it did not complete the rulemaking process because it was unable to respond to the numerous inquiries from industry seeking relief from the testing requirement at a time when the agency faced severe resource limitations from operating under the prior year’s budget. In addition, to date CPSC has not conducted rulemaking to implement the substantial product hazard list. The effectiveness of CPSC’s new authorities will not be clear until CPSC completes its rulemaking and demonstrates the ability to enforce these regulations. Another factor contributing to delays in implementation of new authorities is the need for CPSC to balance its mission to protect consumers with industry interests. CPSC’s mission is to protect the public from unreasonable risk of injury associated with consumer products, and CPSC is also required to work with industry to develop product safety standards, collect information about unsafe products, and conduct recalls. Private companies have expressed concerns about CPSC’s implementation of CPSIA, particularly the expanded testing and certification requirements, which, as noted earlier, contributed to CPSC’s decision to delay enforcement of these provisions. In public comments on CPSIA, several industry representatives commented that the certification requirements are duplicative and could cause them to incur tremendous costs due to the complexity of their business operations. For example, industry representatives stated that large manufacturers produce hundreds of thousands of variations of their products that may require testing and certification, while small manufacturers may have limited product lines across which to spread costs. In addition to industry concerns, CPSC has also faced concerns from consumers that CPSC’s implementation of CPSIA has not, at times, fulfilled the consumer protection goals of the act. In one recent example, consumer groups challenged CPSC’s advisory opinion that CPSIA’s provisions prohibiting the sale of children’s products that contain certain chemicals called phthalates did not apply retroactively to inventories existing prior to the effective date of the prohibitions. These groups were concerned that if the phthalate prohibitions were not applied retroactively, consumers would continue to be exposed to unsafe products in the marketplace. The consumer groups filed suit in a federal district court seeking a declaratory judgment that CPSC’s advisory opinion, which was issued at the request of certain wholesale and retail entities, was contrary to CPSIA, and thus violated the Administrative Procedure Act.
The district court held that the phthalate prohibitions in CPSIA unambiguously applied to existing inventory and set aside CPSC’s opinion. According to some industry representatives we interviewed, retailers are taking the lead in product testing and certification in response to industry’s uncertainty over how CPSC will enforce CPSIA provisions. These representatives believed that retailers are ahead of CPSC in this regard. For example, one industry group said that although CPSC has stayed enforcement of many of its certification requirements, retailers still require suppliers to provide certifications, and some retailers had more stringent lead standards than those in CPSIA. According to industry groups, U.S. companies, particularly retailers, have an incentive to institute and enforce stringent product safety standards because selling products that cause injury or death can have negative impacts on their brands. The U.S. tort system, which exposes companies selling unsafe products to lawsuits, also helps to ensure that companies comply with product safety standards. To respond to industry concerns about how to comply with safety standards under current and prior consumer product safety laws, some industry groups have also developed or are developing their own testing and certification programs. CPSC indicated that while these types of programs can help improve compliance with safety standards, there are limits to how well this type of industry self-regulation can be used to protect consumers. CPSC staff indicated that there is a trade-off between consumer protection and industry cooperation; if the requirements are too onerous, companies might not participate in these voluntary programs. Balancing the interests of both consumer and industry participants adds complexity in completing CPSC’s implementation of CPSIA. CPSC needs better targeting information to strengthen its ability to identify risks from imported products and communicate inspection priorities to CBP. CPSC and CBP have a cooperative relationship at ports of entry. That is, while CPSC relies on CBP to carry out key import surveillance and targeting activities at ports, CBP relies on CPSC to communicate the greatest risks and its inspection priorities among consumer products. However, CPSC has not developed formal systems for assessing risks and focusing inspection activities with CBP. Furthermore, CPSC does not have access to information that would enable the agency to effectively target potentially unsafe imported products for inspection. In the past, CPSC has generally used informal systems to target risks from imported products and to conduct operations with CBP at ports of entry with some positive results. CPSC has generally been effective using its informal systems to target certain products for inspection, according to several product safety experts we interviewed. For instance, CPSC has targeted imported fireworks for increased inspections during the summer months. CPSC has also had positive results from its participation in Operation Guardian, a multiagency effort to combat the increasing importation of substandard, tainted, and counterfeit products that pose a health and safety risk to consumers. Another program that CPSC stated has produced positive results is an expansion of the CBP Importer Self-Assessment Program that was initiated in October 2008.
The expansion, known as the Importer Self-Assessment Product Safety Pilot, aims to prevent unsafe imports from entering the United States by requiring volunteer companies to meet specified internal monitoring criteria in exchange for priority in testing, reductions in the testing conducted, and access to CPSC training programs. However, as discussed above, CPSC targeted relatively few imported consumer products for inspection under its informal system. CPSIA requires CPSC to establish a formal risk assessment methodology, which will necessitate updating the terms of the relationship between the agencies. CPSC and the former U.S. Customs Service (now CBP) established a memorandum of understanding (MOU) in 1990 that serves as the foundation for the working relationship of the agencies for enforcement of CPSC’s authorities over imported products. For example, the MOU provides for “the joint conduct of a mutually agreed number of high-visibility, intensive inspection operations annually.” This provision is consistent with CPSC’s informal system for targeting risks. The MOU is now out of date and does not reflect anticipated changes to CPSC’s relationship with CBP required under CPSIA. CPSIA requires CPSC, by August 2010, to develop a methodology for identifying shipments of imported consumer products that are likely to violate import provisions enforced by CPSC. A CPSC official told us that, as part of the agency’s work to develop this risk assessment methodology, CPSC plans to create a flowchart of the current product-entry process to identify gaps in any current CPSC authorities to stop unsafe products at the ports. The official noted that CPSC anticipates completing the flowchart later this year. Updating the 1990 MOU between CPSC and CBP and thereby revisiting the roles and responsibilities of each agency would be a useful way for CPSC to identify gaps in the current product-entry process and speed completion of its risk assessment methodology. During interviews with CPSC staff and our visit to a U.S. port of entry to determine how CPSC prevents the entry of unsafe products into the United States, we found that CPSC does not have access to CBP data that would provide CPSC with information about products in a shipment before it arrives in the United States. CPSC has access to entry summary data, which CBP generally receives shortly before a shipment enters the United States or, in some cases, as many as 10 days after the shipment has been released into commerce. However, CPSC does not have access to manifest data, which is provided to CBP 24 hours before a shipping vessel bound for the United States is loaded at a foreign port. CPSC and CBP established a second MOU in 2002, which superseded the 1990 MOU, specifying procedures and guidelines for information sharing between the agencies with a particular focus on CPSC access to CBP data systems. The 2002 MOU was intended to allow CPSC access to both entry summary and manifest data. According to a CPSC official, CBP has not provided CPSC with access to manifest data because it believed the data were not specific enough for CPSC purposes. For instance, the manifest data generally do not include the name of the importer and may not have specific Harmonized Tariff Schedule codes to help CPSC identify the merchandise in the shipment.
However, CPSC still believes that manifest data will help the agency improve its targeting, as it will give CPSC more timely information on shipments and potentially more specific information as CPSC seeks to revise the Harmonized Tariff Schedule codes to better align them with the categories of products it regulates. CBP also acknowledged that, while CPSC can use the entry summary data to target future shipments for inspection, CPSC cannot place inspection holds on shipments that are about to depart for or are in transit to the United States without the manifest data. In comparing CPSC border surveillance activities with those of other federal agencies that regulate the safety of products used by consumers, we found that FDA has a stronger capability to target imports using CBP data (discussed further below). FDA receives advance shipment data from CBP of all entries containing food under FDA jurisdiction that arrive at ports, which FDA then screens electronically against criteria it developed to detect potential violations. CPSC and CBP state that they have been working together to resolve information-sharing issues. Specifically, in February 2007, CPSC applied for access to the International Trade Data System (ITDS), which CBP intends to be a single source for import and export documentation that is to provide participating agencies quicker access to data and improved ability to identify potentially unsafe shipments of consumer products. As part of the application process, CPSC has submitted to CBP for review an operations plan (a “Concept of Operations” or “ConOps”) and an update to the 2002 MOU with guidelines for the exchange of information. The agencies have had follow-up discussions on these plans; however, CBP has reported that implementation of ITDS has been delayed. As a result, CPSC’s efforts to access more complete import data to help it better target incoming shipments have also been delayed. CPSC staff said that they anticipate this work will not be completed until at least 2011. In addition to this effort, CPSIA requires CPSC and CBP to improve information sharing and coordination. Specifically, CPSIA requires CPSC to develop, by August 2009, a plan for sharing information and coordinating with CBP. According to CPSIA, the proposed plan is to consider, at a minimum, the number of CPSC staff that should be stationed at U.S. ports and the nature and extent of cooperation between CPSC and CBP at the ports. The plan is also to discuss the nature and extent of cooperation between CPSC and CBP at the National Targeting Center or its equivalent. CPSC has not completed this plan, and it is unlikely to do so until it updates information-sharing agreements with CBP. A CPSC official told us that, as part of developing this plan for sharing information with CBP, CPSC is seeking to assign a staff member to a planned CBP targeting center that would focus on health and safety issues. This targeting center, which would be equivalent to the National Targeting Center, would seek to identify shipments of imported products that should be stopped at the ports for further screening and review. A CPSC official said that, in assigning a staff person to this targeting center, the agency would have access to CBP’s Automated Targeting System. However, creation of the planned health and safety targeting center has been delayed, so CPSC has not been able to place staff at the center or access CBP targeting information, delaying its ability to better target imported products.
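FDA’s practice of electronically screening advance shipment data against violation criteria suggests what manifest data received before arrival could enable. The sketch below is a hypothetical miniature of that idea; the rule set, record fields, and HTS prefixes are all invented for illustration and do not represent FDA’s, CPSC’s, or CBP’s actual systems. The point is that screening records before a vessel arrives makes an inspection hold possible, rather than chasing a shipment already released into commerce.

```python
# Hypothetical pre-arrival screening sketch -- the rules, fields, and
# HTS prefixes are invented; this is not any agency's actual system.

RISK_RULES = [
    # (description, predicate over a manifest record)
    ("toy from firm with prior lead-paint violation",
     lambda r: r["hts_code"].startswith("9503") and r["prior_violation"]),
    ("fireworks shipment ahead of peak summer season",
     lambda r: r["hts_code"].startswith("3604") and r["month"] in (5, 6)),
]

def screen(manifest_records):
    """Flag records for an inspection hold before the vessel arrives."""
    holds = []
    for record in manifest_records:
        for description, matches in RISK_RULES:
            if matches(record):
                holds.append((record["container_id"], description))
                break  # one matching rule is enough to place a hold
    return holds

records = [
    {"container_id": "ABCD1234567", "hts_code": "9503.00",
     "prior_violation": True, "month": 6},
    {"container_id": "WXYZ7654321", "hts_code": "6110.20",
     "prior_violation": False, "month": 6},
]
for container, reason in screen(records):
    print(f"hold {container}: {reason}")
```

Note that this kind of rule depends on exactly the fields the report says manifest data may lack, such as specific HTS codes and importer identity, which is why CPSC views both data access and HTS code revision as prerequisites for better targeting.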
A CPSC official explained that the analytical approach that FDA took by creating its own system for analyzing data would require a considerable investment of both time and money. CPSC prefers the option of working with CBP through the planned targeting center to leverage this analytical capability. CPSC believes this option would be more efficient than developing its own system to analyze data. CPSC’s enforcement of its authorities to prevent the entry of unsafe products into the United States is limited by resource and practical constraints. Specifically, CPSC has few staff at ports of entry and limited analytical and laboratory support. Furthermore, although CPSC has authority to destroy products refused admission, it lacks a source of funding to immediately pay for the costs of destruction. In addition, while CPSC has authority to condition the importation of consumer products based on compliance with CPSC inspection requirements, there are practical constraints on the agency’s ability to conduct inspections of foreign manufacturing plants. CPSC’s ability to inspect shipments for potential violations at ports of entry is limited by resource constraints, such as few staff at ports and limited analytical and laboratory support. In passing CPSIA, Congress recognized the need to strengthen CPSC’s resources, including requirements that CPSC increase the number of full-time employees to at least 500 by fiscal year 2013 and that CPSC hire additional personnel to be assigned to U.S. ports of entry. As noted above, CPSC had 9 compliance investigators stationed at 7 ports as of July 2009, as well as 100 product safety investigators in 48 other locations across the country who may help conduct periodic inspections at ports of entry. CBP staff indicated that having a CPSC compliance investigator collocated at ports has been useful, and during our visit to a U.S. port of entry we observed a cooperative relationship between officials of the two agencies. Furthermore, a CPSC official said that currently there is limited analytical support at CPSC headquarters to assist in import surveillance work. According to CPSC, the agency cannot establish a greater presence at U.S. ports without having the requisite analytical support. CPSC also has limited laboratory support for testing potentially unsafe products and has faced significant backlogs at various times. As of April 2009, CPSC had 28 engineers and scientists at its laboratory. CPSC’s laboratory facility is located across the country from where a large percentage of imported goods enter the United States. Moreover, fireworks, which are heavily targeted for inspection, must be tested at a separate facility under current procedures. As a result of these conditions, testing backlogs have inhibited import surveillance efforts. In May 2009, CPSC announced that it had secured and was in the process of outfitting a new laboratory with enhanced testing facilities. CPSC also announced that certain support staff from CPSC headquarters would be collocated at the lab to assist the laboratory staff. However, the new facilities still cannot accommodate fireworks testing. Moreover, the new facility does not provide CPSC with a presence on the West Coast, where many consumer products enter the United States.
As discussed below, in comparing CPSC’s resources supporting border surveillance with those of other federal agencies that regulate the safety of products used by consumers, particularly FDA and USDA, we found that CPSC’s resources are much smaller than those of these other agencies. According to CPSC and CBP, CPSC can refuse entry for products that violate U.S. laws, but CPSC does not have immediate funding available to subsequently destroy these products if the importers do not destroy or export them at their own expense. Instead, CPSC generally asks CBP to seize unsafe products, and CBP is authorized to access the U.S. Department of the Treasury’s Forfeiture Fund to cover the cost of product destruction. The Treasury Forfeiture Fund is also available to CBP for other enforcement purposes, so any money CBP uses to destroy seized products reduces the funds available to CBP for other purposes. Moreover, CBP is concerned that the costs of product destruction are likely to increase as CPSC fully implements CPSIA. Although CBP requires that formal entries be covered by a bond, which is another funding source that may be used to cover the cost of product destruction, we found that CBP has not pursued bonds for that purpose because they may not cover the full cost of destruction. CPSC officials also noted that bonds are not immediately available for product destruction but may only be recovered to reimburse destruction costs. However, a new mandate in CPSIA requires CPSC to work with CBP to set bond amounts sufficient to cover these costs. CBP and CPSC’s efforts to implement this requirement are still in progress. Given the limited resources immediately available for product destruction, CBP indicated that CPSC and other federal agencies might explore other funding sources for this purpose. However, we previously found that estimating the cost of destroying consumer products is difficult given the wide range of products CPSC oversees, making it challenging to determine the appropriate size of a dedicated fund. In addition to setting aside enough funds for product destruction, CPSC would have to consider establishing parameters on the use of any funding source it administers. While CPSC has broad authority to conduct inspections of manufacturers and importers, significant resource and practical constraints limit its ability to conduct traditional inspections of foreign manufacturing plants. CPSC is required by rule to condition the import of a consumer product on the product manufacturer’s compliance with CPSC inspection and recordkeeping requirements. CPSC does not conduct inspections in foreign countries, and CPSC and many product safety and international trade experts cite several constraints on its ability to do so. Specifically, these parties state that U.S. inspectors would likely need the consent of both the foreign manufacturer and the foreign government to conduct an inspection. Other experts stated that such consent from a foreign government, if granted, may be accompanied by a request for the same rights to inspect U.S. manufacturing plants. Another constraint on inspections of foreign manufacturers is that such a program would need to be prohibitively large in order to be effective, perhaps larger than CPSC’s domestic inspection program. As noted earlier, CPSC had about 100 product safety investigators in 48 locations to conduct its domestic inspections as of July 2009.
Also, it is not clear what CPSC would look for when inspecting foreign manufacturing plants given that CPSC evaluates the final product for compliance with product safety regulations rather than the production process. As noted above, CPSC may condition the import of consumer products on cooperation with inspections. However, ensuring that the specific manufacturer’s products do not enter the United States would be difficult without detailed knowledge of individual companies’ supply chains, which could be gained through inspection of the manufacturer’s records. Due to these legal and practical constraints, CPSC stated that expanding its international education and outreach activities rather than conducting inspections of foreign manufacturing plants would more effectively prevent the entry of unsafe consumer products.

CPSC’s regulatory authority to prevent the entry of unsafe imports is generally comparable to that of certain other federal agencies with substantial responsibility over the safety of products entering the United States. However, various border surveillance activities of FDA and USDA—particularly with respect to obtaining advance shipment data, allocating staff resources to border operations, and targeting capabilities, as well as efforts to work with foreign governments to educate foreign manufacturers about U.S. safety standards—provide useful information for strengthening CPSC’s efforts to prevent the entry of unsafe products. CPSC’s authorities to prevent the entry of unsafe products are generally comparable to the authorities of four other federal agencies: FDA, which oversees, among other things, food, drugs, and medical devices; NHTSA, which, through delegated authority of the Secretary of Transportation, oversees motor vehicles and equipment; Food Safety and Inspection Service (FSIS), an agency of USDA that oversees egg products, poultry, and meat; and Animal and Plant Health Inspection Service (APHIS), an agency of USDA that oversees plants and animals. CPSC has similar or stronger authority to require or engage in certain activities compared with the other agencies we studied.

Safety standards: All of these agencies have authority to regulate and enforce product safety standards or bans relevant to products under their jurisdiction.

Border surveillance: All of these agencies except NHTSA appear to have specific authority to conduct border surveillance activities and broad authority to refuse entry to items that fail to comply with relevant standards, among other things. NHTSA officials told us that, like CPSC, NHTSA requests that CBP detain and seize products at the border on its behalf.

Product certification/testing: Similar to FDA, manufacturers must certify to CPSC that their products comply with relevant standards, and this certification must be based on a reasonable testing program or, in the case of certain children’s products, the tests must be performed by third parties. Under FSIS, containers of eggs, egg products, poultry, and meat must be labeled as having passed inspection. Although NHTSA authorities require manufacturers of vehicles and equipment to certify that products comply with applicable federal safety standards, these certifications are not required to be based on testing.

Temporary hold at ports: CPSC, FDA, FSIS, and APHIS have the authority to temporarily hold shipments at U.S. ports for inspection.
Foreign inspection: Like FDA and FSIS, CPSC is not expressly prohibited from requesting consent to inspect foreign facilities. Specifically, CPSC may request inspection of foreign manufacturing or distribution facilities, third-party testing laboratories, or conveyances used to transport consumer products in commerce. As discussed above, CPSC does not conduct foreign inspections. Both FSIS and FDA have been successful in obtaining access to foreign facilities for the purpose of inspections or audits where incentives are strong for foreign entities to grant this access. For example, access is generally provided for requests that are tied to applications or audits before products may be eligible for import into the United States. FDA officials told us that in practice, if a foreign firm refuses to permit such an inspection, FDA can sometimes refuse admission of products offered for import into the United States. For example, the refusal to permit an inspection could lead to a product not receiving a required pre-market approval, or the refusal, combined with other information, could support a determination of the appearance of a violation. According to NHTSA, it does not have the authority to inspect foreign facilities for the manufacture of vehicles and vehicle equipment imported into the United States.

Consent to local court jurisdiction: Based on our interviews with officials at the federal agencies we studied, none of the agencies requires foreign manufacturers to consent to the jurisdiction of local courts with respect to enforcement actions. Some agencies, including CPSC, told us they do not see a need for this requirement, as they have been able to effectively carry out their enforcement duties under existing authorities. For example, foreign manufacturers seeking to offer motor vehicles for import into the United States are required by statute to designate a U.S. resident or firm as their agent to receive service of notices and process in administrative and judicial proceedings, and service on the agent is deemed to be service on the foreign manufacturer or importer. Also, FSIS told us that it expects foreign governments to carry out enforcement actions for their manufacturers that are certified to export to the United States. CPSC noted that it has satisfied its enforcement objectives by pursuing the domestic partners—manufacturers, importers, and retailers—of the foreign manufacturer without needing to resort to adjudicative proceedings. For example, in June 2009, CPSC reached a $2.3 million settlement with Mattel, Inc., regarding the importation of toys made in China that violated a federal ban on paint containing lead. Furthermore, CPSC also has the ability to settle enforcement actions with foreign parties. For example, in July 2009, CPSC reached a $50,000 settlement with a Hong Kong corporation with offices in the United States regarding the importation of toys manufactured in China that also violated the commission’s lead paint ban. Finally, CPSC staff we interviewed stated that the agency prefers to expand its international education and outreach programs rather than require foreign manufacturers to consent to U.S. jurisdiction to effectively prevent the entry of unsafe products, although they acknowledged that consent to jurisdiction or a requirement of a U.S. agent for service of process would be helpful. Appendix II contains a more detailed discussion of the elements of establishing personal jurisdiction in U.S. courts.
The requirements of the Consumer Product Safety Act appear to demand more from manufacturers than NHTSA’s authorities do with respect to preventing the entry of unsafe imports. NHTSA’s key authorities to ensure the safety of imported goods are to prescribe mandatory vehicle safety standards and to require foreign and domestic manufacturers to certify compliance with these standards. However, these certifications are not required to be based on a testing program, unlike CPSC’s new certification requirements for children’s products, nor are the results of any testing required to be reported to NHTSA as a condition to entry. Appendix III contains a more detailed description of the agencies’ key authorities for preventing the entry of unsafe products. Where key differences exist in these agencies’ authorities, they appear to be due to differences in the types of products under an agency’s jurisdiction and the particular risks that are presented. As such, these differences are not directly applicable to CPSC as it improves its ability to ensure the safety of imported goods.

FSIS’s foreign country equivalency: A major feature of FSIS’s framework for ensuring the safety of imported meat, poultry, and egg products is a requirement that foreign countries have a certified food safety system equivalent to that of the United States. As of fiscal year 2008, 34 foreign countries were eligible to import these products into the United States. According to an FSIS budget document, the United States invests substantial resources, over $800 million in fiscal year 2008, in the inspection of domestic products. The amount of funds spent on domestic inspection is relevant given that the concept of foreign equivalency is predicated on there being a domestic inspection program. As such, it is unclear how FSIS’s country equivalency program could be adapted for CPSC given that CPSC does not have comparable resources for the inspection of domestic products, with a budget of about $80 million in fiscal year 2008 for all of its activities. Furthermore, the concept of equivalency for meat, poultry, and egg products is established in a 1994 multilateral trade agreement, to which the United States is a signatory. According to the United States Trade Representative, it is not clear whether any WTO Agreement to which the United States is a party specifically precludes application of an equivalency requirement to consumer products.

FDA’s preapproval of certain drugs and medical devices: In addition, FDA requires manufacturers to obtain prior approval for marketing certain drugs and for selling certain medical devices in the United States. However, FDA’s prior approval requirement would be inefficient for CPSC given the diversity of products it oversees and the frequency with which these products change or are updated. CPSC oversees thousands of types of consumer products, and many of the products it oversees, especially toys, change or are updated every year.

Other key statutory differences across agency authorities need not be addressed by providing CPSC with new authorities because CPSC officials have told us they already consider CPSC to have similar authorities.
Agreements with foreign governments and overseas presence: FDA is authorized to participate through appropriate processes with representatives of foreign countries to reduce the burden of regulation, harmonize regulatory requirements, and achieve appropriate reciprocal arrangements, including international agreements such as mutual recognition agreements, agreements to facilitate commerce in devices, and memorandums of understanding, among other things. As discussed below, CPSC already has MOUs with foreign governments, including China and the EU, and is finalizing plans for its first overseas office in Beijing, China, in 2010.

As CPSC considers ways to improve its ability to prevent the entry of unsafe imports, various agencies’ border surveillance and outreach activities to foreign governments and industry provide useful information. FDA, FSIS, and APHIS have expansive border surveillance activities based on the amount of data obtained on incoming shipments, number of staff supporting border surveillance operations, and targeting programs and information technology systems that help to integrate data from various sources for use in making border entry decisions. These capabilities enable these agencies to screen incoming shipments for a greater number of risks than CPSC does. According to data provided by CPSC, the agency has generally focused on relatively few categories of consumer products since 2001, specifically toys, fireworks, lighters, and electrical products (such as holiday lights and extension cords). FDA and FSIS have better access to data for screening incoming shipments than CPSC does. FDA receives shipment data from CBP for all entries under FDA jurisdiction that are imported or offered for import, which FDA then screens electronically against criteria it developed to detect potential violations, including information from domestic surveillance and outreach to foreign governments. In addition, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 requires that FDA be given advance notice on shipments of imported food. FDA allows importers to provide this data no more than 30 days in advance of the date of arrival. This advance information helps FDA determine whether the food potentially poses a bioterrorism or other significant health risk such that FDA should deploy resources to the port of arrival so that an inspection can be conducted before the product enters the United States. FDA officials told us that this information has been so important in screening food shipments for potential violations that they are considering expanding prior notification requirements to all products the agency oversees. FSIS requires by regulation that various information accompany shipments of meat, poultry, and egg products in order to be considered for admission into the United States, including a foreign health certificate. As discussed earlier, while CPSC receives entry summary data regarding shipments already released into commerce, CPSC does not receive data on incoming shipments prior to their arrival at U.S. ports of entry, though CBP receives such data as much as 24 hours before the shipment is loaded in the foreign port. Without advance shipment data, CPSC lacks information that other agencies have found useful in screening incoming shipments for potential safety violations. FDA and USDA have significantly more staff supporting border operations than CPSC.
Federal agencies assign staff resources to border operations to identify and refuse admission to potentially unsafe imported products. NHTSA has no staff dedicated to border operations, but instead relies on CBP to screen incoming shipments and third-party laboratories to test pulled shipments. However, FDA, FSIS, and APHIS assign significantly more staff resources to border operations. According to FSIS officials, the agency physically examines 100 percent of meat, poultry, and egg product shipments presented for import with about 75 inspectors located at approximately 150 facilities near 35 border entry points. In addition, FSIS employed 20 import surveillance officers as of fiscal year 2009. APHIS officials told us that 100 percent of plants and animals are inspected in cooperation with CBP. Because of the high percentage of shipments that are inspected, staff resources are accordingly greater. For example, about 1,800 port staff had been assigned to inspect fruit and plants at 139 ports of entry as of 2003. FDA examines approximately 1 percent of food presented for import and has requested about $382 million for fiscal year 2010 for activities that support import safety. This amount would fund approximately 700 staff supporting import examinations alone, including port operations, of which 78 percent would be field-based. FDA personnel cover most ports of entry into the United States, including 297 ports in fiscal year 2008, but for the ports where FDA does not maintain a normal presence, it coordinates with CBP to ensure it is notified of relevant incoming shipments for which examination and/or sampling may take place. FDA’s border inspection activities are supported by compliance programs for agency field staff to use in carrying out inspections, sample collections, and analyses, among other things. For food safety alone, there are approximately 25 compliance programs and 12 that cover different imported foods. While FDA, FSIS, and APHIS have significant resources devoted to port and overseas activities, they still face significant challenges in ensuring that products entering the United States are safe for consumers. As discussed earlier, CPSC has 9 compliance investigators at seven ports of entry, as well as about 100 product safety investigators located across the United States who work episodically to support the import surveillance program. Although the missions of FDA, USDA, and CPSC differ, CPSC’s staff resources supporting border surveillance are much smaller than the staff resources of these other agencies and may not be adequate to prevent unsafe products from entering the United States. FDA and USDA have more sophisticated information technology systems and analytical support to target potential risks at border entry points. FDA, FSIS, and APHIS invest significant resources in information technology systems that support border surveillance efforts. To oversee inspection of plants and animals, CBP created positions in each of its 20 district offices for agriculture liaisons. These liaisons not only advise CBP on border surveillance operations but also report back to APHIS on risks detected at the border for the purpose of expanding targeting operations. These liaisons have access to CBP’s Automated Targeting System, a computer system that stores detailed information from cargo manifests and other documents that shipping companies are required to provide before shipments arrive at ports for inspection. This system allows border staff to focus inspections on higher risk cargo.
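A targeting system of the kind just described works, at its core, by scoring each arriving shipment against stored risk factors and ranking the results so that scarce inspection staff concentrate on the riskiest cargo. The sketch below is a hypothetical miniature of that idea; the risk factors, weights, and data are invented for illustration and do not reflect the actual logic of the Automated Targeting System.

```python
# Hypothetical sketch of manifest-based risk scoring, loosely modeled on
# the targeting-system concept described above (store manifest details,
# score cargo, focus inspections on the highest-risk shipments). All
# factors, weights, and data here are invented for illustration.

shipments = [
    {"id": "S1", "new_shipper": True,  "product_recalled_before": False,
     "country_risk": 0.6},
    {"id": "S2", "new_shipper": False, "product_recalled_before": True,
     "country_risk": 0.3},
    {"id": "S3", "new_shipper": False, "product_recalled_before": False,
     "country_risk": 0.1},
]

WEIGHTS = {"new_shipper": 0.3, "product_recalled_before": 0.5}

def risk_score(s):
    """Combine invented risk factors into a single score."""
    score = s["country_risk"]
    score += WEIGHTS["new_shipper"] * s["new_shipper"]
    score += WEIGHTS["product_recalled_before"] * s["product_recalled_before"]
    return score

# Rank all arriving cargo and examine only the highest-risk fraction,
# reflecting the reality that only a small share can be inspected.
ranked = sorted(shipments, key=risk_score, reverse=True)
for s in ranked[:2]:
    print(f"inspect {s['id']} (score {risk_score(s):.2f})")
```

Ranking rather than flagging is the key design choice: it lets an agency spend a fixed inspection budget on whatever happens to be riskiest that day, instead of being driven by however many shipments trip a fixed rule.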
FSIS invests substantially—nearly $1 billion—in data infrastructure systems to assist its border inspections by linking inspection data with other public health information, a design intended to help FSIS quickly and accurately identify trends and vulnerabilities affecting meat, poultry, and egg products. In addition, FSIS has developed a centralized computer system—the Automated Import Information System (AIIS)—that links all ports and tracks prior inspection results from each country and each foreign establishment for use in generating the type of inspection required on incoming shipments. FDA also uses an electronic environment—the Operational and Administrative System for Import Support (OASIS)—to screen shipments presented for entry for relative risks and for making entry or inspection decisions. OASIS links with other data systems within FDA to leverage the latest information relating to public health. Also, FDA staff manually enter criteria into OASIS from sources such as import alert documents so that products can be flagged for the appearance of violations as they enter U.S. customs territory. According to FDA officials, there are currently about 270 import alerts in effect. FDA officials also told us that overseas audits and direct communication with foreign governments provide useful information in helping border surveillance agents make entry determinations. As discussed earlier, CPSC targets relatively few products for border inspections and has not developed formal systems for assessing risk and providing port staff with risk management tools. Whereas border surveillance efforts are geared toward intercepting potentially unsafe products at U.S. borders, outreach activities focused overseas may prevent potentially unsafe products from being shipped to U.S. ports. To this end, FDA and USDA assign staff to permanent positions in foreign countries and send staff overseas on a temporary basis to conduct educational workshops, as well as to conduct audits and inspections. Furthermore, some agencies have established cooperative agreements with foreign agencies to facilitate product safety. FDA and APHIS overseas outreach efforts help inform the agencies about unsafe products. APHIS has more than 80 people around the world working with foreign embassies on plant and animal health issues. FDA announced the opening of offices in three cities in China in November 2008, and it has also announced plans to place technical experts and inspectors in four other regions: Europe, India, Latin America, and the Middle East. These overseas staff would be supported by approximately 8 employees at FDA headquarters in the United States. In addition, FDA has plans to hire 20 locally employed staff. FDA staff told us that an in-country presence is useful in preventing the entry of unsafe products because it improves the information border agents have to make entry decisions and allows the agency to train foreign establishments about compliance requirements. As discussed in more detail later, CPSC states that, with increased resources, it plans to open its first overseas office in Beijing, China, to facilitate safety efforts with one of the largest exporters of consumer products to the United States. FDA and FSIS conduct temporary visits, audits, or investigations in foreign countries that help to build foreign awareness of U.S. product safety laws. FSIS conducts on-site audits of foreign manufacturers as part of its systems equivalence determinations of foreign countries’ food safety systems.
FDA officials told us that the audits and announced inspections it conducts of overseas manufacturers are very useful in training these manufacturers about U.S. standards. Furthermore, FDA has reported that it has engaged in a variety of efforts with foreign governments to build foreign capacity and provide technical assistance. For example, FDA reports holding regional workshops in Peru and China, participating in a multilateral food safety meeting geared toward developing a rapid alert system, and auditing Chinese government inspectors during their review of 13 Chinese firms to detect drug residues in aquaculture products. As discussed earlier, CPSC does not conduct foreign inspections. However, CPSC staff have conducted visits to foreign manufacturing plants with the permission of the foreign government. CPSC also has plans for conducting three outreach and training events each for foreign government officials and foreign manufacturers in fiscal year 2010, but its outreach efforts are constrained by its small staff. FDA and FSIS have actively engaged with foreign governments on food safety. FDA has worked with foreign governments to develop cooperative arrangements and agreements, including a substantial number of international government-to-government agreements. FDA’s Web site indicates a total of 63 MOUs or other cooperative agreements with about 25 different foreign countries. FSIS has also negotiated government-to-government agreements as part of the food safety system equivalency determination process. Specifically, some countries have negotiated alternative sanitary measures to obtain this certification. As of July 2009, CPSC had established MOUs for the purpose of consumer product safety with 16 foreign agencies, as discussed later, but this activity has occurred relatively recently, over the last few years. Australia, Canada, the EU, Japan, and the United States have some similar authorities for consumer product safety, but institutional structures to implement these authorities vary from country to country, reflecting unique national approaches. Countries also share similar challenges—such as inconsistent laws and standards and ineffective cooperation and liaison among agencies involved in consumer product safety—and national governments’ efforts to address import safety challenges have intensified in light of the growing volume of imports and recent consumer safety incidents. Among officials we interviewed, there is broad consensus that continued cooperation among governments, regulators, and multilateral organizations can improve consumer product safety policy and enforcement consistency and, ultimately, the effectiveness of import safety frameworks. CPSC’s Office of International Programs and Intergovernmental Affairs participates in numerous activities with other countries and multilateral organizations. However, CPSC does not have comprehensive plans to guide its work with these countries and multilateral organizations due to resource constraints and other priorities, according to CPSC officials. Import safety authorities in Australia, Canada, the EU, Japan, and the United States reflect certain shared values and experiences. According to the OECD, a fundamental objective of consumer product safety policy is to prevent consumers from suffering harm as a consequence of using products that present an unreasonable risk of injury. While these countries have similar authorities, the implementation of those authorities may differ.
For example, all the countries monitor both domestically manufactured and imported products, and all conduct some type of product testing and/or sampling. However, some of the countries monitor goods on their own initiative, while others operate on the basis of complaints they receive about particular goods and products. According to the officials representing the countries we reviewed, none of those countries has the authority to conduct an extraterritorial inspection of the facilities of a foreign manufacturer that exports products to that country. In most cases, these officials stated that the countries have been more successful in working with the exporting country and its manufacturers to correct problems that may arise. These officials also stated that none of the countries we reviewed has the authority to require foreign manufacturers to consent to local jurisdiction. U.S. Embassy and Australian government officials indicated that, under current law, Australia could ask foreign jurisdictions to enforce Australian consumer product safety laws; however, the Australian government prefers other methods, such as approaching manufacturers directly to raise safety concerns.

The approach countries take to consumer product safety begins fundamentally with how they define “safe” and “unsafe” products. The definitions vary considerably from country to country, as indicated in appendix IV. According to a report by the OECD, most countries apply broad principles to determine whether a product can be defined as safe. For example, according to this report, in some countries (Japan, the United States, and the EU) all products must meet a positive standard—that is, they should be safe for consumers to use or consume prior to market distribution. Businesses selling unsafe goods may be subject to regulatory action, regardless of whether the product has caused a specific accident, injury, or harm to a consumer. In other countries (Australia and Canada), according to the OECD report, products must not breach a negative standard—that is, once the goods are placed on the market they should not carry an unreasonable risk of injury or death. The report notes that producers are held liable for the negative effects of their products once placed on the market.

According to officials, some variations exist with other authorities. They noted that in most of the countries we reviewed only a relatively small number of imported consumer products are subject to mandatory standards. However, according to a senior representative of an industrial association in Europe, the wide variety of product standards among countries, combined with the variable concepts and legal interpretations applied by governments, makes it difficult for industry to ensure safety and for countries to coordinate enforcement efforts.

Certification requirements also differ. Officials stated that neither Australia nor Canada requires certification for imported products. According to Japanese government officials, certain imported and domestic products in Japan are subject to product testing and cannot be sold in Japan without certification to prescribed standards. In the EU, according to official documentation, businesses must carry out conformity and safety assessments of their products in accordance with the General Product Safety Directive (GPSD) and are required to certify that their products are safe, as defined under the GPSD.
The documentation indicates that for some products self-declaration is sufficient, but other products require third-party verification.

According to the OECD report on consumer product safety, institutional structures for product safety can also vary from country to country, which can sometimes create challenges for coordination within and among countries and, in many cases, accounts for differences in enforcement and implementation of authorities. The report states that in Canada, consumer product safety policy, development, enforcement, information, and education functions are in one organization, Health Canada, with the provinces retaining some enforcement responsibilities. In the United States, CPSC is the primary agency responsible for implementing and enforcing federal consumer product safety laws and establishing consumer product safety policy. The OECD report further notes that some countries have institutional arrangements that separate policy and enforcement functions. In Japan, for example, policy responsibility is spread across the government in a range of departments, with a central coordinating function in a central policy agency (the Cabinet Office). Certain other countries, such as Australia, have regionally focused policy and enforcement structures for consumer product safety that reflect a division of powers and responsibilities between the national government and states, provinces, or regions. In the EU, policy responsibilities lie with the European Commission, the executive arm of the EU responsible for defining and implementing its policies and running its programs. However, individual EU member countries are responsible in their respective territories for enforcement—market surveillance, product monitoring and testing, and possible restrictive or corrective actions.

Countries also share similar challenges as they respond to changing demands in the international marketplace. Similar to the United States, national governments’ efforts to address import safety problems have intensified in light of the growing volume of imports entering each country and recent consumer safety incidents. According to the OECD report, many countries face enforcement challenges at both domestic and international levels, including inconsistent laws, regulations, standards, and sanctions within countries and across borders; ineffective cooperation and liaison among agencies involved in consumer product safety enforcement; and insufficient sharing of injury information across borders. Governments have taken a variety of actions to address these challenges, including enacting new laws and regulations and, in some cases, creating new organizations to address new consumer safety challenges. See appendix IV for more information.

Officials in Australia, Canada, the EU, Japan, and the United States indicate that a mix of bilateral (country-to-country) and multilateral (involving multiple countries) exchanges and agreements among importing and exporting countries has been useful in addressing import safety challenges. CPSC and its counterparts in other countries have taken a particularly active role in engaging China on consumer safety issues to create more transparent and cooperative relationships. According to the OECD’s 2008 Report on Consumer Product Safety, bilateral engagement helps facilitate an exchange of information regarding consumer product safety issues and provides a mechanism for coordinated action against unsafe products.
In the United States, CPSC’s Office of International Programs and Intergovernmental Affairs administers MOUs between CPSC and consumer product safety entities in other countries, maintains regular contact with key exporting countries, and attends meetings and discussions sponsored by multilateral organizations. According to CPSC, as of June 2009, the office had established MOUs for the purpose of consumer product safety with 16 foreign agencies in Brazil, Canada, China, the EU, Israel, South Korea, Peru, Chile, Costa Rica, India, Japan, Mexico, Taiwan, Egypt, Colombia, and Vietnam. CPSC’s Office of International Programs also conducts training sessions in various countries to explain U.S. import safety processes and procedures. According to CPSC, staff hold monthly teleconferences with the agency’s counterparts in Canada, China, and the EU, and every 2 months CPSC holds a three-way teleconference with Mexico and China to provide additional opportunities for engagement. In 2008, CPSC created a Chinese-language page on the CPSC Web site and, not long after, a Vietnamese-language page to help facilitate information sharing. The pages provide information about U.S. product safety requirements, including relevant regulations and standards for products bound for the U.S. market, as well as information about the new CPSIA.

Over the last few years, CPSC has increased its bilateral engagement with China. According to CPSC, the first U.S.-China Product Safety Summit was held in Beijing in 2005 and culminated in a joint Action Plan on Consumer Product Safety. CPSC and its counterpart in China, the General Administration for Quality Supervision, Inspection, and Quarantine (AQSIQ), established four working groups focused on fireworks, toys, lighters, and electrical products. According to CPSC, a third summit will be held in October 2009 and will build on the previous two, with the goal of institutionalizing a culture of product safety among Chinese consumer product manufacturers and exporters. In 2005, CPSC established a China Program Plan as a way of managing CPSC’s various China-related activities and as the basis for an overall strategy to promote the safety and compliance of Chinese consumer products exported to the United States. Although the plan is to be updated annually to account for changing conditions and new opportunities for progress, CPSC has not updated the China Program Plan since 2007. According to a senior CPSC official, the fiscal year 2008 and 2009 plans were essentially the same as the 2007 plan. He stated, however, that a revised China Program Plan for 2010 will be submitted to the reconstituted commission and will be published when approved.

Other countries have also established bilateral agreements with China. The European Commission engages in international contacts and cooperation and has, for instance, agreed on a Memorandum of Understanding with China’s AQSIQ. According to the EU, one of the key initiatives launched by the EU and China has involved the RAPEX system, the EU’s Rapid Alert System for nonfood consumer products. In May 2006, according to EU documentation, the European Commission decided to provide China’s AQSIQ with access to the RAPEX system—specifically, its notifications on products coming from China. EU officials report that China agreed to investigate all reported cases of dangerous products of Chinese origin and report back to the EU on the results, including withdrawals of export licenses and other corrective actions.
Also, EU officials state that certain individual EU member states have established limited bilateral contacts with China. According to Health Canada, Canada signed an agreement with China on import safety in 2007. A summit between China, the EU, and the United States occurred in November 2008 to strengthen trilateral cooperation on consumer product safety, according to U.S. and EU documents. As a key exporting nation, China has revised some of its own laws, regulations, and procedures in response to high-profile recalls of Chinese-made goods and the consequent international engagement on these issues, according to a senior CPSC official. He indicated that an example of such a change occurred in March 2009, when the Chinese National Institute of Standardization approved Administrative Guidelines for Safe Consumer Product Manufacturing that emphasize the role of manufacturers in ensuring consumer product safety. In addition, the CPSC official stated that China’s AQSIQ had reported to CPSC that it has significantly increased its inspection of paint on toys for export and closed down many factories that failed to implement a government requirement that paint suppliers for toys be selected only from a government-approved list.

Multilateral engagement on consumer product safety issues provides other ways to encourage sharing of information and lessons learned among a larger group of nations. Organizations such as the OECD, the Asia-Pacific Economic Cooperation (APEC), and the International Consumer Product Safety Caucus provide additional frameworks for cooperation. U.S. and other officials believe that continued cooperation and coordination among governments and regulators can improve policy consistency and enforcement and, ultimately, the effectiveness of consumer product safety frameworks, particularly since consumer safety enforcement challenges are shared by most nations.

On October 23, 2008, the OECD’s Committee on Consumer Policy hosted its first Roundtable on International Consumer Product Safety, with the aim of examining consumer product safety trends and challenges at both domestic and international levels. The Director of CPSC’s Office of International Programs and Intergovernmental Affairs attended this meeting, as did other OECD member nation representatives. The final report identified a number of key issues shared by member nations and initiatives for the future. CPSC representatives have also participated in APEC discussions concerning consumer product safety. In 2007, APEC leaders agreed on the need to develop a more robust approach to strengthening food and consumer product safety standards and practices in the region, using scientific, risk-based approaches and without creating unnecessary impediments to trade, according to APEC documents. APEC members reconvened in 2009 to determine future work on consumer product safety. CPSC’s Chairman and three staff participated in an APEC regulators’ dialogue on toy safety in August 2009 in Singapore aimed at strengthening information exchange among APEC members’ product safety officials. The International Consumer Product Safety Caucus is another platform that facilitates the exchange of information on consumer product safety issues in the areas of governmental policy, legislation, and market surveillance, with a view to strengthening collaboration and cooperation among governments and regulatory agencies around the world.
Current active members include Australia, Canada, China, the EU, South Korea, Japan, and the United States (represented by CPSC). The caucus meets at least twice a year.

While CPSC participates in numerous activities with other countries and multilateral organizations to establish and strengthen coordinated actions against unsafe consumer products, and has established MOUs with 16 foreign agencies for this purpose, CPSC does not have plans covering its work with these countries and multilateral organizations—except for China. According to CPSC, this is due to resource limitations in CPSC’s Office of International Programs and Intergovernmental Affairs (as discussed earlier, the office has four staff) and to its focus on China as the single largest source of foreign-made products. A senior CPSC official stated that with the creation of an additional staff position in the Office of International Programs, the office plans to expand its program planning to better address other countries. However, without a long-term plan that incorporates all the office’s activities, it is difficult to accurately assess current and future resource needs and take best advantage of the opportunities for future coordination and cooperation among importing and exporting nations that CPSC considers integral to preventing the entry of unsafe products. Long-term planning is particularly important for CPSC’s Office of International Programs and Intergovernmental Affairs because of the diverse nature of its responsibilities and the need to ensure consistency in CPSC’s policies.

CPSC has established annual goals and short-term plans to prevent the entry of unsafe products but lacks a long-term plan to address the agency’s growing role in import safety. Without a long-term plan, CPSC is not fully prepared to use new authorities granted in CPSIA, to effectively address the safety of imported products through international means, or to appropriately allocate any potential increases in agency resources.

In May 2009, CPSC submitted a 2010 Performance Budget Request to Congress, which contains a section called the Import Safety Initiative. This initiative has three key principles: (1) assure that product safety is built into manufacturing and distribution processes from the start, (2) increase enforcement at the border to stop dangerous goods from entering the country, and (3) enhance surveillance of the marketplace to remove unsafe products from store shelves. These three principles are consistent with principles established on a governmentwide basis in 2007. In particular, they are consistent with those established by the Interagency Working Group on Import Safety, of which CPSC was a part. The working group issued an Action Plan for Import Safety in November 2007 that established three organizing principles: (1) prevention, which means to prevent harm in the first place by working with the private sector and foreign governments to adopt an approach to import safety that builds safety into manufacturing and distribution processes; (2) intervention, which means to act swiftly and in a coordinated manner when problems are discovered to seize, destroy, or otherwise prevent dangerous goods from advancing beyond the point of entry; and (3) response, which means to take swift action to limit potential exposure and harm to the American public in the event an unsafe import makes its way into domestic commerce.
As part of the governmentwide strategy, CPSC developed its Import Safety Initiative, which contains annual goals that are consistent with the initiative’s key principles, but it is a short-term plan. For example, to help assure that product safety is built into manufacturing and distribution processes from the start, CPSC states that it plans to conduct three outreach and training events for foreign government officials in 2010 and three outreach and training events for foreign manufacturers. CPSC also has a short-term plan for how it will manage its various China-related activities and states that, for 2010, staff will review and update this plan. To increase enforcement at the border, CPSC states that it plans to increase the number of full-time staff working at U.S. ports and to increase the number of sample products screened at the ports. CPSC’s Import Safety Initiative also links goals to requests for increased resources. For example, CPSC states that, with increased resources, it plans to increase its presence at U.S. ports of entry and open its first overseas office in Beijing, China.

CPSC officials have described to us other short-term plans that they developed to respond to requirements and authorizations in CPSIA. For example, as discussed earlier in this report, CPSC’s decision to assign additional full-time staff to ports responds to Section 202 of CPSIA, which requires CPSC to hire personnel to be assigned to duty stations at U.S. ports of entry, or to inspect overseas manufacturing facilities, subject to the availability of appropriations. In its Import Safety Initiative, CPSC requests funding for 10 additional staff to be assigned to ports in 2010. A CPSC official with whom we spoke said that he expects the number of staff assigned to ports to grow from its current level of 9 to about 50 over the next few years. However, CPSC has conducted limited analyses of how it plans to assign additional staff to ports in the coming years, and standard operating procedures that describe compliance investigators’ roles and responsibilities at ports of entry have not been updated since 1989. CPSC officials acknowledged the need to update these procedures. CPSIA also requires CPSC, as discussed earlier in this report, to develop, by August 2010, a methodology for identifying shipments of imported consumer products that are likely to violate import provisions enforced by CPSC. CPSC, as noted earlier, has taken steps to develop a plan for sharing information and coordinating with CBP, but it is unlikely that CPSC will complete this plan by August 2009, as required under CPSIA, because of delays in updating its agreements with CBP.

In undertaking its planning efforts, CPSC has recognized the need for U.S. consumer product safety policy to comply with World Trade Organization (WTO) obligations and international trade agreements—a positive recognition on CPSC’s part. A CPSC official involved in international education and outreach activities said that, in working to address U.S. concerns about the safety of imported products, it is also critical to comply with WTO rules. The official said there are statutory requirements—namely, the Trade Agreements Act of 1979—mandating U.S. standards for complying with international trade agreements. He said that CPSC has had a productive working relationship with USTR in the past, and that CPSC is looking to formalize its working relationship with USTR in the future by developing internal standard operating procedures for consulting with USTR.
The official said that the procedures would be useful to CPSC in identifying issues that should have USTR’s input before they are finalized. CPSC has also recognized the importance of international trade agreements through its work with international groups, such as the OECD, as discussed previously in this report. In particular, CPSC has recognized that the WTO Agreement on Technical Barriers to Trade—which establishes rules for preparing, adopting, and applying technical regulations, standards, and conformity assessment procedures—serves to encourage uniformity and predictability in national consumer product safety regimes.

Although CPSC has established short-term plans and annual goals to prevent the entry of unsafe products, the agency has not developed a long-term plan for addressing its import safety work. In particular, CPSC has not updated its agencywide Strategic Plan, which was issued in 2003 and was due for revision in 2006. Under the Government Performance and Results Act, strategic plans help agencies establish long-term goals, including identifying the resources needed to accomplish these goals. The act calls for federal agencies to develop multiyear strategic plans and update them at least every 3 years. CPSC’s Strategic Plan does not reflect its import safety work, its plans for international education and outreach activities, its plans to use new authorities granted in CPSIA to prevent the entry of unsafe products, or its plans to respond to mandates in CPSIA to improve its risk assessment and coordination with CBP. CPSC has recently begun efforts to update its Strategic Plan by requesting public comments on revisions to the plan.

In addition to lacking a long-term plan to prevent the entry of unsafe products, CPSC does not have outcome-oriented performance measures to assess the effectiveness of its import safety work. One of CPSC’s goals for 2010 is to develop measures of import safety success, according to CPSC’s Import Safety Initiative. CPSC reports that, in 2008, staff researched and evaluated information for an enhanced surveillance system, making contact with FDA, CBP, and Internal Revenue Service staff to discuss the methods and requirements of their systems. As discussed earlier in this report, CPSC has also requested public input concerning the development of consumer product safety metrics, but it received only two responses, neither of which addressed CPSC’s need for new performance measures.

CPSC has established short-term plans and annual goals for its import safety work, but it does not have goals for these activities beyond 2010. Without a long-term plan for import safety that contains key goals and performance measures, CPSC may be unable to replicate or enhance its short-term efforts over the longer term. For example, CPSC may find it has insufficient staff to cover the meetings and seminars needed to work with foreign governments and foreign manufacturers over the long term to build product safety into manufacturing and distribution processes from the start. CPSC may also find it difficult to analyze any data it collects through surveillance of the marketplace to strengthen and improve its targeting decisions at the ports. Finally, CPSC may face challenges in ensuring that any further resources it devotes to increasing its port staff and operations are accompanied by appropriate growth in its analytical and other support staff to help ensure a comprehensive and balanced approach to product safety.
Broad agreement exists among CPSC staff, legal experts, industry representatives, and consumer advocates that CPSC’s authorities to prevent the entry of unsafe products into the United States have the potential to be effective, but only if they are implemented more fully. With delays in some rulemakings, such as testing and certification requirements, it remains unclear whether CPSC will be able to implement its authorities effectively. Furthermore, CPSC faces significant challenges due to competing priorities and resource constraints.

CPSC has taken positive steps to shift its approach to import product safety from one focused on responding to problems after products have entered the marketplace to one focused on preventing harmful products from ever reaching consumers. To implement this preventive approach, CPSC states that it is taking steps to enhance surveillance activities, increase enforcement at the ports, engage foreign governments, and educate foreign manufacturers on U.S. standards for consumer product safety.

Our work demonstrates that CPSC needs to strengthen its surveillance activities, particularly its ability to target potentially unsafe products for further screening and review at U.S. ports. CPSC has yet to obtain access to advance shipment data, which FDA’s experience suggests could be useful in targeting incoming shipments. In addition, CPSC’s agreements with CBP are outdated, which hinders CPSC and CBP’s ability to target imports under CPSC’s jurisdiction. CPSIA requires that CPSC and CBP work together to develop a methodology to assess the risks of various imported products and to cooperate on CPSC’s participation in a CBP targeting center. These joint efforts are a key element in improving CPSC’s ability to target shipments for screening and review at the ports and in ensuring consistent enforcement of CPSC’s authorities across the United States. Because CPSC relies heavily on CBP for enforcement at the ports, it is imperative for CPSC and CBP to resolve issues concerning their agreements for sharing information and update their procedures for operating at the ports. CPSC’s targeting efforts could be strengthened further through expanded engagement with foreign governments and education of foreign manufacturers on U.S. consumer product safety standards. Such outreach could inform industry of its responsibility for the safety of consumer products entering the United States and provide CPSC with information on manufacturing in the respective countries to assist the agency’s development of a risk assessment methodology for imported products. Without improving its ability to target potential risks across a broad range of product categories, it is unclear how CPSC will succeed in preventing unsafe consumer products from entering the United States.

CPSC’s inspection of foreign manufacturing plants faces practical constraints and would likely require tremendous resources to implement. CPSC believes that strong cooperative relationships between countries, aimed at building strong frameworks for consumer product safety, are a more effective approach for the United States. As part of its approach, CPSC is in the process of developing such relationships, and current MOUs between CPSC and certain foreign countries primarily address information sharing. CPSC officials state that expanding CPSC’s education and outreach, rather than inspecting foreign plants, could more effectively prevent the entry of unsafe consumer products. Similarly, officials from the U.S.
agencies, with the exception of FDA, and the countries we reviewed stated that they do not conduct inspections of foreign manufacturing plants. In most cases, officials we interviewed stated that the countries have been more successful in working with exporters to correct problems that may arise. Therefore, we are not recommending that any additional authorities be granted to CPSC at this time.

Efforts to expand U.S. jurisdiction to foreign manufacturers for purposes of enforcement action also present unique practical considerations. It may be argued that if foreign manufacturers were required to consent to U.S. jurisdiction, CPSC’s enforcement ability would be strengthened because CPSC would have one less hurdle to overcome in pursuing enforcement actions. Nevertheless, CPSC staff stated that, at this time, CPSC does not see the need for this requirement in order to effectively carry out its enforcement duties. To date, CPSC has been able to satisfy its enforcement objectives by pursuing the domestic partners—broadly defined to include those companies along the supply chain to the retailer—associated with the foreign manufacturer. CPSC also has the ability to settle enforcement actions with foreign parties.

FDA and USDA officials have found that their efforts to educate overseas industry and governments on U.S. safety standards and the particular risks being screened for at the border could reduce the number of unsafe products that reach U.S. consumers. Similarly, CPSC staff we interviewed stated that expanded international education and outreach, rather than expanded enforcement jurisdiction, would more effectively prevent the entry of unsafe products, although they acknowledged that consent to jurisdiction or a requirement of a U.S. agent for service of process would be helpful. Due to the practical considerations associated with requiring foreign manufacturers to consent to U.S. jurisdiction for purposes of CPSC enforcement actions, we make no recommendations for additional CPSC authorities at this time.

CPSC’s short-term plans to prevent the entry of unsafe products are consistent with a governmentwide approach taken by the Interagency Working Group on Import Safety in 2007. That group, of which CPSC was a part, established three organizing principles—prevention, intervention, and response—that represent, in our view, a comprehensive approach to import safety. However, CPSC lacks a long-term plan to prevent the entry of unsafe products. CPSC has not updated its September 2003 Strategic Plan, even though the Government Performance and Results Act requires this plan to be updated at least every 3 years. Although CPSC has initiated steps to update its Strategic Plan by requesting public comments, it is important for CPSC to work expeditiously to follow through on its efforts. In addition, while CPSC recognizes the need for outcome-oriented performance measures and has taken steps to develop new measures, it does not currently have such measures in place for its import safety work. Without a long-term plan that contains key goals and measures, CPSC may find it difficult to address its challenges in implementing the new authorities granted in CPSIA to prevent the entry of unsafe products, such as decisions about where and how to allocate any future increases in agency resources.
First, to ensure that CPSC is able to exercise its full authority to prevent the entry of unsafe consumer products into the United States, we recommend that CPSC ensure expeditious implementation of key provisions of CPSIA, including establishing the substantial product hazard list and implementing the testing and certification requirements that are subject to a stay of enforcement until February 2010, and complete its rulemaking as required under the act.

Second, to strengthen CPSC’s ability to prevent the entry of unsafe products into the United States, we recommend that the Chairman and commissioners of CPSC take several actions to improve the agency’s ability to target shipments for further screening and review at U.S. ports of entry as follows:

1. To ensure that it has appropriate data and procedures to prevent entry of unsafe products into the United States, we recommend that CPSC update agreements with CBP to clarify each agency’s roles and to resolve issues concerning access to advance shipment data; and

2. To improve its targeting decisions and build its risk-analysis capability,

a. work with CBP, as directed under CPSIA, through the planned targeting center for health and safety issues, to develop the capacity to analyze advance shipment data; and

b. link data CPSC gathers from surveillance activities and from international education and outreach activities to further target incoming shipments.

Third, to provide better long-term planning for its import safety work and to account for new authorities granted in CPSIA, we recommend that CPSC expeditiously update its agencywide Strategic Plan. In updating its Strategic Plan, we recommend that CPSC consider the impact of its enhanced surveillance of the marketplace and at U.S. ports, as discussed above, and determine whether the requisite analytical and laboratory staff are in place to support any increased activity that may occur at U.S. ports. Furthermore, we recommend that CPSC’s Strategic Plan include a comprehensive plan for the Office of International Programs and Intergovernmental Affairs to work with foreign governments in bilateral and multilateral environments to

1. educate foreign manufacturers about U.S. product safety standards and best practices, and

2. coordinate on the development of effective international frameworks for consumer product safety.

We provided a draft of this report to CPSC, CBP, USTR, and the Departments of Agriculture; Commerce; Health and Human Services; State; and Transportation; and to EU and Canadian officials for review and comment. CPSC, CBP, USTR, Agriculture, Health and Human Services, Transportation, and EU and Canadian officials provided technical comments, which we incorporated as appropriate. CPSC stated that it concurs with our recommendations.

We are sending copies of this report to interested congressional committees and the Chairman and commissioners of CPSC. We are also sending copies to the Secretaries of Agriculture, Commerce, Homeland Security, Health and Human Services, State, and Transportation, the United States Trade Representative, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
To determine the effectiveness of the Consumer Product Safety Commission’s (CPSC) import safety authorities, we examined CPSC data and interviewed CPSC officials to learn how the agency measures and assesses its own effectiveness. We also conducted extensive document reviews on consumer product safety generally and import safety specifically. We interviewed legal professionals and consumer and industry representatives to obtain their perspectives on the effectiveness of CPSC’s authorities. We also interviewed officials from other federal agencies involved in imported product safety, including U.S. Customs and Border Protection (CBP), the Office of the United States Trade Representative (USTR), and the Departments of State and Commerce. We visited a U.S. port of entry to observe CPSC import surveillance activities and CPSC’s interaction with staff from CBP. We also visited CPSC’s Product Testing Laboratory in Gaithersburg, Maryland, to observe laboratory testing that supports import safety activities.

To compare CPSC’s authorities with respect to the safety of imported products with the authorities of select federal agencies, we identified key federal agencies with import regulatory authority over other types of consumer goods. These agencies are the Food and Drug Administration (FDA), which oversees the safety of imported food, drugs, cosmetics, and medical devices; the United States Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS), which oversees the safety of imported egg products, meat, and poultry; USDA’s Animal and Plant Health Inspection Service (APHIS), which oversees the safety of imported plants and animals; and the National Highway Traffic Safety Administration (NHTSA), which oversees the safety of imported motor vehicles and equipment. We interviewed officials from each of these agencies, had them identify the primary statutory authorities for ensuring the safety of imports under their jurisdiction, and discussed various agency activities supporting import safety. For FDA, the primary statutory authority is the Federal Food, Drug, and Cosmetic Act. For USDA, the primary statutory authorities are the Federal Meat Inspection Act, the Poultry Products Inspection Act, the Egg Products Inspection Act, the Animal Health Protection Act, and the Plant Protection Act. For NHTSA, the primary statutory authority is the National Traffic and Motor Vehicle Safety Act, which has been codified, as amended, in Subtitle VI of Title 49 of the U.S. Code.

For our comparative analysis of the product safety authorities of foreign countries, we selected countries that are members of the International Consumer Product Safety Caucus, an international forum of product safety officials from member governments that facilitates the exchange of information on consumer product safety. Specifically, we selected Australia, Canada, China, the European Union (EU), and Japan. We developed a set of questions concerning consumer product safety authorities, practices, and procedures and worked through the U.S. Department of State to distribute the questions to appropriate contacts at U.S. embassies overseas and, in some cases, to foreign embassies in Washington, D.C. We interviewed desk officers for the selected countries from the Departments of State and Commerce in Washington, D.C., and relied on the Department of State to advise us on the recommended approach to take with each country.
We reviewed foreign laws and regulations, as well as other documents regarding product safety, provided by U.S. Embassy officials in the selected countries. We did not independently analyze the laws, regulations, or procedures of these countries; instead, we relied on third-party assessments of each country’s consumer product safety framework. We received written responses to our questions from, and conducted interviews with, the U.S. embassies in Australia, Canada, and China. U.S. embassy officials told us that their responses were coordinated with country officials knowledgeable about the respective country’s laws, regulations, and procedures. We received written responses to our questions from officials with the Embassy of Japan in Washington, D.C., and from consumer product safety officials in the EU. We also received information from the supreme audit institutions in these countries regarding their work on consumer product safety. We conducted interviews with consumer product safety officials from Canada, the EU, and Japan at a conference of the International Consumer Product Health and Safety Organization (ICPHSO) in Orlando, Florida. We reviewed publicly available documents on the Web sites of consumer product safety agencies in each country. We also reviewed and used documents provided by the Organization for Economic Cooperation and Development (OECD), including OECD member country responses to a 2008 questionnaire concerning consumer product safety. Department of State officials reviewed a draft of our country summaries and provided comments, which we incorporated.

To evaluate CPSC’s plans to prevent the entry of unsafe products in the future, we reviewed CPSC’s 2010 Performance Budget Request and compared CPSC’s planning efforts to guidance GAO has developed for implementation of the Government Performance and Results Act. We examined other CPSC data and interviewed CPSC officials to learn about CPSC’s future plans. We also interviewed legal professionals and consumer and industry representatives to obtain their perspectives on CPSC’s future plans.

We conducted this performance audit from September 2008 to August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Consumer Product Safety Improvement Act contained a mandate requiring that GAO make a recommendation as to whether foreign manufacturers should be required to consent to the jurisdiction of U.S. courts with respect to enforcement actions by the commission. We raised this issue in our interviews with officials at the Consumer Product Safety Commission (CPSC), the United States Department of Agriculture, the Food and Drug Administration, and the National Highway Traffic Safety Administration, as well as officials representing most of the international entities we selected for study—Australia, Canada, the European Union, and Japan. CPSC staff stated that, at this time, CPSC does not see the need for this requirement in order to effectively carry out its enforcement duties. CPSC has authority to institute administrative or civil enforcement actions against manufacturers, distributors, importers, and retailers.
CPSC may opt to negotiate a settlement or consent agreement rather than instituting an adjudicative proceeding in a federal or administrative court. Enforcing product safety standards on foreign manufacturers through an adjudicative proceeding could theoretically pose practical challenges. For example, one important prerequisite to maintaining an action against any defendant in a U.S. state, federal, or administrative court is that the court must have the ability to exert personal jurisdiction over that party. A court’s exercise of personal jurisdiction over a party must satisfy the fundamental notions of fairness mandated by the Due Process Clause of the Fifth or Fourteenth Amendment. In the case of a defendant physically located outside the territorial jurisdiction, such as a foreign manufacturer, personal jurisdiction can be established if sufficient contacts exist between the defendant and the territorial jurisdiction where the court sits and the defendant receives fair notice of the suit. Both requirements are fact-specific and must ultimately be decided by a court, if challenged by the defendant.

Requiring foreign manufacturers to consent to U.S. jurisdiction for purposes of CPSC enforcement actions could possibly expedite CPSC’s enforcement process in that it would eliminate a personal jurisdiction challenge to the enforcement action. However, as noted above, CPSC did not see a need to take such action at this time. CPSC noted that it can pursue each of the actors in the supply chain, from the manufacturer to the retailer, foreign or domestic. Despite the challenges that could theoretically arise in instituting an enforcement action against a foreign actor, to date, pursuing the domestic partners of such actors has satisfied CPSC’s enforcement objectives. Further, CPSC has the ability to settle enforcement actions with foreign parties. In the event a settlement cannot be reached voluntarily, any formal action against a foreign corporation must be served in accordance with the Convention on the Service Abroad of Judicial and Extrajudicial Documents in Civil or Commercial Matters (“Convention”). CPSC also has the authority to file suit against a foreign manufacturer for civil penalties for violations of certain provisions of its statutes, if it can effect service under the Convention or otherwise and establish that the court has jurisdiction. For example, the Department of Justice, on behalf of CPSC, recently filed suit against a foreign manufacturer in the U.S. District Court for the District of Minnesota; the suit was settled in July 2009.

CPSC staff we interviewed believe that expanded international education and outreach programs, as opposed to requiring foreign manufacturers to consent to jurisdiction, are preferable tools to effectively prevent the entry of unsafe consumer products, although they acknowledged that consent to jurisdiction or a requirement of a U.S. agent for service of process would be helpful. In addition, each of the U.S. federal agencies and international entities that we interviewed stated that they do not require consent by foreign manufacturers to local jurisdiction with respect to enforcement actions. Therefore, we are not recommending any action at this time.
We compared the Consumer Product Safety Commission’s (CPSC) key authorities for preventing the import of unsafe consumer products to those of three federal agencies—the Food and Drug Administration (FDA), the National Highway Traffic Safety Administration (NHTSA), and the United States Department of Agriculture (USDA). Table 1 describes some of the statutory and regulatory provisions of these agencies with respect to various regulatory activities, such as inspecting shipments that are presented for import into the United States. In acknowledgement of the ongoing efforts of the Interagency Working Group on Import Safety, in which these four agencies participate, we present these authorities according to the same principles that are the foundation of the group’s strategic framework—prevention and intervention. Although the group uses a third principle—response—we generally did not evaluate agencies’ authorities to respond after an unsafe import enters U.S. commerce because the scope of our work was limited to authorities to prevent their entry.

We compared consumer product safety authorities, practices, and procedures for Australia, Canada, China, the European Union (EU), Japan, and the United States. Figure 3 identifies the authorities and activities we compared, as well as any future plans for changing the organizations, structures, and/or mechanisms for consumer product safety in the respective countries. Following the figure is a more detailed discussion of the authorities, practices, and procedures. The information in this appendix is based on information we received through U.S. Embassies in these countries and foreign embassies in Washington, D.C., and on interviews with country officials. We reviewed documents regarding product safety provided by U.S. Embassy officials in the selected countries. We did not independently analyze the laws or procedures of these countries; instead, we relied on third-party assessments of each country’s consumer product safety framework.

Currently, the dominant issue concerning consumer product safety in Australia is the reorganization of its policy framework. On October 2, 2008, the Council of Australian Governments agreed to a new policy framework for implementation in 2010, comprising a single national consumer law and streamlined enforcement arrangements. This more centralized approach to consumer product safety replaces the current system in which the federal, state, and territory governments all share responsibility for consumer policy and enforcement.

Key organizations: Currently, responsibility for product safety regulation in Australia is shared between the federal, state, and territory governments. The Australian Treasury is the agency responsible for developing consumer policy, and the Australian Competition and Consumer Commission monitors and enforces compliance with product safety laws. In addition, state and territory governments each have their own fair trading agencies that enact and enforce state-based consumer product safety legislation. Such legislation is similar, but not identical, to federal government legislation, which sometimes leads to legislative inconsistencies between jurisdictions.

Resources: Australia has 30 policy staff working in the Product Safety section of the Australian Competition and Consumer Commission and about 150 enforcement staff across Australia. There is no separate international office for consumer product safety.
The total combined resources allocated by the Australian Commonwealth (federal), state, and territory governments to consumer product safety enforcement are estimated to be about A$5 million annually.

Consumer advocacy: Consumer advocacy groups have emerged in Australia over the last 40 years in response to growing interest in product safety issues. Groups such as the Australian Consumers Association supply information on safety issues to consumers and lobby federal, state, and territory governments to address the most serious product-related hazards.

Laws and regulations: Currently, Australia’s general consumer product safety system is based on the product safety provisions contained in the Trade Practices Act 1974 and on equivalent provisions in Fair Trading Acts in Australia’s eight states and territories. The administration and enforcement of these provisions, along with other nonregulatory activities conducted by the federal, state, and territory governments, are also part of the system. The Trade Practices Act 1974 contains general product safety provisions, as well as a product liability regime that enables consumers to seek a range of remedies, including damages for loss or damage caused by a defective product. The act provides the Australian government minister responsible for consumer affairs the power to intervene in markets to ensure product safety, including such activities as prescribing consumer product safety standards and consumer product information standards; declaring products unsafe and banning them; investigating products to determine whether they will or may cause injury and/or issuing a warning notice of the risk of using the product; ordering the compulsory recall of products; and obtaining information, documents, and other evidence related to the administration of the safety provisions of the Trade Practices Act. While in law the general regime applies to all consumer products, in effect this system provides the general legal safety net for products not otherwise protected by specific legislation that addresses more hazardous products.

Definition of safe products: Australia currently has no definition of safe or unsafe products. However, current law allows the Minister for Consumer Affairs to ban or compulsorily recall consumer products in cases where the products will or may cause injury.

Standards: The Trade Practices Act provides the Australian government minister responsible for consumer affairs with the power to establish mandatory standards for a product where it can be demonstrated that the product has the potential to cause injury. Standards Australia, an independent, nongovernmental organization, is the sole recognized body for standards development. Only a small number of imported consumer products are subject to mandatory standards, and over half of the standards apply to products that may pose a danger to children. State and territory legislation also allows for the issuance of mandatory standards. At times, the Australian Competition and Consumer Commission, the Treasury, and the state and territory fair trading representatives have participated in Standards Australia processes.

Detection, reporting, and removal of unsafe products: The current regulatory system relies on governments (federal, state, and territory) to identify and regulate specific product hazards.
According to Australia’s Ministerial Council on Consumer Affairs, the ability of these governments to address potential safety hazards across a great range of products is affected by limitations on their resources and by the time and effort required to implement, enforce, and review product-specific regulations. Currently, the vast majority of product recalls are undertaken voluntarily by businesses that have become aware of a safety problem concerning one of their products. The Trade Practices Act and many of the state and territory Fair Trading Acts contain provisions that allow governments to order compulsory product recalls when necessary.

Business responsibility: Businesses promote product safety through industry sector associations, which often undertake such self-regulatory activities as business education, the development of industry codes of conduct, and engagement with law enforcement and standards development bodies on enforcement and policy issues. Currently, there is no formal requirement for suppliers to monitor the safety of the products they sell once those products are released to the marketplace. Under the current regulatory system, businesses are required to report voluntary recalls to the Australian government minister responsible for consumer affairs and, in some other jurisdictions, to the Office of Fair Trading. The Ministerial Council on Consumer Affairs has proposed that suppliers be required to monitor the ongoing safety of the products they sell and report to the government any products that are under investigation for possible safety risks, have been associated with serious injury and death, or have been the subject of a successful product liability claim.

Policy enforcement and compliance: The Australian Competition and Consumer Commission is responsible for enforcing the Trade Practices Act’s product safety regime. To ensure that suppliers subject to mandatory standards and bans are responding appropriately, the commission may compel the provision of information, require evidence under oath, undertake random market surveys, enter premises, and seize documents. In situations where suppliers have failed to comply with mandatory standards or bans, the commission can seek orders in the Federal Court requiring such suppliers to recall the noncomplying products. Additionally, the commission may institute civil or criminal proceedings under the Trade Practices Act. State and territory governments have enforcement powers similar to those of the Competition and Consumer Commission under their own legislation. The relevant state and territory Fair Trading Acts contain criminal liability provisions similar to those in the Trade Practices Act.

On October 2, 2008, the Council of Australian Governments agreed to a new consumer policy framework as proposed by the Ministerial Council on Consumer Affairs. According to the Australian government, the new framework consists of a single national consumer law and streamlined enforcement arrangements. Australia’s states and territories are expected to adopt the new Trade Practices Act in its entirety in 2010, providing a harmonized product safety regime with greater federal government control. The council recognized that while Australia’s current consumer policy framework has strengths, it is in need of significant improvements to overcome existing inconsistencies, gaps, and duplication in Australia’s consumer legislation and its enforcement.
The reforms have the following three key elements: the development of a consumer law (called the Australian Consumer Law) to be applied both nationally and in each state and territory, which is based on the existing consumer protection provisions of the Trade Practices Act 1974 and which includes a new national provision regulating unfair contract terms, new enforcement powers and, where agreed, changes based on best practices in state and territory laws; the implementation of a new national product safety regulatory and enforcement framework as part of the national consumer law; and the development of enhanced enforcement cooperation and information-sharing mechanisms between national and state and territory regulatory agencies.

Changing consumer demands, new technologies, and the increasing complexity of global supply chains are the major influences behind Canada’s current efforts to modernize its regulatory tools for consumer product safety. According to the Canadian government, the authorities governing food, health, and consumer products in Canada derive from legislation developed in the 1950s and 1960s and, as a result, are out of step with modern realities and needs. For example, the Canadian government lacks sufficient authority to issue a mandatory recall of a health or consumer product if it poses a serious or imminent risk to health and safety, or to compel manufacturers to take steps to reduce the risk associated with a product. In addition, according to the Canadian government, fines and penalties are low compared with those of other countries. New legislation will update and strengthen Canada’s consumer product safety framework.

Key organizations: Currently, Health Canada regulates the import, sale, and advertisement of hazardous products or substances. Health Canada supports the development of safety standards and guidelines; enforces legislation by conducting investigations, inspections, seizures, and tests and conducts research on consumer products; provides importers, manufacturers, and distributors with hazard information and publishes product advisories, warnings, and recalls; and promotes safety and the responsible use of products. The Canada Border Services Agency is responsible for stopping goods at the border. The agency has a service agreement with Health Canada under which it seeks to prevent prohibited products from entering Canada and facilitates additional targeted inspections of these products, as well as shipments of products from companies with histories of poor compliance. In addition, the agency’s Single Window Initiative will give the department access to import and export data that will help it efficiently approve shipments of low-risk products from low-risk suppliers or, alternatively, tag suspicious ones before they have left their point of export. Other organizations include the Standards Council of Canada, the Canadian Standards Association, the Canadian General Standards Board, and the Underwriters’ Laboratories of Canada. Also, Canada’s provincial governments have jurisdiction over the adoption of the National Building Code, which includes certification requirements for electrical, gas, and plumbing products.

Resources: Canada’s consumer product safety agency, within Health Canada, consists of 130 employees who serve as laboratory, compliance, and policy development staff.

Consumer advocacy: Consumer advocacy groups in Canada are particularly concerned with consumer safety issues related to children’s products and food products.
Advocacy groups in Canada include the Canada Toy Testing Council, the Consumers’ Association of Canada, the Consumers Council of Canada, and the Public Interest Advocacy Centre. Laws and regulations: The Canadian government’s key legislation governing consumer product safety is the Hazardous Products Act. Part I of the act lists consumer products that are either restricted through regulation or outright prohibited from being advertised, sold, or imported into Canada. Approximately 30 products and product categories are regulated, and some 25 others are prohibited. All imported products are subject to the D Memoranda, which incorporate legislation, regulations, policies, and procedures used by the Canada Border Services Agency. Canada’s Chemical Management Plan also has an impact on consumer product safety. Definition of safe product: Canada has no specific definition of a safe product under the Hazardous Products Act. However, a new act, awaiting Canadian Senate approval as of June 2009, will include a definition of “danger to human health or safety,” which will support a general prohibition. Standards: Some consumer product standards are mandatory legal requirements, others are industry standards developed on a voluntary basis, and some are purely market driven as a particular technology becomes the industry standard. Approximately two-thirds of standards in Canada are voluntary. Federal and provincial legislation may impose mandatory standards for products, typically where health or safety issues are regarded as requiring regulation. Standards can also be written into the legislation itself, as is the case with certain specifications in toy regulations under the Hazardous Products Act. The Standards Council of Canada is the national coordinating body for the development of voluntary standards through the National Standards System. Detection, reporting, and removal of unsafe products: The current Canadian product safety regulatory system follows a reactive approach. When a product has been deemed to pose a risk to users—usually over a period of time, with reported injuries and/or deaths associated with the product’s use—a risk assessment is carried out. The regulatory process involves many steps, including consultation with public, industry, and technical experts. The end result is either that the product remains available for sale in the Canadian marketplace or that Health Canada imposes a legal ban on the product under the Hazardous Products Act. Business responsibility: Currently, there is no mandatory reporting requirement for businesses, and Health Canada relies largely on negotiating with suppliers to voluntarily recall or take other corrective measures to address a product that poses an unreasonable danger to the health or safety of consumers. The new Consumer Product Safety Act (discussed below) will give inspectors the ability to order a supplier to take corrective measures. Policy enforcement and compliance: Canadian authorities have the ability to seize products, prosecute violations through the criminal code, and impose civil money penalties. The maximum civil money penalty is $1 million per violation, although penalties of $25,000 are most common. On June 12, 2009, Canada’s new Consumer Product Safety Act was passed by the House of Commons and, as of the end of June 2009, was awaiting final action by the Canadian Senate. The Consumer Product Safety Act would replace Part I of the Hazardous Products Act and includes a new regulatory regime.
The act focuses on three key areas: Working to address problems before they happen: The legislation introduces a general prohibition against the manufacture, importation, advertisement, or sale of consumer products that pose an unreasonable danger to human health or safety. It strengthens compliance promotion and enforcement activities through increased fines of up to $5 million for some offenses and, where an offense is committed knowingly or recklessly, fines left to the discretion of the courts. Targeting the highest risk: The act provides the authority to require suppliers to conduct safety tests upon a minister’s orders and to provide the results where there are indications of a problem. The legislation will also require suppliers to notify Health Canada of serious incidents or defects and to provide detailed reports about the incidents. Rapid response: The act allows the Canadian government to take more immediate responsive action to protect the public when a problem occurs. It would authorize inspectors to order mandatory recalls and other corrective measures to address unsafe consumer products and would require suppliers to maintain accurate records to enable quick product tracking. In addition, to further improve the government’s ability to respond effectively, Health Canada would double the number of product safety inspectors. In 2007, in response to massive recalls of consumer products worldwide, the European Commission (EC) conducted an internal review of the European Union (EU) product safety framework. The review concluded that the community regulatory system (discussed below), including the General Product Safety Directive, was capable of providing European citizens with a high level of protection against unsafe consumer products, as long as the rules of the system were properly applied. The review identified areas for improvement and ways of strengthening the system. It stated that the adoption of the “Commission Decision” on magnets in toys, the revision of the “European Directive” on the safety of toys, and the issuance of rules called the “New Legislative Framework” for the marketing of goods would also raise the existing level of protection. It also identified some areas for further attention. These findings were subsequently referred to in an official EU report on the implementation of the relevant legislation. Key organizations: The EC Directorates General for Health and Consumers (consumer product safety), Enterprise and Industry (safety of regulated products), and Taxation and Customs Union (import safety) put forth legislation aimed at further ensuring the safety of products. The Directorate General for Health and Consumers, referred to as DG SANCO, is an EU branch that is somewhat equivalent to CPSC in driving consumer product safety matters both within Europe and internationally. However, the EU member states (currently 27 individual countries) are responsible for implementation and enforcement of EU legislation. Each member state has established its own structures for handling product safety, given its cultural history and industrial background. The European Commission coordinates the member states’ approaches and ensures their cooperation. Resources: Because the EC’s role in product safety enforcement differs greatly from CPSC’s, EU officials had difficulty providing us with useful figures on resources and stated that such figures risked being seriously misleading if the member states’ role was not taken into consideration.
They did not have conclusive data for the member states. Other commission departments also perform some functions related to consumer product safety, such as reviewing European legislation in certain sectors relevant to consumer safety, which would also need to be counted among “central function” resources. While the “Product and Service Safety” unit in DG SANCO is generally comparable to CPSC in terms of policy function, it does not handle actual implementation and enforcement at the level of the individual member states. Many staff from the Product and Service Safety unit play a role in the “international” (versus “European”) area but also have other responsibilities. DG SANCO determines who represents the EU at the international level based on subject-matter expertise in product safety. Laws and regulations: The General Product Safety Directive sets out the basic consumer product safety requirements and defines a “consumer product,” and Article 2(b) and (c) of the directive define “safe” and “dangerous” products. A “dangerous product” is any product that does not meet the definition of a “safe” product. Definition of safe product: The directive defines a “safe product” as any product which—under normal or reasonably foreseeable conditions of use including duration and, where applicable, putting into service, installation, and maintenance requirements—does not present any risk or only the minimum risks compatible with the product’s use. Such minimum risks are considered acceptable and consistent with a high level of protection for the safety and health of persons, taking into account the following points in particular: the characteristics of the product, including its composition, packaging, instructions for assembly and, where applicable, instructions for installation and maintenance; the effect on other products, where it is reasonably foreseeable that it will be used with other products; the presentation of the product, the labeling, any warnings and instructions for its use and disposal, and any other indication or information regarding the product; and the categories of consumers at risk when using the product, in particular children and the elderly. In addition, the feasibility of obtaining higher levels of safety or the availability of other products presenting a lesser degree of risk shall not constitute grounds for considering a product to be “dangerous.” In response to inconsistencies and identified weaknesses in the EU legislative framework for product safety, the EU issued additional regulations on the marketing of products within the member states. Adopted in July 2008, this new framework aims to strengthen accreditation and market surveillance across member states and to remedy existing weaknesses of the legislative framework. It will apply from 2010. Guidance will be issued on its relation to the General Product Safety Directive. Standards: The EU product safety system is based on voluntary standards. However, products that are manufactured to harmonized standards developed by recognized European standardization bodies (CEN, CENELEC, and ETSI) on the basis of a mandate (formal request) by the EC and ultimately referenced in the Official Journal of the EU benefit from a presumption of conformity with the safety requirements of the relevant legislative framework that are covered by those standards. The EU considers its safety requirements to be the backbone of its system.
The safety requirements are expressed in sectoral (industry- or product-specific) directives, conformity assessment measures, and, in certain sectors, the availability of European standards. The General Product Safety Directive fills in the gaps when no sectoral directive exists. For example, the EU has specific sectoral directives on toys and on some industrial products, such as electrical products, machinery, and pressure equipment. Businesses must carry out conformity and safety assessments of their products in accordance with the General Product Safety Directive and/or specific legislation applicable to their products. Businesses are required to certify that their products are safe, as defined under the directive. For some products, self-declaration is sufficient; other products require third-party verification. Detection, reporting, and the removal of unsafe products: The EU and its member states have the authority to require mandatory recalls, and companies can negotiate voluntary recalls as necessary, similar to U.S. recalls. Under the General Product Safety Directive and applicable product-specific legislation, national authorities can ban the marketing of a dangerous product, order or organize its actual and immediate withdrawal, alert consumers to the risks it presents, order or coordinate its recall from consumers, and order or organize its destruction in suitable conditions. Businesses are required to report to the authorities if they detect that they have a dangerous product. Businesses also must remove unsafe products from the market and are under a legal obligation to stop distribution of such products and to withdraw and/or recall them. Under certain criteria, businesses are required to repair or replace products and/or refund consumers the cost. Policy enforcement and compliance: DG SANCO has no inspection or jurisdictional authority. Member-state authorities can review the technical product files that all businesses are required to maintain to certify general conformity with product standards. Within the common market, the regulations apply to the importer, distributor, manufacturer, and retailer. Imported products must meet the same requirements as domestic products. Member states must have authorities to carry out appropriate sampling and safety testing of domestically manufactured or imported products and to follow up on consumer complaints. According to the EC report on its consumer product safety framework, the EU considers its customs controls to be reactive—no proactive obligation exists for the customs authorities to carry out controls for unsafe products at the EU borders on their own initiative. In response, the EU issued a “Council Regulation” to provide customs authorities with the legal basis, and equally applicable and comparable procedures in all member states, to suspend, for no more than 72 hours, the release of products that they suspect pose a serious risk to health and safety. Consumer advocacy: A unique feature of the EU approach to consumer product safety is the funding support the EU provides to consumer representation groups through grants. As part of this approach, consumer groups help define priority and emerging issues that require additional research and contribute to standards development, and they can apply for grants to survey and educate consumers on these emerging issues.
Japan’s Consumer Product Safety Law regulates the manufacture and sale of specific products to prevent harm and injury to consumers, to secure the safety of consumer products, and to promote voluntary product safety activities by private businesses for the benefit of general consumers. Japan’s consumer product safety policies and procedures have been developed, implemented, and enforced by a wide variety of government agencies and organizations. As with some of the other countries we reviewed, Japan recently reorganized its approach and mechanisms for consumer product safety and food safety with the creation of the new Consumer Affairs Agency, as follows: Key organizations: The Ministry of Economy, Trade, and Industry (METI) is the key government organization responsible for developing consumer product safety policy in Japan. The National Institute of Technology and Evaluation conducts on-the-spot inspections of enterprises in accordance with METI’s instructions and analyzes the causes of accidents based on accident information to prevent future problems. The Consumer Policy Council, the Quality-of-Life Policy Council, and the National Consumer Affairs Center of Japan also have roles in advising and supporting the government on consumer-related issues in Japan. The Japan Cabinet Office carries out general coordination of basic consumer policies among related ministries and agencies. Resources: According to Japanese officials, 33 staff members in the Product Safety Division of METI are devoted to consumer product safety. However, other staff located in other agencies, as indicated above, also work on these issues. Consumer advocacy: According to U.S. Embassy officials in Tokyo, consumer product safety issues receive considerable attention in the local press and the Japanese government, and consumers view product safety as a major priority. A survey conducted by the Cabinet Office in 2005 found over 2,800 consumer groups active in Japan. Seven nonprofit consumer groups have been officially accredited by the Japanese government, including Consumer Organization of Japan, Kansai Consumers Support Organization, Japan Association of Consumer Affairs Specialists, Kyoto Consumer Contract Network, Hyogo Consumers Net, and Saitama Organization to Abolish Damage to Consumers. Laws and regulations: Domestic and imported consumer products are regulated by the following laws in Japan: the Consumer Product Safety Law, the Household Goods Quality Labeling Law, the Law for the Control of Household Goods Containing Harmful Substances, the Chemical Substances Control Law, and the Electrical Appliance and Material Safety Law. Other laws relating to consumer protection include the Product Liability Act, the Consumer Contract Act, the Consumer Basic Act, and the Whistleblower Protection Act. Under Japan’s Consumer Product Safety Law, METI collects and makes public information on serious accidents involving consumer (household) products. The law requires manufacturers, marketers, and/or importers to report such accidents to METI, which in turn informs the public. Manufacturers, marketers, and/or importers are also required to inform the public of the product safety issues involved. Major national consumer centers compile complaints from consumers, including product safety complaints. Depending on the product type, relevant ministries maintain regulations covering consumer products, including ensuring that products meet appropriate standards and labeling and certification requirements.
Some ministries may require foreign manufacturers to register foreign manufacturing sites with the local government. Definition of safe product: Article 1 of the Consumer Product Safety Law defines products as unsafe if they pose a threat to consumers’ lives or health. In addition, Article 2, Section 2, of the Product Liability Act provides that a defective product is to be considered unsafe, taking into account the nature of the product, the ordinarily foreseeable use of the product, the time when the manufacturer delivered the product, and other circumstances. Standards: Product requirements in Japan fall into two categories: technical regulations (or mandatory standards) and nonmandatory voluntary standards. Compliance with regulations and standards is also governed by a certification system in which inspection results determine whether or not approval (certification/quality mark) is granted. Affixing a mandatory or voluntary quality mark requires prior product type approval and possibly factory inspections for quality control assessment. Certain regulated products must bear the appropriate mandatory mark when shipped to Japan in order to clear Japanese Customs. Safety standards are specific to types of products and fall under the jurisdiction of the relevant ministry. Policy enforcement and compliance: Generally, the importer of record is responsible for ensuring the quality and safety of imported consumer products in Japan. If a product is deemed harmful or defective, the importer is responsible for working with local authorities and consumer outlets to take necessary measures. The government may take action against the importer, such as conducting on-site inspections of business offices, plants, stores, and/or warehouses; ordering mandatory product recalls; suspending business operations for a certain period of time; or imposing penalties, including fines and/or imprisonment. On May 29, 2009, the Japanese Diet approved bills establishing the Consumer Affairs Agency, which will administer consumer protection issues in Japan. The agency will begin operating in the fall with about 200 staff from the Cabinet Office; the Fair Trade Commission; the Ministry of Economy, Trade and Industry; the Ministry of Agriculture, Forestry and Fisheries; and the Ministry of Health, Labor and Welfare. In addition, 60 temporary staff will be appointed from among attorneys, consumer affairs consultants, and academics. The Consumer Affairs Agency will be placed under the Cabinet Office, and Consumer Affairs Centers will be established nationwide to provide information about product-related accidents and complaints. In addition, the Consumer Affairs Committee will be established at the same time to monitor the Consumer Affairs Agency. The committee—an independent body—will have authority to request information and reports from ministries and make recommendations for crisis management, through the Prime Minister, in the event of consumer incidents. China imports relatively few manufactured products compared with its exports. Consumer products that are imported tend to come from more developed and more regulated markets in Europe, Japan, and the United States. Therefore, China faces proportionally fewer challenges with regard to imported products. Further, the costs of testing products and certifying them for sale, as well as the costs of shipping and tariffs, make it difficult for imported products to compete with Chinese domestic products in similar product categories.
Recent high-profile cases in China involving food and product safety have affected China’s export reputation. These cases appear to have prompted the government to raise the priority of food and product safety. Laws and regulations: China’s fundamental law on product safety is the Law of the People’s Republic of China on Product Quality, which was written in 1993 and revised in 2000. This law preceded the establishment of China’s leading quality and safety agency, the General Administration for Quality Supervision, Inspection, and Quarantine (AQSIQ). The law sets a very low standard for corporate liability by defining the amount of damages and fines that can be collected through lawsuits. It also applies some penalties to entities that can claim ignorance of the law. The law makes no reference to foreign manufactures or domestic goods. China’s main regulatory code pertaining to imported consumer goods is found in AQSIQ’s Regulations for Compulsory Product Certification. Issued in 2001, this regulation created a uniform standard for imported and domestically manufactured goods, as well as a certification mark known as China Compulsory Certification (CCC) or 3C. The regulation applies to a catalogue of products that must be approved by AQSIQ prior to general sale in China. The standards for individual products are defined separately by national and industry standards. CCC testing is conducted mostly by enterprises owned in whole or in part by AQSIQ; foreign testing companies have not been approved to conduct CCC tests, but neither have they been explicitly excluded. The Law of the People’s Republic of China on Import and Export Commodity Inspection is the main law pertaining to import and export inspections. The law dates to 1989 and provides for fee-based inspections at port, as well as preshipment inspections that may be conducted in a foreign country. The law empowers the provincial-level organizations of AQSIQ—known as Customs, Inspection and Quarantine (CIQ)—to conduct inspections. The law also provides for an appeal process in case the importer or exporter disagrees with the result of an inspection. The law sets penalties for evading the inspection process, trading in counterfeit goods, and corrupt practices. However, recent problems with food and product safety demonstrate that the local inspection offices responsible for both food and product testing may lack robust technical abilities and the capacity to deal with the current volume of trade. Key organizations: AQSIQ is China’s leading quality and safety agency. Standards are issued by the Standards Administration of China, a division of AQSIQ, which also represents China in international standards organizations. The Certification and Accreditation Administration of the People’s Republic of China (CNCA), which is also technically part of AQSIQ, is responsible for certifying and accrediting functions related to CCC tests. Numerous testing companies, almost all partially owned by AQSIQ, carry out testing for the CCC system. A catalogue of the specific products required to obtain CCC approval can be found on CNCA’s Web site. In 2009, CNCA revoked a provision that had allowed the limited importation of products not certified with a CCC mark. Unlike the process for similar approvals in the United States, foreign manufacturers applying for CCC approval must pay all costs, including the travel and lodging expenses of Chinese inspectors traveling to the foreign country for factory inspections.
Routine inspections, factory checks, recalls, and other regulatory actions are carried out mostly by AQSIQ’s provincial representative offices, CIQ. China also has a product recall entity known as the Defective Product Administrative Center, a part of AQSIQ, which is responsible for the oversight of product recalls. The center is technically responsible for consumer products, but its primary focus is on automobile safety. The local CIQ offices are responsible for overseeing recalls at the local level. The national system for product recalls is still undeveloped. Standards: China has a comprehensive body of regulatory standards that fall into four categories: National Standards, Industry Standards, Local Standards, and Enterprise Standards. Each category includes aspects of product safety. The governing law for these categories dates to 1988 and applies to a range of industries and fields, not just consumer products. Some of these regulatory standards are more clearly spelled out in product-specific codes published on the CNCA Web site. Under China’s system, there are both mandatory and voluntary standards. Mandatory standards are those intended to safeguard human health, personal property, and safety, and they are enforced by laws and administrative regulations. All others are voluntary standards. Similarly, under the CCC product certification system, some products are not subject to compulsory certification but can be certified on a voluntary basis. The voluntary process is known as the China Quality Certification Center’s Voluntary Product Certification System. It applies to products that are not included in an itemized catalogue of products subject to mandatory certification. The system is also an AQSIQ function, and this certification is limited to Chinese companies. Policy enforcement and compliance: CIQ offices are responsible for local enforcement actions. Local CIQ offices have responsibility for inspecting and certifying factories, issuing and revoking manufacturing licenses, issuing or revoking export permits, conducting preshipment export inspections, and clearing or refusing goods for importation into China. Local CIQ offices have their own laboratory facilities but often call upon AQSIQ headquarters for policy guidance and technical support. Local CIQ offices also have the authority to initiate a mandatory recall, although China’s recall system is still developing. As of July 2009, China’s recall provisions emphasized the cessation of manufacturing and sale of dangerous goods and included no methodical system for the physical collection of goods already sold. According to U.S. Embassy officials in Beijing, draft regulations in China on product recalls represent a positive step for improved product safety. Authority to inspect foreign manufacturing plants: China unambiguously holds and maintains the authority to inspect foreign-owned plants operating in China, and such inspections are conducted by AQSIQ and the Ministry of Health. However, for any plant operating only for the purpose of export manufacturing (i.e., no sales in China), AQSIQ takes a less rigorous approach to regulation. Such plants are not required to undergo the same assessments as plants manufacturing for local consumption. U.S. Embassy officials in Beijing stated that they are unaware of any memorandum of understanding that AQSIQ may have to facilitate foreign inspections. However, the CCC system requires factory inspections to be conducted by a certification body that has been accredited by CNCA.
Only one non-Chinese certification and testing body, Underwriters Laboratories, has been accredited to conduct follow-up factory inspections in the United States for CCC approval. CNCA has similar arrangements with other countries. In addition to the individual named above, Debra Johnson, Assistant Director; Meghana Acharya; Philip Curtin; Elizabeth Guran; Ronald Ito; Marc Molino; Omyra Ramsingh; Linda Rego; Jennifer Schwartz; Jay Smale; and Kathryn Supinski made key contributions to this report. Important contributions also were made by Loren Yager, Director, International Affairs and Trade; Richard Stana, Director, Homeland Security and Justice; and Christine Broderick, Assistant Director, International Affairs and Trade.
The growing volume of consumer products imported into the United States has strained the resources of the Consumer Product Safety Commission (CPSC), challenging the agency to find new ways to ensure the safety of these products. The Consumer Product Safety Improvement Act (CPSIA) mandated that GAO assess the effectiveness of CPSC's authorities over imported products. GAO's objectives were to (1) determine what is known about CPSC's effectiveness in using these authorities, (2) compare CPSC's authorities with those of selected U.S. agencies and international entities, and (3) evaluate CPSC's plans to prevent the entry of unsafe consumer products. To address these objectives, GAO analyzed CPSC and other agencies' and entities' authorities, reviewed literature on consumer product safety, and compared CPSC's planning efforts with criteria for effective planning practices. GAO found broad consensus that CPSC's authorities over imported consumer products have the potential to be effective. However, CPSC has made limited progress in measuring the effectiveness of its authorities, and CPSC's ability to implement these authorities has been constrained by competing priorities and limited resources, as well as by delays in implementing key provisions of CPSIA. CPSC's presence at U.S. ports is limited and, in order to identify potentially unsafe products, it must work closely with U.S. Customs and Border Protection (CBP), which faces pressure to quickly move shipments into commerce. CPSC does not have access to key CBP import data it could use to target incoming shipments for inspection, and it has not updated its agreements with CBP to clarify each agency's roles and responsibilities. CPSC's activities at U.S. ports could be strengthened by better targeting incoming shipments for inspection and by improving CPSC's coordination with CBP. Otherwise, CPSC may not be able to carry out key inspection activities efficiently or to effectively leverage its enforcement priorities with CBP. Select federal agencies and foreign governments provide lessons for strengthening CPSC's implementation of its authorities, particularly with respect to border surveillance and information sharing among countries. Both USDA and FDA have more robust border surveillance activities than CPSC because they obtain more data on incoming shipments, have more staff working at U.S. ports, use more developed programs to target risks, and use information technology systems that are integrated with other agency-based and CBP systems to effectively leverage their enforcement priorities with CBP. Other agencies have found that timely CBP import data integrated with other agency surveillance data is useful in screening incoming shipments for potential safety violations. In addition, officials at FDA and USDA have found that efforts to educate overseas industries and governments on U.S. safety standards could reduce the number of unsafe products that reach U.S. consumers. GAO also found broad consensus that continued coordination and information sharing among governments and multilateral organizations can improve the effectiveness of product safety frameworks. CPSC has increased its efforts to coordinate with these other entities, particularly China, but lacks a comprehensive plan for international engagement. CPSC has established annual plans, but lacks a long-term plan with key goals to prevent the entry of unsafe products. CPSC has not yet updated its agencywide Strategic Plan to reflect new authorities granted in CPSIA. 
This may inhibit CPSC's ability to appropriately allocate any potential increases in agency resources or to address the safety of imported products through international means. An updated Strategic Plan may also help to ensure that CPSC has the requisite compliance and analytical staff to support the full range of CPSC's international efforts.
Prior to the redesign initiated in 2011, TAP consisted of four core components: (1) pre-separation counseling, (2) an employment workshop, (3) an optional briefing on federal veteran benefits, and (4) the Disabled Transition Assistance Program. Pre-separation counseling, which includes VA benefits information, was already required by law before the VOW Act. A number of revisions, new requirements, and components were added to TAP by the VOW Act and the VEI Task Force. For example, the VOW Act mandates that DOD require transitioning servicemembers to participate in an employment workshop, with some exceptions. Included among the VEI Task Force’s changes to the program are: an extended curriculum with segments on translating military skills to civilian job requirements, financial planning, and individual counseling and assessment, with the goal of each servicemember developing an Individual Transition Plan; an updated employment workshop and briefings on federal veteran benefits divided into two sessions (the VA benefits I and II briefings), which embed information relevant for those who have or think they have a service-connected disability rather than presenting it in a separate Disabled Transition Assistance Program component; a series of 2-day, career-specific tracks that focus on (1) pursuing college education, (2) entering a technical skills training program, or (3) starting a business, with the track that a servicemember chooses depending on his or her post-separation goals; a capstone event during which servicemembers are to demonstrate—and military service commanders verify—that they have met required career-readiness standards, which pertain to employment, education, and technical training, depending on the servicemember’s post-separation goals; and a referral process—called a “warm handover”—to connect servicemembers who do not meet the career readiness standards with the appropriate partner agency (VA or DOL) to provide continued support and services as veterans. Additional details about the previous TAP compared to the revamped program are shown in table 1. Figure 1 shows each of the components in the order a servicemember would currently participate in the redesigned program. The services generally implement these core components at the installation level and may include other components in addition to TAP. Rather than TAP continuing to be an end-of-career event, DOD plans to shift to a Military Life Cycle Transition Model after October 2014. This model is intended to integrate transition preparation—counseling, assessments, and access to resources to build skills or credentials—throughout the course of a servicemember’s military career. The VEI Task Force oversaw the design and development of the revised TAP and was led by DOD and VA. Other agencies participating on the VEI Task Force include DOL, the Department of Education, the Office of Management and Budget, the Office of Personnel Management, and the Small Business Administration (SBA). Members of the White House staff and senior representatives from each service also participated. Each agency is responsible for different activities. For example, DOD provides guidance and monitors compliance with TAP provisions, and DOL facilitates the employment workshop. Also, each service coordinates with the agencies on scheduling TAP workshops and briefings. The respective roles and responsibilities are spelled out in a memorandum of understanding (MOU), which the agencies signed in January 2014.
A new TAP governance structure, established in October 2013, steers implementation of TAP and will modify the program, as needed, through 2016. The new governance structure is co-led by DOD in 2014 and co-chaired by VA and DOL. With the drawdown from the wars in Iraq and Afghanistan and as the military makes ongoing and planned force structure reductions, many servicemembers are projected to depart the military through 2017. TAP is one of a number of federal programs to assist transitioning servicemembers and veterans in developing job skills and securing civilian employment. TAP serves as a gateway to additional information and services that are available, either while servicemembers are on active duty or after they have separated from the military. For example, the DOL employment workshop highlights many of the skills and techniques helpful in obtaining employment. After completing the workshop, servicemembers can benefit further by returning to the TAP offices on installations, using services at local VA and DOL offices, or using websites introduced to participants during TAP training. Once servicemembers separate from the military, a number of other federal programs offer assistance. These programs include five employment and training programs overseen by DOL and VA that target veterans. For example, DOL provides grants to states to support state workforce agency staff positions, Disabled Veterans’ Outreach Program Specialists and Local Veterans Employment Representatives, who serve veterans through the Jobs for Veterans State Grant Program. In addition, VA provides employment services to certain veterans with disabilities through the Vocational Rehabilitation and Employment Program. As of 2012, the program was offered in 56 regional offices and 169 satellite offices. In addition, DOD helps National Guard and Reserve members obtain civilian employment through its operation of other programs, including the Yellow Ribbon Reintegration Program and Employer Support of the Guard and Reserve. For example, the Yellow Ribbon Reintegration Program serves National Guard and Reserve members and their families by hosting events that provide information on employment opportunities, health care, education/training opportunities, finances, and legal benefits. As of December 2, 2013, DOD, DOL, and VA have implemented changes to the program’s key components at most of the 206 military installations that provide TAP. However, a few program components have not yet been fully implemented by the agencies. For example, the agencies are still using the previous version of the VA benefits briefing at a number of locations and are offering the career technical training track at fewer than half of the TAP locations. Although some agencies had planned to fully implement the revamped TAP at all locations by October 1, 2013, they missed their targeted time frame. According to the revised plan, agencies now expect to implement virtually all components by the end of March 2014, with full implementation planned by June 2014. The planned start dates and the status of agencies’ efforts to implement the key components at the 206 TAP locations are summarized in figure 2. Pre-separation counseling and DOD Core Curriculum: According to DOD officials, eligible servicemembers were participating in pre-separation counseling and the modules in the new DOD core curriculum by November 21, 2012, at all TAP locations.
As noted previously, part of the DOD core curriculum includes a module on translating military skills to civilian job requirements. Current military crosswalks map the majority of military occupations to a single civilian occupation. Based on those crosswalks and supplemented by analyses from Army and Navy credentialing websites, tools like DOL’s My Next Move and My Next Move for Veterans can suggest multiple occupations for career exploration. To enhance the existing electronic tools used for the crosswalk, DOL contracted with an organization to identify equivalencies between military and civilian jobs, as required by the VOW Act. According to DOL officials, the results of the military equivalencies study will enhance the military-civilian crosswalk by enabling a mapping of a single military occupation into multiple civilian occupations based on an analysis of embedded skill sets in addition to the similarity of tasks performed. This will be done for a selected set of military occupations that represent 59 percent of current active duty servicemembers. DOL officials said they plan to update the electronic tools beginning in 2014. DOL Employment Workshop: Effective November 21, 2012, DOD was generally required to mandate participation in the program, with some exceptions. According to DOD and DOL officials, the agencies met this participation requirement in the VOW Act by offering eligible servicemembers, with some exceptions, the previous or revised version of the DOL employment workshops at domestic and overseas locations by November 21, 2012. Moreover, as required by the act, the DOL employment workshops are being conducted by contractors at all TAP locations. As of the spring of 2013, the revised workshop was being offered at all TAP locations, according to DOD and DOL officials. VA Benefits Briefing: All departing servicemembers are generally required to be provided the VA benefits briefing. Officials from DOD and VA stated that this participation requirement was met by offering servicemembers the previous version of the VA benefits briefing while implementing a phased rollout of the revised VA benefits I and II briefings. As of December 2, 2013, the revised VA benefits I and II briefings were unavailable at about 8 percent of TAP locations. VA officials said that the revised VA benefits briefings are offered at all domestic locations. However, the revised briefings are not currently offered in all overseas locations, such as Air Force locations in Germany, South Korea, and Japan. Although VA planned to implement the revised benefits briefings by October 1, 2013, full implementation is now expected no later than March 31, 2014, according to VA officials. DOD and VA officials said that VA faced challenges, such as training enough personnel to facilitate the revised briefings at these overseas locations during the extended furlough, as well as delays in negotiating agreements with foreign nations hosting U.S. military forces. Career-specific Tracks: Since late 2012, the agencies had planned to fully implement these tracks by October 2013. However, as of December 2, 2013, the tracks were not fully implemented, particularly at overseas locations: Entrepreneurial Track: SBA offers the entrepreneurial track at least quarterly at all domestic locations, but this meets only 12 percent of the estimated total demand for the track, according to SBA officials. Moreover, the track is not offered at the majority of overseas locations.
After reviewing a draft of this report, SBA officials stated that the track can be extended to meet domestic and overseas demand given additional funding recently provided in the fiscal year 2014 budget. Full implementation at all TAP locations is expected by the end of fiscal year 2014, according to SBA officials. Career Technical Training Track: VA is offering the career technical training track at about 43 percent of the TAP locations. In particular, many overseas locations lack this track. According to VA officials, the delays were mainly due to efforts to incorporate substantial feedback received from participants during the pilot phase. VA completely revised the curriculum and piloted it in the summer of 2013. As a result, VA delayed training the facilitators until the curriculum was finalized and began its rollout in September 2013. According to the services’ implementation plans, full implementation of this track is expected by April 30, 2014. Higher Education Track: DOD is offering the higher education track at about 72 percent of TAP locations. According to DOD officials, they plan to offer the track at all locations, but sequestration and other resource constraints, as well as delays in hiring and training facilitators, slowed the rollout of this track. Full implementation is expected by April 9, 2014, according to the services’ implementation plans. For servicemembers lacking access to the entrepreneurial track, content from the track is available online. The online version, however, is a poor substitute for the classroom-based track, according to SBA officials. Unlike the technical training and higher education tracks, the entrepreneurial track does not have associated readiness standards. According to SBA officials, the readiness standards for the other tracks were developed prior to the inclusion of this track in TAP. While they considered creating associated standards, SBA officials decided that standards were not needed because the track’s main purpose is to help participating servicemembers determine whether starting a business in general, or their specific idea for a business, is right for them. If the higher education or career technical training tracks are not available for servicemembers who wish to attend an institution of higher education or seek technical training, other options are available to meet the readiness standards associated with the tracks. For servicemembers in remote locations or who are rapidly separating from the military, one option is to access a virtual TAP curriculum, according to a draft DOD policy under consideration. However, classroom instruction is the preferred method. Another option for servicemembers located overseas is to take the track at a domestic location when they return to the United States, according to DOD officials. Moreover, according to DOD policy, a servicemember could meet the standards outside of the track by completing the required documents or activities associated with those standards, such as by completing a comparison of academic institution choices and a college or university application. While servicemembers may meet the readiness standards without taking the tracks, they would miss instruction, for example, on resources to cope with challenges transitioning into college as nontraditional students (those who are older or have family obligations).
Capstone Event: The services, working with DOD, DOL, and VA, are finalizing implementation of a “capstone event”—a final check to verify that servicemembers have met TAP requirements—and a referral process, known as a “warm handover,” for those servicemembers who do not meet the requirements. Although as of December 2013 the services were holding capstone events at most locations, according to the services’ implementation plans, the details for how the capstone event and warm handover will work are still being finalized by DOD, DOL, VA, and the services. Specifically, the agencies and services must clarify the roles and responsibilities of servicemembers, TAP staff, commanders, and the partner agencies as well as develop policies and guidance that underpin this effort, according to a December 2013 DOD report about implementation of the capstone event at TAP locations. In addition to implementing the key components of TAP at military installations, the services continue to update physical infrastructure at a number of locations to provide an optimal experience for servicemembers participating in TAP components. General standards for infrastructure are set forth in DOD policy. Approximately 6 percent of locations lack computer availability, and according to officials from DOD, the Navy, and the Marine Corps, other infrastructure is still being put in place at a number of their domestic and overseas locations. DOD officials said they expect all TAP locations to meet infrastructure standards by March 31, 2014. Military Life Cycle Model: According to the agencies’ implementation plan, the envisioned end state for the redesigned TAP involves integrating transition preparation throughout the course of a servicemember’s career. The agencies refer to this end state as the military life cycle transition model. The services plan to fully implement the military life cycle model by October 2014. Under this new transition model, for example, the Army intends for all new servicemembers to receive counseling and initiate an individual development plan regarding their military career goals within 30 days of reporting to their first permanent duty station. Overall, DOD intends such counseling and planning to continue throughout a servicemember’s military career at various “touchpoints,” such as when they are promoted. For example, servicemembers will be expected to create plans for achieving their military and post-military goals for employment, education, or starting their own business. Further, servicemembers are to be made aware of the career readiness standards they must meet long before they separate. Agencies have efforts underway to address three of the five key elements associated with effective program implementation and evaluation for TAP. However, agencies’ efforts to address the remaining two elements are mixed. Effective training programs have systems to track data, and we found that DOD and the services have systems to collect and report on attendance rates and are taking steps to improve the reliability of these data. DOD developed a DOD-wide system to track attendance for all TAP components, which has been in use since October 2012. This system is populated with attendance data from the services.
The Army and Air Force each input TAP attendance data into their own systems, which they then transmit to the DOD system, while the Navy and the Marines input TAP attendance data directly into the DOD system because they do not have service-level systems that allow for tracking individual TAP attendance. DOD and the services are taking steps to improve the reliability of these data. According to DOD officials, accuracy will improve now that the capstone event is in place because servicemember attendance at each of the three required TAP training components will be verified at this event. According to DOD officials, the services are taking other steps to improve the accuracy of TAP attendance data as well. For example, the Navy plans to replace the paper data collection systems that exist at some installations with electronic attendance tracking, which may be less prone to data-entry error. DOD tracks attendance to measure progress toward its performance goal for the percentage of eligible servicemembers attending the mandatory components each year, beginning in fiscal year 2013. In February 2014, DOD reduced the fiscal year 2014 performance goal from 90 percent to 85 percent. According to DOD officials, the expected attendance rate is less than 100 percent because some servicemembers do not complete all required TAP training for various reasons. For example, some may separate quickly, such as for medical or disciplinary reasons, and not have an opportunity to fully take TAP. For fiscal year 2013, DOD expects an attendance rate of about 75 percent, primarily because servicemembers who transitioned in that year may have taken several TAP components in fiscal year 2012, prior to when DOD and the services began tracking these rates. DOD anticipates meeting its goal in fiscal year 2014. Effective training programs incorporate feedback into training efforts, and each agency involved in TAP plans to use this type of information, as well as monitor their respective components, to improve TAP. To measure participants’ reactions to each TAP component they attend, including the career-specific tracks, DOD launched a participant assessment in April 2013. The assessment seeks feedback on the quality of the instruction, content, and facilities. It also measures participants’ knowledge of the information presented in each component of the training. According to agency officials, the agencies plan to use the results from the assessment to monitor the performance and outcomes for the redesigned TAP, assess trends, determine areas of improvement, and modify TAP components as appropriate. DOD is leading this effort. To help facilitate this effort, DOD officials said that they plan to analyze feedback at the DOD, service, and installation levels and share this information with partner agencies and the four services in a quarterly report. For example, the participant assessment collects demographic information that will allow for a comparison of the responses of servicemembers in different military branches, locations, and groups, such as enlisted personnel and officers. DOD performed a similar analysis in mid-2013 for responses to two questions in the participant assessment. This analysis provided DOD with information that could be used to understand how the program’s usefulness is perceived by different populations. Moreover, according to DOD’s TAP evaluation plan, DOD plans to convene teams with the partner agencies on an as-needed basis to make recommendations to address challenges, concerns, and areas for improvement.
According to VA and DOL officials, they also plan to analyze the feedback to improve their respective components. The agencies also monitor their respective TAP components through site visits or plan to do so. According to DOD’s monitoring plan, DOD visits installations to monitor the TAP components it is responsible for as well as overall TAP implementation. According to DOL officials, their staff intends to perform annual monitoring visits to each domestic installation where the employment workshop is provided, but not to overseas installations. They said that at overseas locations they plan to rely on feedback from participant assessments, the company hired to facilitate the workshops, and the military staff at those DOD sites who will monitor the facilitators’ instruction and whether they are engaging and knowledgeable. VA officials said that they have not completed their monitoring plan for the VA benefits I and II briefings and the career technical training track. However, they expect to have a quality assurance plan completed by March 2014. Also, in comments provided after reviewing a draft of this report, VA officials stated that VA is participating in the DOD site visits to installations to monitor the TAP components. These are joint, agency-administered, on-site staff assistance visits led by DOD. Also, VA currently has contract staff at each military installation to monitor implementation of the VA benefits I and II briefings and the career technical training track. To verify career readiness, DOD and the services have: Established career readiness standards requiring that TAP participants complete products, such as resumes and job applications, which demonstrate that they are career ready. Created a capstone event to verify that standards have been met; the capstone event can be completed one-on-one, in large groups, or in small groups. For example, the Marines’ model is one-on-one. Assigned the task of verifying career readiness to commanders and identified steps to ensure that commanders or their designees are properly trained to assess an individual servicemember’s career readiness. To document whether or not a servicemember is career ready, DOD developed a new form called the Individual Transition Plan (ITP) Checklist, in which each of the career readiness standards is listed. We have reprinted this form in appendix II. Developed procedures for providing a warm handover, or referral, for servicemembers not meeting the career readiness standards. Generally, the warm handover is to be a confirmed person-to-person contact. According to the December 2013 DOD capstone report, during the recent pilot of the capstone event, several implementation challenges emerged. For example, confusion exists among servicemembers, TAP staff, commanders, and the agencies as to their roles and responsibilities in the capstone event. According to the DOD capstone report, to address this challenge, the agencies are taking actions, such as the services implementing a training program for their TAP staff and planning efforts to educate commanders and servicemembers on the requirements, purpose, and importance of the capstone event. As noted previously, according to DOD guidance, if a servicemember has not met the career readiness standards at the time of the capstone event, they are to be referred to the appropriate curriculum or services before they transition. If they do not meet the standards before separating from the service, the services plan to provide them with a “warm handover.”
The delivery of the warm handover may vary at domestic and overseas locations because DOL and VA have limited capacity at overseas locations. According to the DOD capstone report, DOL staff will likely not be present at capstone events overseas and will conduct the warm handover in other ways, and VA staff will be located only at larger overseas installations. For example, servicemembers overseas will be able to contact DOL’s call center within established hours, which, due to time differences, may limit opportunities to make a person-to-person contact. Effective training programs encourage participation and hold both individuals and their leaders responsible. While DOD has taken steps that are consistent with most attributes of effective training programs, two services currently lack an oversight mechanism at the commander level to help ensure participation. DOD’s efforts underway include: (1) prioritizing training for servicemembers based on agreed-upon goals and priorities, (2) encouraging servicemembers to buy in to training goals, and (3) communicating the importance of training. For example, DOD and the services communicate the importance of TAP training by providing information on their websites, including how TAP aids in a successful transition, and through other communications, such as brochures with similar information. In addition, DOD has assigned unit commanders the responsibility of ensuring that eligible servicemembers have full access to and successfully complete required TAP components. However, only the Army and the Air Force possess the capability to gauge the rate at which servicemembers under an individual unit commander participate in TAP. Specifically, the Army and Air Force provide commanders and their leaders with information on their units’ participation levels. In contrast, the Navy and the Marines do not have such systems. Navy officials said they have obtained funding, plan to develop such a system by late June 2014, and plan to start using the data for oversight no later than fiscal year 2015. Marine officials said that because they have a long-standing culture of requiring servicemembers to attend TAP training, such an oversight mechanism was not necessary. In our 2002 and 2005 reviews of TAP, we found that servicemembers sometimes faced difficulties being released from military duties to attend TAP because of the priority accorded their military mission or the lack of supervisory support for TAP. In its report on the pilot of the capstone event, DOD reported that ensuring servicemember participation in capstone events was a challenge for most of the services. According to DOD, lack of servicemembers’ awareness of this requirement and lack of commanders’ support may have hampered participation in capstone events. Without routine information on servicemembers’ participation by commander, it may be difficult to hold accountable those directly responsible for ensuring participation. Evaluations of training programs range from basic end-of-course evaluations to higher-level impact evaluations. (Outputs are the direct products and services delivered by a program, and outcomes are the results of those products and services. See GAO, Performance Measurement and Evaluation: Definitions and Relationships, GAO-11-646SP (Washington, D.C.: May 2011).) The agencies plan to evaluate TAP at lower levels. Their evaluation plan includes (1) gauging participant reaction to all TAP components through end-of-course evaluations and (2) determining whether servicemembers met the career readiness standards prior to separation.
Higher level evaluations are also important to help gauge the effectiveness of TAP, and the agencies have not demonstrated a strategic approach to planning such evaluations. Since the program is administered by multiple agencies, planning an evaluation is more challenging than if it were administered by a single agency. In December 2011, the agencies stated, in an internal report, that they would develop a methodology to assess the effectiveness of TAP by engaging new veterans approximately six months after they have separated from the military. Despite the modular nature of TAP, this report did not specify whether this type of higher level evaluation would assess all of TAP or certain components. DOL and VA are considering higher level evaluations for the employment workshop and VA benefits briefings. For example, DOL is considering different methodologies it could use to conduct an impact evaluation of the employment workshop. However, the agencies could not provide a rationale for not using higher level evaluations to assess either TAP overall or some of the other TAP components and career-specific tracks. Best practices call for the development of a written evaluation strategy, which the agencies prepared for lower evaluation levels but not for higher levels. Without a written evaluation strategy that identifies the TAP components to be evaluated and includes the appropriate level, timing, and methods of evaluation, the agencies may miss important opportunities to obtain information needed to make fully informed decisions on whether and how to modify TAP and may either overinvest or underinvest in evaluations. Based on GAO's prior work and interviews with stakeholders, we identified a key remaining challenge: DOD lacks a process to assess the TAP experience of National Guard and Reserve members. DOD's policy generally requires all eligible servicemembers, including members of the National Guard and Reserve, to be exposed to the entire TAP experience even if their circumstances differ. Unlike active duty servicemembers, many members of the National Guard and Reserve hold civilian occupations, and federal law protects their employment rights when they return from active duty. If they can document their civilian employment or acceptance to an institution of higher education or accredited technical training, members of the National Guard and Reserve and other eligible servicemembers can be exempted from attending the DOL employment workshop. Nevertheless, they are generally required to complete pre-separation counseling and the VA benefits I and II briefings and, under DOD policy, to participate in the capstone event. Further, the policy states that they must attend the DOD core curriculum and applicable career-specific tracks if they cannot meet the career readiness standards associated with these two components. However, according to many stakeholders that we talked with, including officials from the services and organizations that support members of the National Guard and Reserve and other servicemembers, DOD has not resolved long-standing concerns that eligible members of the National Guard and Reserve attend TAP offerings in locations and on a timetable that diminish their experience. Specifically, given that eligible National Guard and Reserve members generally demobilize at locations where they neither work nor live, they may be distracted by their desire to return home, which can affect how much the training benefits them.
Separating servicemembers need to be aware of available benefits and services, such as the Post-9/11 GI Bill, and know how to access them in order to take advantage of them. In a January 2013 report, the Defense Business Board Task Group recommended that eligible National Guard and Reserve members be afforded the opportunity to demobilize and transition in their home unit's geographical area. According to stakeholders, eligible National Guard and Reserve members also typically have less time to complete the program because they often demobilize more quickly than active duty servicemembers separate. For example, according to DOD's 2011 Handbook, the Reserve demobilization timeline makes scheduling a pre-separation counseling appointment not later than 90 days prior to leaving active duty impractical for Reserve members. According to officials from DOD and the services, they took steps to help meet the needs of National Guard and Reserve members eligible for TAP. For example, the agencies tailored the content of TAP components to better suit the needs of National Guard and Reserve members after analyzing the results of feedback from pilots of TAP. However, only one of the steps taken directly addresses the concerns of stakeholders related to the location and timing of TAP. Specifically, eligible members of the Army National Guard are allowed to participate in the DOL employment workshop and the capstone event in their home unit's geographical area, according to DOD and Army officials. These eligible Army National Guard members will remain on active duty while participating in these two components. Nonetheless, DOD may not be well positioned to determine whether its actions successfully address the long-standing challenges in designing transition services for the National Guard and Reserve. This is because, while DOD collects participant feedback through the online assessment, this tool does not ask whether the timing and location met the needs of servicemembers, including the National Guard and Reserve. Moreover, DOD's planned survey of active duty servicemembers contains questions about the timing of TAP, but DOD has not drafted similar questions for its survey of the National Guard and Reserve. According to OMB, being able to track and measure specific program data can help agencies diagnose problems, identify drivers of future performance, evaluate risk, support collaboration, and inform follow-up actions. Without a systematic process targeted to identify any remaining long-standing concerns, DOD will not be able to make fully informed decisions about the extent to which eligible members of the National Guard and Reserve reap the full benefits of TAP. Helping servicemembers successfully transition to civilian life after their service ends is an important government goal. The agencies undertook an ambitious effort to redesign the 1990s-era program to provide servicemembers with training and skills to successfully transition to civilian life. The increase in the number of servicemembers attending the revised TAP components, along with the significant coordination effort required among multiple agencies and military services, poses significant implementation challenges. Efforts to implement the redesigned program continue to evolve. The importance of serving the ongoing and projected wave of servicemembers departing the military increases the urgency to fully implement TAP components as well as to finalize related policies and procedures.
To the extent that the program remains behind schedule in implementing TAP, some transitioning servicemembers may be denied the full benefit of the revamped program. Because TAP is not fully implemented, the full impact of the revamped TAP, particularly the warm handover—a key effort to serve those deemed at greater risk of not being career ready—remains to be seen. The revamped TAP exhibits elements important for the effective implementation and evaluation of the program. Yet we found room for improvement in several key areas. In particular, while federal law generally requires DOD to mandate participation in the employment workshop and DOD policy requires that all eligible transitioning servicemembers participate in TAP overall, the Navy and the Marines lack the ability to provide unit commanders and their leaders with information on participation levels of servicemembers under their command. Without the capability to gauge and report participation in TAP components by unit commander—those directly responsible for ensuring servicemember participation—leaders may find it difficult to hold commanders accountable. While the administration cites the revamped TAP as a key strategy of its crosscutting goal to improve the career readiness of veterans, it may not be positioned to determine the extent to which the program prepared veterans to pursue their post-separation goals. Without a written evaluation strategy that identifies the TAP components to be evaluated and includes the appropriate level, timing, and methods of evaluation, the agencies may miss important opportunities to obtain information needed to make fully informed decisions on whether and how to modify TAP and may either overinvest or underinvest in evaluations. The circumstances of National Guard and Reserve members differ from those of other active duty servicemembers. Many stakeholders have raised concerns that these circumstances diminish eligible National Guard and Reserve members' TAP experience. Absent efforts to systematically collect information about eligible National Guard and Reserve members' experiences under the revamped program, DOD may not be well positioned to determine whether there are problems with how TAP is provided to these groups. Based on our review, we recommend that the Secretary of Defense take the following actions: 1. To better ensure servicemember participation in and completion of TAP, direct the Under Secretary for Personnel and Readiness to require that all services provide unit commanders and their leaders information on TAP participation levels of servicemembers under their command, similar to that provided by the Army and Air Force. Such information could be used to help hold leaders accountable for ensuring TAP participation and completion. 2. To provide information on the extent to which the revamped TAP is effective, direct the Under Secretary for Personnel and Readiness to work with the partner agencies to develop a written strategy for determining which components and tracks to evaluate and the most appropriate evaluation methods. This strategy should include a plan to use the results of evaluations to modify or redesign the program, as appropriate.
3. To ensure that decisions about the participation of eligible members of the National Guard and Reserve in TAP are fully informed, direct the Under Secretary for Personnel and Readiness to systematically collect information on any challenges facing demobilizing members of the National Guard and Reserve regarding the timing and location of TAP attendance. For example, agencies might add questions to their online assessment tool specific to eligible members of the National Guard and Reserve who participate in TAP. We provided a draft of this report to DOD, VA, DOL, and SBA for comment. In its comments, DOD agreed with one recommendation and disagreed with the other two. VA generally agreed with our findings. Written comments from DOD and VA are reproduced in appendices III and IV. DOL and SBA did not provide comments, but all four agencies provided technical comments, which we incorporated into the final report as appropriate. DOD agreed with our recommendation to work with partner agencies to develop a written strategy for determining which components and tracks to evaluate and the most appropriate evaluation methods. DOD stated that it will continue to support interagency collaboration, which, as of January 31, 2014, has been formalized in a memorandum of understanding among the agencies administering TAP. DOD disagreed with our recommendation to require that all of the services provide unit commanders and their leaders information on TAP participation levels of servicemembers under their command. DOD stated that we justified the need for a mechanism to ensure a servicemember's completion of TAP by commander based on concerns that, without such a mechanism, commanders will not support the release of servicemembers to attend TAP. DOD also said that these concerns are based on observations that preceded full implementation of the capstone event. In October 2013, DOD launched the capstone event as a mandatory process by which commanders verify TAP participation and affirm whether servicemembers have met career readiness standards. In addition, DOD said that capstone event implementation was accompanied by a communications campaign to inform both commanders and the services' TAP providers of this key requirement. Finally, DOD said that capstone event completion is monitored departmentwide to ensure compliance. We agree with DOD that departmentwide monitoring of capstone event completion is an important element in helping ensure compliance. However, such monitoring does not provide routine information to commanders and their leaders, and not all TAP locations will be monitored routinely. As noted in our report, based on the pilot of the capstone event last fall, DOD reported that ensuring servicemember participation in capstone events was a challenge for most of the services (with the exception of the Air Force), adding that this challenge is possibly due in part to lack of commanders' support. Commander support has been a long-standing challenge for the program. Therefore, we continue to believe that our recommendation is needed because it would establish an accountability mechanism for TAP that mirrors the level at which responsibility has been assigned. DOD has assigned unit commanders the responsibility of ensuring that eligible servicemembers have full access to and successfully complete required TAP components.
Providing information to unit commanders and their leaders on TAP participation levels of servicemembers under their command—similar to that provided by the Army and Air Force—could thus promote accountability and oversight. Servicemember participation in TAP is generally required by law and DOD policy, and it also relates to a Cross-Agency Priority Goal, which reinforces the need for an appropriate accountability mechanism. DOD also stated that the Navy is funding IT system upgrades to provide the ability to analyze and report program compliance at the command level. DOD added that the Marine Corps has mandated participation since the program's inception, and Marine Corps commanders leverage the capabilities of the personnel system to identify eligible Marines and schedule their TAP attendance. DOD stated that despite the limitations of a data tracking system that underreports compliance figures, the Marine Corps had the highest compliance rate of all services as of December 2013. DOD said it recommends that departmentwide compliance data be allowed to normalize to show true compliance percentages before the services are required to fund and implement expensive systems for the sole purpose of providing an additional mechanism for commander accountability. However, we continue to believe that our recommendation is needed. Three of the services either have or are working to develop the kind of accountability mechanism that we are recommending, but the Marine Corps does not plan to develop such a system. While DOD reports that the Marine Corps has the highest TAP compliance rate of all services, DOD did not provide any data on these rates. Finally, in response to DOD's recommendation to wait until compliance data normalize, we believe that this would not be appropriate. We do not see any advantage to delaying the implementation of appropriate accountability mechanisms to provide assurance that the ongoing and projected wave of servicemembers departing the military receive the expected level of services from TAP. DOD disagreed with our recommendation to systematically collect information on challenges facing demobilizing members of the National Guard and Reserve regarding the timing and location of TAP attendance. Specifically, DOD stated that it has long understood that the National Guard and Reserve operate under schedules and logistical constraints that differ from those of active duty servicemembers. DOD stated that several processes (which we note in our report) are already in place to identify and rectify any misalignments between TAP services and the needs of eligible National Guard and Reserve members, including regular coordinating councils that include representation from the National Guard and Reserve and TAP managers, as well as a participant assessment that provides ample opportunity for eligible National Guard and Reserve members to voice concerns. In addition, DOD highlighted the implementation of the military life cycle transition model, in which DOD plans to provide the services, including the National Guard and Reserve, with the latitude to optimize placement of certain elements of TAP throughout a servicemember's military service. According to DOD, the military life cycle transition model may help address some of the challenges related to the timing and location of program delivery. As we note in the report, full implementation of the military life cycle transition model is planned by October 2014.
Nonetheless, at this time DOD is not fully positioned to know whether the revamped program addresses the long-standing challenges facing eligible members of the National Guard and Reserve in taking TAP. As we describe in the report, all eligible servicemembers, including National Guard and Reserve members, have an opportunity to provide feedback about the instruction, content, and facilities for TAP. However, the participant assessment does not ask questions specific to the National Guard and Reserve experience even though these members face different circumstances than active duty servicemembers. Given these differences, we continue to believe that unless DOD systematically collects information on any challenges facing eligible members of the National Guard and Reserve, DOD is at risk of not fully knowing whether the revamped TAP addresses long-standing challenges. In addition, the move to a military life cycle transition model could enable active duty servicemembers and members of the National Guard and Reserve to benefit from transition-related assistance throughout their military service. However, because the military life cycle transition model is not yet implemented and specific policies and plans are not completed, many unknowns remain about how it will work in general and how National Guard and Reserve members will fare specifically. During our review, we asked DOD how this program would work in practice, including for National Guard and Reserve members, but DOD did not provide us with details. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretary of Veterans Affairs; the Secretary of Labor; the Acting Administrator of the Small Business Administration; and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix summarizes our work to assess efforts to implement and evaluate the revamped TAP. For each of the five elements that we identified as important for the effective implementation and evaluation of TAP, we identified relevant attributes that we used to determine the extent to which the elements were addressed in the revamped TAP (see table 2). To determine the extent to which agencies have addressed each element and attribute, we made our assessments in two steps. First, an analyst reviewed all of the evidence. Based on that review, the analyst documented evidence that conformed to the elements and attributes and made and documented a judgment about the status of these efforts. Then a second analyst separately reviewed the assessment and documented their agreement or disagreement. The two analysts discussed any differences in their assessments and made changes based on their verbal resolution of those differences. We assessed the status of the agencies' efforts to address each relevant attribute as "completed", "underway", or "not completed". Based on this analysis, we then determined the status of efforts to address each respective element overall. For example, for the status of efforts to address an element overall to be considered "completed", efforts to address each relevant attribute had to be completed.
For the status of efforts to be considered "underway", it had to be documented that efforts to address each relevant attribute were progressing toward being finalized, or were a mix of underway efforts and finalized efforts. For the status of efforts to be considered "not completed", efforts to address at least one relevant attribute had to be neither finalized nor underway. We considered the agency to be "adequately addressing" the element if the overall status of the efforts to address the relevant attributes was determined to be completed or underway. However, if some attributes related to an element had efforts completed or underway to address them, but one or more other attributes did not, then we considered the overall efforts to address the element to be "mixed". We used the following general decision rules for characterizing the status of efforts to address each attribute as "completed", "underway", and "not completed" (see the illustrative sketch below): "Completed" means interagency partners (or the relevant agency) have a system, policy, or procedure in place to address a relevant attribute. "Underway" means interagency partners (or the relevant agency) have an authoritative internal or public document that demonstrates progress toward implementing actions to address the attribute (e.g., agencies have draft policies, operating procedures, or guidance, or they have interim systems in place and/or are developing systems). "Not completed" means that interagency partners (or the relevant agency) have shown little to no action toward implementing actions to address the attribute, meaning they lack an authoritative internal or public document (e.g., no draft policies, operating procedures, or guidance, or interim systems). As we conducted this analysis, we kept in mind how far along agencies (mainly DOD and the military services) were in implementing each component of TAP (informed by objective 1). Because agencies were in the process of implementing the revamped TAP and the program was not fully operational at the time of our review, we were not able to determine whether the policies and procedures are in place at all sites or whether they are working as intended. Similarly, we were not able to comment on whether the changes to TAP are yielding desired benefits or improvements in outcomes. Andrew Sherrill, (202) 512-7215 or [email protected]. In addition to the contact named above, individuals making key contributions to this report were Patrick Dibattista, Assistant Director; Julianne Hartman Cutts; and James Whitcomb. In addition, key support was provided by James Bennett, Rachael Chamberlin, David Chrisinger, Brett Fallavollita, Alex Galuten, Kathy Leslie, David Moser, and Michael Silver.
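To make the roll-up decision rules above concrete, the following is a minimal illustrative sketch in Python. It is not drawn from the report itself: the function names and example statuses are hypothetical, and the characterization of an element none of whose attributes has completed or underway efforts, a case the decision rules do not define, is labeled here only for completeness.

```python
# Illustrative sketch only: a hypothetical encoding of the attribute-to-element
# roll-up decision rules described in this appendix. Status labels mirror the
# report's terms; the data below are invented for demonstration.
from typing import List

COMPLETED = "completed"
UNDERWAY = "underway"
NOT_COMPLETED = "not completed"


def element_status(attribute_statuses: List[str]) -> str:
    """Roll attribute-level statuses up to an overall element-level status."""
    if all(s == COMPLETED for s in attribute_statuses):
        return COMPLETED
    if all(s in (COMPLETED, UNDERWAY) for s in attribute_statuses):
        # Progressing efforts, or a mix of finalized and progressing efforts.
        return UNDERWAY
    # At least one attribute has no finalized or underway effort.
    return NOT_COMPLETED


def element_characterization(attribute_statuses: List[str]) -> str:
    """Characterize an element as "adequately addressing" or "mixed"."""
    if element_status(attribute_statuses) in (COMPLETED, UNDERWAY):
        return "adequately addressing"
    if any(s in (COMPLETED, UNDERWAY) for s in attribute_statuses):
        return "mixed"
    # The report's rules do not define this case; labeled for completeness.
    return "not addressing"


if __name__ == "__main__":
    # Hypothetical elements with invented attribute statuses.
    examples = {
        "Track attendance": [COMPLETED, COMPLETED],
        "Measure performance and evaluate results": [COMPLETED, UNDERWAY, NOT_COMPLETED],
    }
    for element, statuses in examples.items():
        print(f"{element}: {element_status(statuses)} / {element_characterization(statuses)}")
```

Run as written, the first hypothetical element prints "completed / adequately addressing" and the second prints "not completed / mixed", paralleling the report's "mixed" characterization of elements whose attributes show a mix of addressed and unaddressed efforts.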
Over the next few years, over a million military servicemembers are expected to transition to civilian life, and some may face challenges such as finding employment. To help them, TAP provides departing servicemembers employment assistance and information on VA benefits, among other things. Begun in 2011, efforts to revamp TAP are underway based on the VOW to Hire Heroes Act of 2011 and the administration's recommendations. The act also mandated GAO to review TAP. This report addresses: (1) the status of TAP implementation; (2) the extent to which elements of effective implementation and evaluation of TAP have been addressed; and (3) any challenges that may remain. To do this, GAO identified five elements of effective implementation and evaluation based on relevant federal laws and previously established GAO criteria for training programs; reviewed related GAO work; assessed reports, plans, and policies provided by agencies that administer TAP; interviewed officials from entities that support servicemembers and veterans; and conducted four nongeneralizable discussion groups with servicemembers who had taken TAP at three military installations. The Departments of Defense (DOD), Labor (DOL), and Veterans Affairs (VA) have implemented most of the key components of the Transition Assistance Program (TAP), a gateway to information and services available to servicemembers transitioning to civilian life. However, the agencies are still in the process of implementing other key components of TAP. While originally planned for October 2013, agencies now plan to implement virtually all components by the end of March 2014, with full implementation expected by June 2014. Agencies' efforts are underway to adequately address three of five elements that GAO identified as important for effective implementation and evaluation of TAP: 1. Track attendance: DOD has systems to collect and report on attendance, which help measure the extent to which TAP achieves its attendance goals. 2. Ensure training quality: The agencies collect and plan to use participant feedback on instruction, content, and facilities to improve training. Each agency also plans to monitor its respective TAP components through site visits. 3. Assess career readiness: The agencies developed standards to assess servicemembers' career readiness. During a capstone assessment, commanders are expected to verify and document whether standards were met. Agencies' efforts to address the remaining two elements are mixed: 4. Ensure participation and completion: DOD has assigned commanders the responsibility for overseeing participation and has required the services to schedule training and communicate its importance to servicemembers. While the Army and Air Force gauge participation at the command level, the Navy and Marines lack a similar oversight mechanism. 5. Measure performance and evaluate results: The agencies have established certain measures to assess program performance, but their TAP evaluation approach is incomplete. For example, the agencies have established measures to track program outputs, such as the percentage of servicemembers who have participated in TAP. However, the agencies' efforts to evaluate TAP results have focused on basic end-of-course evaluations and gauging servicemembers' readiness prior to separation instead of higher level impact evaluations, such as assessing the effectiveness of TAP on servicemembers 6 months after they have separated from the military.
According to agency officials, such evaluations are being considered for certain components of TAP, but they could not provide GAO with a justification for including or excluding specific components of TAP in their evaluation planning efforts. Based on GAO's prior work and according to officials from the agencies and organizations GAO spoke with, a key remaining challenge for TAP may be the unfavorable timing and location of program delivery for National Guard and Reserve members. Unlike active duty servicemembers, National Guard and Reserve members receive TAP services closer to their transition and in locations that are generally neither where they work nor live. As a result, they may be distracted and have less time to benefit from TAP services. DOD is not well positioned to verify these concerns because it is not collecting data about these members' experiences with the timing and location of TAP service delivery. GAO recommends that DOD improve oversight and implementation of TAP, including actions to gauge participation for all of the services and collect data about National Guard and Reserve members' experiences. DOD disagreed with two of GAO's three recommendations. GAO continues to believe that the recommendations are needed as discussed in the report.
The Workforce Investment Act (WIA) created a new, comprehensive workforce investment system designed to change the way employment and training services are delivered. When WIA was enacted in 1998, it replaced the Job Training Partnership Act (JTPA) with three new programs—Adult, Dislocated Worker, and Youth—that allow for a broader range of services, including job search assistance, assessment, and training for eligible individuals. In addition to establishing three new programs, WIA requires that a number of employment-related services be provided through a one-stop system, designed to make accessing employment and training services easier for job seeker customers. WIA also requires that the one-stop system engage the employer customer by helping employers identify and recruit skilled workers. While WIA allows states and localities flexibility in implementing these requirements, the law emphasizes that the one-stop system should be a customer-focused and comprehensive system that increases the employment, retention, and earnings of participants. The major hallmark of WIA is the consolidation of services through the one-stop center system. About 17 categories of programs, totaling over $15 billion from four separate federal agencies, are required to provide services through the system. (See table 1.) WIA allows flexibility in the way these mandatory partners provide services through the one-stop system, allowing co-location, electronic linkages, or referrals to off-site partner programs. While WIA requires these mandatory partners to participate, it did not provide additional funds to operate one-stop systems and support one-stop partnerships. As a result, mandatory partners are expected to share the costs of developing and operating one-stop centers. However, several of the programs have limitations on the way their funds may be used, giving rise to one-stop funding challenges. Beyond the mandatory partners, one-stop centers have the flexibility to include other partners in the one-stop system. Labor suggests that these additional, or optional, partners may help one-stop systems better meet specific state and local workforce development needs. These optional partners may include Temporary Assistance for Needy Families (TANF) or local private organizations, for example. States have the option of mandating particular optional partners to participate in their one-stop systems. For example, in 2001, 28 states had formal agreements between TANF and WIA to involve TANF in the one-stop system. In addition, localities may partner with other programs to meet the specific needs of the community. One-stop centers serve two customers—job seekers and employers (see fig. 2). In serving job seekers, one-stop centers are encouraged to develop strategies to achieve a streamlined set of services. In past reports, we identified key areas critical to successfully integrating services under WIA, such as ensuring that job seekers have ready access to employment and program information, reducing job seekers' confusion by providing them with a streamlined path from one program to another, providing job seeker services that are tailored and seamless, and helping job seekers identify and obtain needed program services without the burden of completing multiple intake and assessment procedures. One-stop centers provide job seekers with job search and support services, while the job seekers act as an available labor pool for the one-stops' employer customers.
In serving employers, one-stops have the flexibility under WIA to provide a variety of tailored services, including hiring, assessment, and training services that meet the specific needs of each employer. The degree to which the one-stop system engages and provides services to employers is left to the discretion of state and local officials. However, Labor suggests that employer involvement is critical for one-stop officials to better understand what skills are needed, what jobs are available, and what career fields are expanding. In order to demonstrate the effectiveness of the WIA programs, WIA requires that states and localities track performance, and Labor holds states accountable for their performance. The measures gauge outcomes in the areas of job placement, employment retention, and earnings change, as well as skill attainment and customer satisfaction. In addition to the WIA programs, most of the 17 employment and training programs have their own performance measures. There are no overall one-stop performance measures. The one-stop centers we visited embraced the customer-focused provisions of WIA by streamlining one-stop services for job seekers. All 14 one-stop centers we visited used at least one of three different strategies to build a streamlined one-stop system—ensuring job seekers could readily access needed services, educating program staff about all of the one-stop services available to job seekers, and consolidating case management and intake procedures (see fig. 3). To ensure that job seekers could readily access needed services, one-stops we visited allocated staff to help job seekers navigate the one-stop system, expanded services for one-stop customers, and provided support to customers with transportation barriers. Ensuring access is designed to minimize confusion for job seekers as they navigate one-stop services. To educate program staff about one-stop services, centers used cross-training sessions in which partners informed one another about the range of services available at the one-stop. Finally, centers sought to reduce the duplication of effort across programs and the burden on job seekers navigating programs by consolidating case management and intake procedures across multiple programs through joint service plans for customers and shared computer networks. Nearly all of the one-stop centers we visited implemented specific strategies to ensure that job seekers had access to needed services. We previously reported that the range of services provided by multiple programs in the one-stop center caused confusion for job seekers. To minimize confusion, nearly all of the sites we visited looked for ways to ensure job seekers would have ready access to program information and a clear path from one program to another within the one-stop system. For example, when one-stop center staff in Killeen, Texas, and Clarksville, Tennessee, referred job seekers to another partner, the staff personally introduced the job seeker to the referred program staff to prevent job seekers from getting lost between programs. Similarly, officials in Erie, Pennsylvania, positioned a staff person at the entrance to the one-stop to help job seekers entering the center find needed services and to assist exiting job seekers if they did not receive the services they sought. (See app. II for more examples from each of the sites we visited.)
In addition to improving access to one-stop center services on-site, some of the one-stops we visited found ways to serve job seekers who may have been unable to come into the one-stop center for services. For example, in Boston, Massachusetts, the one-stop placed staff in off-site locations, including family courts, correctional facilities, and welfare offices to give job seekers ready access to employment and program information. Specifically, Boston one-stop staff worked with an offender re-entry program that conducted job fairs inside the county prison to facilitate incarcerated offenders' transition back into the workplace. One-stops also improved job seeker access to services by expanding partnerships to include optional service providers—those beyond the 17 program partners mandated by WIA. These optional partners ranged from federally funded programs, such as TANF, to community-based organizations providing services tailored to meet the needs of local job seekers. The one-stop in Dayton, Ohio, was particularly proactive in forming optional partnerships to meet job seekers' service needs. At the time of our visit, the Dayton one-stop had over 30 optional partners on-site, including the Montgomery County Combined Health District, which operated a health clinic on-site; Clothes that Work!, which provides free business attire to low-income women; and an alternative high school. The most common optional partner at the one-stops we visited was the TANF program—which was an on-site partner at 13 of the 14 sites. One-stop managers in Clarksville told us that co-location with the Tennessee Department of Human Services, which administers TANF, benefited all job seekers because the department helped to fund various services, including computer classes, soft skills classes, and parenting classes that could be provided to those not eligible for TANF. Additionally, Killeen, Texas, one-stop staff told us that co-location with TANF helped welfare recipients address barriers to employment by facilitating easier access to other services, such as housing assistance and employment and training programs. Many one-stop centers, such as in Killeen, Texas, and Vineland, New Jersey, ensured access to one-stop services by addressing customers' transportation challenges. Staff in Killeen partnered with the libraries in rural areas to provide computer access to one-stop resume writing and job search services and an on-line TANF orientation. In Kansas City, Missouri, the one-stop management decided to locate the one-stop center next to the bus company, the Area Transit Authority (ATA). This strategic decision meant that all bus routes passed by the one-stop center, thus ensuring that customers with transportation challenges could access one-stop center services. Additionally, the ATA partnered with the one-stop to create an Urban Employment Network program to assist job seekers with transportation to and from work. The ATA provided bus service 7 days a week from 5:00 in the morning until midnight and set up a van service to operate during off-peak hours. To help ensure that job seekers receive services tailored to meet their needs, nine of the one-stops we visited focused on educating all one-stop staff about the range of services available through the one-stop. In earlier work, we identified challenges for job seekers in receiving the right set of services to meet their needs.
One-stop officials at the centers we visited reported that staff who were cross-trained could better assess the particular needs of job seekers, including identifying barriers to getting a job, and were able to provide job seekers with more appropriate referrals. Cross-training activities ranged from conducting monthly educational workshops to offering shadow programs through which staff could become familiar with other programs' rules and operations. Officials in Salt Lake City, Utah, reported that cross-training improved staff understanding of programs outside their area of expertise and enhanced their ability to make referrals. The Pikeville, Kentucky, one-stop supported cross-training workshops in which one-stop staff from different partner programs educated each other about the range of services they could provide. After learning about the other programs, Pikeville staff collaboratively designed a service delivery flow chart that effectively routed job seekers to the appropriate service providers, providing a clear entry point and a clear path from one program to another. In addition, the Vocational Rehabilitation staff at the Pikeville one-stop told us that cross-training enabled other program staff to more accurately identify hidden disabilities and to better refer disabled customers to the appropriate services. In the one-stop centers we visited, cross-training occurred among the majority of on-site co-located partners as well as between some of the on-site and off-site one-stop programs. One-stop managers in Dayton, Ohio, told us that cross-training staff resulted in increased referrals to service providers that had previously been unknown, such as to smaller programs within the one-stop or to local neighborhood programs that could better meet the specific needs of particular job seekers. Specifically, Dayton managers reported that cross-training one-stop staff dramatically improved referrals to the Child Support program, thereby enhancing efforts to establish paternity, a requirement for determining eligibility for TANF. To provide streamlined service delivery, 10 of the 14 one-stops we visited consolidated their intake processes or case management systems, reducing the need for job seekers to go through multiple intake processes. This consolidation took many forms, including having case workers from different programs work as a team to develop service plans for customers and having a shared computer network across programs. As a result, case workers reduced the duplication of effort across programs and job seekers were not burdened with completing multiple intake and assessment procedures. For example, the Youth Opportunity Program and WIA Youth program staff at the one-stop in Kansas City, Missouri, shared intake and enrollment forms to streamline the delivery of services to youth. In Blaine, Minnesota, job seekers were originally served by multiple service providers, meeting independently with each provider for each program service received. Caseworkers from the various one-stop programs in Blaine met regularly to collaborate in developing and implementing joint service plans for customers who were co-enrolled in multiple programs. As a result, the number of caseworkers involved had been reduced significantly, and job seekers received a more efficient and tailored package of services.
To efficiently coordinate multiple services for one-stop customers in Erie, Pennsylvania, the staff used a networked computer system with a shared case management program, so that all one-stop program staff could share access to a customer's service plan and case file. All of the one-stops we visited implemented at least one of three different approaches to engage and provide services to employers—dedicating specialized staff to establish relationships with employers or industries, working with employers through intermediaries, and providing tailored services to meet employers' specific workforce needs (see fig. 4). All of the one-stops dedicated staff to establish relationships with employers, minimizing the burden on employers who previously may have been contacted by multiple one-stop programs. A few of these one-stops also had employer-focused staff work with specific industries in order to respond better to local labor shortages. Many of the one-stops also worked with employers through intermediaries, such as the Chambers of Commerce or economic development entities, in order to market one-stop services and expand their base of employer customers. Finally, most one-stops went beyond providing basic services to employers by tailoring services to meet individual employers' needs, such as specialized recruiting and applicant pre-screening, customized training opportunities, and assessments using employer specifications. Tailored services were used to maintain employer involvement and increase employment opportunities for job seekers. To help employers access the workforce development system, all of the one-stops we visited dedicated specialized staff to establish relationships with employers. One-stop officials told us that engaging employers was critical to successfully connecting job seekers with available jobs. Specialized staff reached out to individual employers and served as employers' primary point of contact for accessing one-stop services. For example, the one-stop in Killeen, Texas, dedicated specialized staff to serve not only as the central point of contact for receiving calls and requests from employers but also as the primary tool for identifying job openings available through employers in the community. A one-stop manager in Killeen told us that in the past, staff from each partner agency would reach out to employers to find jobs for their own job seekers. Now they have eliminated the duplication of effort and burden on employers by designating specialized staff to conduct employer outreach for all one-stop programs. In addition to working with individual employers, staff at some of the one-stops we visited also worked with industry clusters, or groups of related employers, to more efficiently meet local labor demands—particularly for industries with labor shortages. One-stop managers at these sites told us that having staff work with industry clusters helped them better respond to labor shortages because it enabled staff to develop a strong understanding of the employment and training needs of those industries. These one-stops were better prepared to match job seekers with appropriate training opportunities, enabling those job seekers to become part of a qualified labor pool for these industries. For example, the one-stop in Santa Rosa, California, assigned staff to work with employers in local high-demand industries, including health care, high tech, and tourism.
These staff established relationships with employers from these industries, assessed their specific workforce needs, and shared this information with one-stop case workers. Specifically, when Santa Rosa's tourism industry was in need of more skilled workers, the one-stop worked with the local community college to ensure there were certification courses in hotel management and the culinary arts, for example. The one-stop in Aurora, Colorado, also dedicated staff to work with specific industries. For example, in response to a shortage of 1,600 nurses in the Denver metro area, staff from the Aurora one-stop assisted in the creation of a health care recruitment center designed to provide job placement assistance and access to health care training. In addition to dedicating specialized staff, all of the one-stops we visited worked with intermediaries to engage and serve employers. Intermediaries, such as local Chambers of Commerce or economic development entities, served as liaisons between employers and the one-stop system, helping one-stops to engage employers while connecting employers with one-stop services. For example, the one-stop staff in Clarksville, Tennessee, worked with Chamber of Commerce members to help banks in the community that were having difficulties finding entry-level employees with the necessary math skills. To help connect job seekers with available job openings at local banks, the one-stop developed a training opportunity for job seekers that was funded by Chamber members and was targeted to the specific skills needed for employment in the banking community. Similarly, staff at the one-stop in Kenosha, Wisconsin, were in frequent contact with the Kenosha Area Business Alliance, a community development corporation, to identify and address hiring and training needs of the local manufacturing industry. This partnership not only helped employers access human resources assistance—such as recruitment, networking, and marketing—but it also assisted employers with assessment and training of new and existing employees. Specialized staff at most of the one-stops we visited also worked with local economic development entities to serve employers or recruit new businesses to the area. For example, the staff at the Erie, Pennsylvania, one-stop worked with a range of local economic development organizations to develop an outreach program that assessed the workforce needs of employers, linked employers with appropriate services, and developed incentive packages to attract new businesses to the community. In addition to dedicating specialized staff to engage employers and working with intermediaries, all of the one-stops we visited tailored their services to meet employers' specific workforce needs by offering an array of job placement and training assistance designed for each employer. These services included specialized recruiting, pre-screening, and customized training programs. For example, when one of the nation's largest cabinet manufacturers was considering opening a new facility in the eastern Kentucky area, the one-stop in Pikeville, Kentucky, offered a tailored set of services to attract the employer to the area. The services included assisting the company with pre-screening and interviewing applicants and establishing an on-the-job training package that used WIA funding to offset up to 50 percent of each new hire's wages during the 90-day training period.
According to a company representative, the incentive package offered by the one-stop was the primary reason the company chose to build a new facility in eastern Kentucky instead of another location. Once the company arrived, the Pikeville one-stop administered the application and assessment process for job applicants and held a 3-day job fair, resulting in the company hiring 105 people through the one-stop and planning to hire an additional 350 employees. To help industries address labor shortages and strengthen local businesses, several of the one-stops we visited actively developed and marketed training opportunities for current and potential new employees, helping to keep jobs in the community and promote local economic growth. For example, Pikeville, Kentucky, encountered a labor shortage in the local coal mining industry. Because of the high cost of training for inexperienced miners, many companies considered hiring experienced coal miners from foreign countries. To help companies save on training costs and hire workers from the local area—an area of historically high unemployment—the Pikeville one-stop created an on-the-job training program using WIA funds, which paid for half of new miners' salaries during their training period. The co-owner of a local mining company, who hired 15 percent of his workforce through the one-stop, told us that, without the assistance of the one-stop, he would not have been able to hire as many miners. Because he saved money on training costs, the co-owner said he was also able to promote his experienced workers to more advanced positions and provide better benefits, such as health insurance, for all his employees. Tailored services were used not only to attract new employers but also to retain employers in the one-stop system and to train new workers for employers struggling to find job-ready staff. For example, for over 9 years, the Clarksville, Tennessee, one-stop has provided tailored hiring services, including drug-testing and pre-screening of applicants, for a nearby manufacturing company. As a result, the company has hired over 75 people through the one-stop. One-stops also provided customized workshops and classes to help employers train new and current workers. When a local nursing home expressed concern about hiring non-English-speaking workers, the one-stop in Blaine, Minnesota, created a job-specific English as a Second Language course that was taught on-site at the nursing home by one-stop staff. Many of the one-stops we visited also provided employers with tailored business support services and educational resources. One-stop managers told us that these services helped the one-stops attract and retain employer involvement in the one-stop system and enhanced employers' ability to maintain a skilled workforce. For example, some one-stops we visited allowed employers to use office space in the one-stop for interviewing job applicants. A few of the one-stop centers had specific business centers on-site, such as the Business Resource Center in Killeen, Texas. The center served entrepreneurs and over 400 small businesses by providing information about starting a small business, such as tax information, economic development information, marketing resources, and business workshops. Similarly, the Sunnyvale, California, one-stop addressed the specific needs of customers seeking entrepreneurial opportunities by co-locating with a patent and trademark library that is electronically linked to the national trademark office.
Finally, several one-stops offered employers help with accessing business tax credits. For example, when the employer services staff at the one-stop in Vineland, New Jersey, realized the application process for tax credits was cumbersome for employers, they began automatically completing the required paperwork for employers so that the employers could more readily apply for the tax credit incentives. To build the solid infrastructure needed to support better services for job seekers and employers, many of the one-stops we visited developed and strengthened program partnerships and raised funds beyond those provided under WIA. Center operators fostered the development of strong program partnerships by encouraging communication and collaboration among partners through functional teams and joint projects. As shown in figure 5, this collaboration allowed one-stop partners to better integrate their respective programs and services. Many one-stops also worked toward improving one-stop operations and services by raising additional funds through fee-based services, grants, and contributions from partners and state or local government. The revenue raised through these activities helped one-stops improve operations and services despite the lack of WIA funding for one-stop operations and restrictions on the ways in which one-stop programs can spend their funds. In order to build a cohesive, well-functioning one-stop infrastructure, 9 of the 14 one-stop centers we visited gave partners the opportunity to collaborate through functional teams and joint projects. One-stop managers told us that collaboration through teams and joint projects allowed partners to better integrate their respective programs and services, as well as pursue common one-stop goals and share in one-stop decision making. For example, partners at the Erie, Pennsylvania, one-stop center were organized into four functional teams—a career resource center team, a job seeker services team, an employer services team, and an operations team—which together operated the one-stop center. As a result of the functional team meetings, partners reported that they worked together to solve problems and develop innovative strategies to improve services in their respective functional area. For instance, to improve intake and referral processes, the Erie job seeker services team created a color-coded intake form shared by multiple partners. Certain customers, such as veterans and dislocated workers, received intake forms that were a different color from those of other customers, so that staff could easily identify the different customer groups and direct each toward the services that best met their needs. Similarly, in Salt Lake City, Utah, partners created a committee to address issues of common concern, such as cross-program referrals, cross-training of partner staff, and employer involvement. Staff from the Vocational Rehabilitation Program in Salt Lake City told us that this committee helped to increase referrals to their program by producing flow charts of the service delivery systems of various partner programs to identify points at which referrals and staff collaboration should occur. In addition to fostering integration across programs, one-stop managers said that the joint decision making done through functional teams facilitated the development of a shared one-stop identity.
Pikeville, Kentucky, one-stop managers told us that shared decision-making was instrumental in developing a common one-stop identity and in ensuring partners' support for the one-stop system. The process of creating a shared one-stop identity in Pikeville was also supported by the adoption of a common logo, nametags, and business cards, and was reinforced by a comprehensive marketing campaign, which gave partners a common message to rally behind. Pikeville one-stop managers told us that, as a result of this shared one-stop identity, partner staff no longer focused exclusively on serving their individual program customers; rather, staff developed a "can-do" attitude of meeting the needs of all one-stop customers. In addition, managers told us that because of their shared one-stop identity, partners were more willing to contribute resources to one another and to the center as a whole. For instance, in order to streamline services for job seekers, the Adult Basic Education Program administered skills assessments to all one-stop customers, regardless of which program they were enrolled in. One-stop managers at several of the sites we visited told us that the co-location of partner programs in one building facilitated communication and collaboration. For this reason, one-stop managers at several of the centers we visited reported that they fostered co-location by offering attractive physical space and flexible rental agreements. For example, in Pikeville, Kentucky, the local community college donated free space to the one-stop on its conveniently located campus, making it easier to convince partners to relocate there. Partners were also eager to relocate to the Pikeville one-stop because they recognized the benefits of co-location for their customers. For instance, staff from the Vocational Rehabilitation Program said that co-location at the one-stop increased their customers' access to employers and employment services. Pikeville managers also told us that co-location at the community college made it easier for partners to share information and made them more visible to students likely to need employment services in the near future. In addition, because co-location sometimes presents a challenge to partners with limited resources, several centers offered flexible rental agreements to make it easier for partners to co-locate. For example, the Kansas City, Missouri, one-stop enabled the Adult Basic Education Program to co-locate by allowing it to contribute instructors and General Educational Development (GED) classes instead of paying rent. Partners in some locations, including Dayton, Ohio, and Kenosha, Wisconsin, donated space to enable other partners to be on-site. Several one-stops where all partners were not co-located found ways to create strong linkages with off-site partners. For example, in addition to regular meetings between on-site and off-site staff, the one-stop in Aurora, Colorado, had a staff person designated to act as a liaison and facilitate communication between on-site and off-site partners. When an on-site partner specializing in senior services raised concerns about the lack of employment opportunities for its customers, the liaison set up a meeting with Vocational Rehabilitation, an off-site partner, to enable both parties to begin exchanging referrals to jobs and services.
Managers at all but two of the one-stops we visited said that they were using the flexibility under WIA to creatively increase one-stop funds through fee-based services, grants, or contributions from partner programs and state or local governments. Managers said these additional funds allowed them to cover operational costs and expand services in spite of the lack of WIA funding to support one-stop infrastructure and restrictions on the use of program funds. For example, one-stop operators in Clarksville, Tennessee, reported that they raised $750,000 in fiscal year 2002 through a combination of business consulting, drug testing, and drivers’ education services. Using this money, the center was able to purchase a new voicemail and computer network system, which facilitated communication among staff and streamlined center operations. Similarly, in Sunnyvale, California, one-stop managers said they raised $20,000 through downsizing and training services for employers, and used this money to expand the one-stop’s training services.

Centers have also been proactive about applying for grants from public and private sources. For example, the one-stop center in Kansas City, Missouri, had a full-time staff person dedicated to researching and applying for grants. The one-stop generated two-thirds of its entire program year 2002 operating budget of $21 million through competitive grants available from the federal government as well as from private foundations. This money allowed the center to expand its services, such as through an internship program in high-tech industries for at-risk youth.

One-stop centers also raised additional funds by soliciting contributions from local or state government and from partner agencies. For instance, Boston one-stop managers reported that the state of Massachusetts matched the one-stop’s Wagner-Peyser funds dollar for dollar, which enabled the center to fund its resource room and library. In addition, the Dayton, Ohio, one-stop received $1 million annually from the county to pay for shared one-stop staff salaries and to provide services to job seekers who do not qualify for services under any other funding stream. Dayton one-stop partners also contributed financial and in-kind resources to the center on an as-needed basis.

In addition to raising money through grants, managers at the one-stop in Santa Rosa, California, told us that they made more efficient use of existing funds by having staff use a funding source determination worksheet to maximize training funds from various sources. The worksheet is continually updated to show how much funding is available through each program, allowing caseworkers to choose the most economical source for eligible customers’ Individual Training Accounts (ITAs) based on the amount of money available through each funding stream and the date it is scheduled to expire.

While Labor currently tracks outcome data—such as job placement, job seeker satisfaction, and employer satisfaction—and funds several studies to evaluate workforce development programs and service delivery models, little is known about the impact of various one-stop service delivery approaches on these and other outcomes. Labor’s studies largely take a program-by-program approach rather than focusing on the impact on job seekers of various one-stop integrated service delivery approaches, such as sharing customer intake forms across programs, or on employers, such as dedicating staff to focus on engaging and serving employers.
Further, Labor’s efforts to collaborate with other federal agencies to assess the effects of different strategies to integrate job seeker services or to serve employers through the one-stop system have been limited. In addition, one-stop administrators do not have enough opportunities to share existing information about how to improve and integrate services for job seeker and employer customers. While Labor has developed a promising practices Web site to facilitate such information sharing, it is unclear how well the site currently meets this objective.

Labor currently tracks performance under the three WIA programs using 17 separate outcome measures, including job placement and job seeker and employer customer satisfaction, designed to gauge the success of WIA-funded services. However, managers at a few of the one-stop centers we visited told us that customer satisfaction data, for example, could not be linked to specific services or strategies, so one-stop managers could not use the data to improve services for their job seeker and employer customers. While outcome measures are an important component of program management in that they assess whether a participant is achieving an intended outcome—such as obtaining employment—they cannot measure whether the outcome is a direct result of program participation. Other influences, such as the state of the local economy, may affect an individual’s ability to find a job as much as or more than participation in an employment and training program. Many researchers consider impact evaluations to be the best method for determining the effectiveness of a program—that is, whether the program itself rather than other factors leads to participant outcomes.

While Labor is currently supporting a large number of impact and process evaluations of various workforce development programs and initiatives, none of these studies include an evaluation of the impact of different integrated service delivery approaches on outcomes, such as job placement or retention, or job seeker and employer satisfaction (see table 2). Examples of integrated service delivery initiatives that we observed at one-stops and that Labor could evaluate include cross-training one-stop staff, sharing customer intake across programs, and consolidating case management for customers enrolled in multiple programs. While these integrated service delivery approaches were common at the one-stops we visited, little is currently known about their impact on one-stop customer outcomes and satisfaction. In addition, there is a lack of information about which approaches to serving employers are most effective, such as dedicating staff to engage and serve employers or tailoring services for employers by offering customized training or pre-screening job applicants, for example.

Employment and Training Administration (ETA) officials provided us with information on their current research, such as the Microanalysis of the One-Stop—a process evaluation that Labor has initiated to analyze how job seekers and employers access the array of available one-stop services. While this study offers an analysis of the implementation and operation of integrated service delivery, it does not measure the impact of this integration on one-stop customers’ satisfaction or outcomes. In addition, the impact evaluations that Labor is currently undertaking typically take a program-by-program approach and do not measure the effectiveness of integrated services.
For instance, Labor’s evaluation comparing the impact of various approaches to implementing Individual Training Accounts only includes WIA program participants, and its evaluation of self-directed job search in a one-stop environment focuses only on UI recipients.

ETA officials told us that a major barrier they face to conducting a broader array of impact studies is their limited research budget—$35 million for demonstration grants and $9 million for evaluations in fiscal year 2003. In a few cases, Labor has sought to address these funding limitations by collaborating with other federal agencies to fund studies. For example, Labor is helping HHS fund the $26 million Employment, Retention and Advancement Study, an evaluation assessing strategies to promote employment retention and advancement among welfare recipients and low-wage workers. Labor is also collaborating with the Department of Education on a process evaluation examining the implementation of school-to-work programs at selected Job Corps centers. Such collaboration not only enables Labor to address funding limitations, but it also has the potential to facilitate evaluations of service delivery approaches that span multiple programs overseen by different agencies. However, in spite of these benefits, Labor is currently engaging in only a limited number of such collaborations. Moreover, none of these collaborative studies are specifically directed toward evaluating the impact of one-stop services or integrated service delivery approaches.

While Labor has developed several mechanisms for providing guidance and allowing local one-stop administrators to share information on how to move beyond the basic requirements of WIA toward improving and integrating one-stop services, these efforts have been limited. Labor’s primary mechanisms for disseminating information about promising practices at one-stop centers are a Web site, forums, and conferences. The promising practices Web site, which is funded by Labor and is operated by Northern Illinois University’s Center for Governmental Studies, represents a promising step toward building a mechanism to support information sharing among one-stop administrators. However, neither ETA nor the Web site’s administrators have conducted a customer satisfaction survey or user evaluation of the site, so little is known about how well the site currently meets its objective to promote information sharing about promising practices. Much of the information available on the Web site comes from submissions by one-stop centers or research organizations, yet Web site administrators told us that these submissions have not been screened to ensure that their content is useful.

Furthermore, relevant literature stresses that information presented through Web sites should be accessible, useful, and well organized. When we attempted to use the Web site, we found that useful information on the site was difficult to access. To find information about promising practices through the site, we had to conduct a search by key word, which often did not yield satisfactory results. Search results were organized alphabetically, not by relevance, and some of the results addressed the search topic only marginally. In addition, search results included a disorganized mixture of external documents, links to other Web sites, and submissions.
For instance, a search under the keywords "service integration" yielded six results, including two links to external Web sites, two external documents, and two promising practices submissions. Of these six results, two did not directly address promising practices in the area of service integration.

In addition to the Web site, Labor hosts regular regional meetings and cosponsors several national conferences to promote information dissemination and networking opportunities for state and local grantees and stakeholders. Labor also hosted several forums during WIA implementation to allow information exchanges to occur between the department and state and local one-stop administrators. While these conferences and forums provide a venue for one-stop managers to talk with one another about what is and is not working at their centers, participation is limited to those who can physically take part.

The workforce development system envisioned under WIA represents a fundamental shift from prior systems, and barely 3 years have passed since it was fully implemented. States and localities are learning how to use the flexibility afforded by WIA to develop systems that work for their local areas and that implement WIA’s vision of a customer-focused system. The one-stop centers we visited are coordinating with the 17 mandatory partners, and often multiple optional partners, to create a one-stop system that strives to streamline services for job seekers and make employers a significant part of the one-stop system. While the one-stops we visited ranged in terms of their location—from urban to suburban to rural—we saw numerous examples of one-stops streamlining services for job seekers, developing business-related services to meet the needs of employers, and supporting a one-stop infrastructure that provides the full range of assistance needed by job seekers and employers to serve local workforce needs. While these strategies show promise for improving services to job seekers and employers alike, there is no clear understanding of whether these integrated service delivery approaches are actually increasing job seeker and employer satisfaction or outcomes, such as job placement and retention. Labor’s current research efforts focus on individual programs and have yet to take into account that customers are now served by a one-stop system in which multiple programs from four federal agencies provide services. Moreover, few efforts have been made to share information on promising practices. It is unclear whether one effort, a promising practices Web site supported by Labor, is effective in meeting its objective to promote information sharing about promising practices. Without the right research or information sharing tools, it is difficult to know which one-stop practices are, in fact, successful and how the system might be improved in the long run.

In order to better understand and disseminate information on how well different approaches to program integration are meeting the needs of one-stop job seekers and employers, we recommend that the Secretary of Labor (1) collaborate with the Departments of Education, Health and Human Services, and Housing and Urban Development to develop a research agenda that examines the impacts of various approaches to program integration on job seeker and employer satisfaction and outcomes, such as job placement and retention, and (2) conduct a systematic evaluation of the promising practices Web site to ensure that it is effective.
We provided a draft of this report to Labor for comment. Labor agreed with our recommendations and expressed appreciation for our acknowledgment of the progress made by local one-stop centers. However, Labor suggested we recognize other research activities undertaken by ETA and its efforts to share promising practices. We have incorporated Labor’s comments in our report, as appropriate. A copy of Labor’s response is in appendix III.

Specifically, Labor agrees with our recommendation that better information is needed to assess the impact of integrated services on customer outcomes and satisfaction, but noted that it collects performance information that includes job seeker and employer customer satisfaction data. In addition, Labor told us it is working on implementing common performance measures for the one-stop system. As we noted in the report, outcome measures are an important part of program management, but alone do not allow for an understanding of whether the outcome is a direct result of program participation. We continue to stress the need for more impact studies in order to understand whether integrated services are making a difference. Labor also agrees with our recommendation that it conduct a systematic evaluation of the Web site to ensure that it is effective. Labor told us that it is undertaking a strategic review of its Web sites, including the promising practices site, that is intended to identify ways to improve customer access to information. Labor also said that it is engaged in other activities to effectively share information about what is working well in one-stop centers. For example, ETA hosts regular regional meetings for state administrators and funds a number of efforts that produce, recognize, and share promising practices within the workforce system.

We are sending copies of this report to the Secretary of Labor, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix IV.

Operators of the one-stop centers we visited included Arapahoe/Douglas Works! (local government consortium); Workforce Essentials, Inc. (nonprofit); a partner consortium led by the Greater Erie Community Action Committee (nonprofit); the Full Employment Council (nonprofit); and the Eastern Kentucky Concentrated Employment Program (nonprofit).

While sites were identified as exemplary based on their promising practices in one of three key areas—serving job seekers, engaging employers, and operating the one-stop center—we found that all 14 of the one-stops we visited exhibited numerous promising practices in multiple areas. The selection of promising practices described below represents some of the strongest or most unique examples from each site.

Arapahoe/Douglas Works! Colorado Workforce Center
14980 E. Alameda Drive

Working with intermediaries to engage and serve employers - Arapahoe/Douglas Works! works closely with local Chambers of Commerce and economic development entities to conduct outreach to employers. Each year Arapahoe/Douglas Works! and the local Chamber hold an employer recognition awards event, which not only markets the one-stop system to business, but also encourages workplace innovation by honoring three employers with awards for work-life balance, community partnerships, and outstanding youth employer.

Dedicating specialized staff to address local industry needs - Because of a local nursing labor shortage, the one-stop dedicated specialized staff to establish an on-site healthcare recruitment center to help job seekers find training opportunities in the healthcare field.

Promoting partner collaboration - In addition to regular meetings between on-site and off-site staff, the one-stop has a staff person designated to act as a liaison and facilitate communication between on-site and off-site partners.

Developing optional partnerships to expand services - Arapahoe/Douglas Works! partners with the Department of Corrections to provide transition services for juvenile offenders.

Raising additional funds to expand services - Arapahoe/Douglas Works! raised about $620,000 through contracts with local schools to provide workforce development services for at-risk high school students. The one-stop also raised about $80,000 through an on-site learning lab for students at risk of dropping out of school.

Anoka County Workforce Center
1201 89th Avenue NE

Ensuring partner staff understand the range of services - Staff periodically participate in center-wide meetings where they make presentations to one another about their program’s services and role at the center. In addition, partners lead workshops on how to better serve their particular customer base. Officials reported that cross-training results in increased referrals across partner programs.

Streamlining services through consolidated case management - The caseworkers from the various one-stop programs meet regularly to collaborate in developing and implementing joint service plans for customers who are co-enrolled in multiple programs.

Tailoring services to meet employers’ specific workforce needs - The one-stop developed an English-as-a-Second-Language course tailored to the needs of a local nursing home. The course was taught on-site at the nursing home by one-stop staff.

Promoting partner collaboration - Partners collaborate in functional teams to manage the one-stop. Collaboration among partners was enhanced when they jointly applied for a One-Stop Implementation grant from the state of Minnesota. Because of the strong sense of cooperation among them, partners pooled their resources when possible to ensure the continued funding of services.

Raising additional funds to expand services - An H-1B grant and a grant from the McKnight Foundation enabled the center to expand services for customers. These grants allowed the center to implement a training program in healthcare-related fields and develop a social services and car donation program for people who do not qualify for any other program.

The Work Place
99 Chauncy Street

Ensuring job seekers’ access to services - Because the majority of the Work Place’s partners are located off-site, the one-stop placed staff in off-site locations, including family courts, correctional facilities, and welfare offices, to give job seekers ready access to employment-related services.

Dedicating specialized staff to establish relationships with employers - The Work Place has staff dedicated to recruiting, engaging, and maintaining employer involvement.
The Work Place has focused on measuring employer satisfaction and soliciting employer feedback to guide it in improving its employer services. The center has established employer focus groups to identify the services employers used and their satisfaction with those services.

Tailoring services to meet employers’ specific workforce needs - The Work Place screens applicants and provides referrals to the Marriott Hotel’s Pathways to Independence program, a nationwide job readiness program for people with multiple barriers to employment. About 75 percent of program graduates over the past 5 years were recruited through The Work Place.

Developing optional partnerships to expand services - The Work Place has developed an optional partnership with the Suffolk County House of Corrections to provide community reintegration services for prisoners prior to their release. One of the programs is an offender re-entry program that conducts job fairs inside the county jail to facilitate incarcerated offenders’ transition back into the workplace.

Raising additional funds to expand services - The state of Massachusetts matches the Boston one-stop’s Wagner-Peyser funds dollar for dollar, which enables the center to fund its resource room and library.

WorkForce Essentials, Inc.
110 Main Street

Ensuring job seekers’ access to services - The Clarksville one-stop provides a clear path for job seekers to follow between one-stop services. When job seekers are referred to another partner program, staff personally walk them over to the referred program staff to prevent them from getting lost between programs.

Dedicating specialized staff to establish relationships with employers - WorkForce Essentials, Inc., dedicates staff to conduct employer outreach in order to provide employer services and identify employment opportunities for job seekers. One-stop operators said that outreach to employers has helped engender employer trust in the organization and the job seekers it serves.

Working with intermediaries to engage and serve employers - The Clarksville one-stop staff worked with Chamber of Commerce members to provide math training in order to improve the pool of entry-level employees for the local banking industry. This helped connect job seekers with available job openings at local banks.

Tailoring services to meet employers’ specific workforce needs - The one-stop provided tailored hiring services, including drug testing and pre-screening of applicants, for a manufacturing company, resulting in the company hiring over 75 people through the one-stop.

Developing optional partnerships to expand services - Managers in Clarksville told us that co-location with the Tennessee Department of Human Services, which administers TANF, benefits all job seekers because the department helps fund various services, including computer classes, soft skills classes, and parenting classes that can be provided to those not eligible for TANF.

Raising additional funds to expand services - WorkForce Essentials, Inc., raised $750,000 in fiscal year 2002 through drivers’ education courses, drug testing, recruitment, and skills assessment services. This money was used to pay salaries and to purchase voicemail and a computer network system. In addition, the one-stop received a $2.8 million H-1B Technical Skills Training Grant from DOL, through which it has provided high-skills training to over 900 workers so far.

The Job Center
111 S. Edwin C. Moses Boulevard

Streamlining services through consolidated case management - Caseworkers from various programs, including TANF, Medicaid, Food Stamps, and WIA, share caseloads and coordinate their service plans for job seekers.

Ensuring partner staff understand the range of services - One-stop managers reported that cross-training on-site and off-site partners dramatically improves referrals to the Child Support Program, thereby enhancing efforts to establish paternity, a requirement for determining eligibility for TANF. Additionally, they indicated that their cross-trained staff referred job seekers to service providers that had previously been unknown, such as to smaller programs within the one-stop or local neighborhood programs.

Promoting partner collaboration and co-location - Partners collaboratively operate the one-stop through four councils. All partners are asked to participate and all have equal voice in decision-making. Additionally, partners contributed space and other resources to help other partners co-locate. The center is housed in a former shopping mall, which offers plenty of flexible space to allow all partners to co-locate.

Developing optional partnerships to expand services - At the time of our visit, the Dayton one-stop had over 30 optional partners on-site, including the Montgomery County Combined Health District, which operates a health clinic on-site; Clothes that Work!, which provides free business attire to low-income women; and an alternative high school.

Raising additional funds to expand services - The one-stop receives $1 million annually from the county to pay for shared one-stop staff salaries and to provide services to job seekers who do not qualify for services under any other funding stream. Dayton one-stop partners also contribute financial and in-kind resources to the center on an as-needed basis.

Pennsylvania CareerLink
1309 French Street

Streamlining services through consolidated case management - To efficiently coordinate multiple services for one-stop customers, Erie one-stop staff use a networked computer system with a shared case management program, so that they can share access to a customer’s service plan and case file.

Ensuring job seekers’ access to services - The one-stop positions a staff person at the doors to the center to help job seekers entering the center find needed services and to ensure that exiting job seekers have received the services they sought.

Working with intermediaries to engage and serve employers - CareerLink staff collaborated with numerous local economic development entities to develop an outreach program that assesses the workforce needs of employers and links employers with appropriate services.

Promoting partner collaboration - The one-stop staff is organized into four functional teams that meet weekly to work on common goals and develop new strategies. These teams have developed innovative strategies to improve service delivery, including the creation of a resource guide for caseworkers and a color-coded intake form.

Strengthening relationships among partners - Staff at CareerLink participate in frequent team-building activities, such as social events and recognition ceremonies, to promote a positive, integrated working environment.

Full Employment Council
1740 Paseo, Kansas City, MO 64108

Streamlining services through consolidated intake procedures - Youth Opportunity and the WIA Youth program staff share intake and enrollment forms to streamline the delivery of services to youth.
This process alleviates the burden of multiple intake and assessment forms when registering participants.

Ensuring job seekers’ access to services - The one-stop management decided to locate the one-stop center next to the bus company, the Area Transit Authority (ATA). This strategic decision meant that all bus routes passed by the one-stop center, ensuring that customers with transportation problems could access one-stop services. Additionally, the ATA partnered with the one-stop to create an Urban Employment Network program that assists job seekers with transportation to and from work 7 days a week, from 5:00 in the morning until midnight, and has set up a van service to operate during off-peak hours.

Working with intermediaries to engage and serve employers - The Full Employment Council uses the Chamber of Commerce as an intermediary with employers. The chamber has a workforce issues division that does outreach to educate employers about recruitment and retention strategies and services offered at the one-stop center. While staff at the Kansas City one-stop assist job seekers with disabilities, the Chamber works with local employers to educate them about hiring disabled workers and integrating them into the workplace.

Promoting partner co-location - The one-stop enabled the Adult Basic Education program to co-locate by allowing it to contribute instructors and GED classes instead of paying rent.

Raising additional funds to expand services - The Kansas City one-stop has a staff person specifically dedicated to researching grant opportunities and writing grant applications. Through pursuing grant opportunities, the center has been able to raise about $14 million, which represents two-thirds of its total budget in program year 2002. These additional funds enable the one-stop staff to address local workforce concerns and provide services, such as internship opportunities in high-tech industries for at-risk youth.

Kenosha County Job Center
8600 Sheridan Road

Streamlining services through consolidated case management - Case files for economic support, case management, job placement, and childcare services are shared on a networked computer system that staff from these four programs can access. Staff from these programs collectively develop an action plan for their customers and share an electronic calendar for scheduling customers’ appointments and workshops.

Working with intermediaries to engage and serve employers - The one-stop collaborates with local community colleges and the Kenosha Area Business Alliance, an economic development association, to identify labor and skills shortages in local industry. These partnerships have not only helped employers with human resources assistance—such as recruitment, networking, and marketing—but they have also assisted employers with assessment and training of new and existing employees. For example, the one-stop’s relationship with a local community college led to the development of a Certified Nursing Assistant course taught in Spanish.

Promoting partner collaboration - Regular functional team meetings allow partners to share ideas, work together to solve problems, and develop strategies to improve services. For example, through functional teams, partners were able to establish an on-site childcare center.

Promoting partner co-location - Goodwill Industries, a one-stop partner, pays rent for smaller partners that cannot afford to pay rent on their own to expand services for job seekers.
Central Texas Workforce Center
300 Cheyenne

Ensuring job seekers’ access to services - To serve customers with transportation challenges, staff in Killeen partner with the libraries in rural areas to provide computer access to one-stop resume writing and job search services. They also provide an on-line TANF orientation, so that customers can access it remotely. Additionally, when one-stop center staff refer job seekers to one of their many partners, the staff personally introduce the job seeker to the referred program staff to prevent job seekers from getting lost between programs.

Developing optional partnerships to expand services - The one-stop improved job seeker access to services by forming relationships with optional partners such as TANF. One-stop staff told us that co-location with TANF services helps welfare recipients address barriers to employment by facilitating easier access to services, such as housing assistance and employment and training programs.

Dedicating specialized staff to establish relationships with employers - The one-stop has specialized staff serving not only as the central contact for receiving calls and requests from employers but also as the primary source for identifying job openings available through employers in the community.

Tailoring services to meet employers’ specific workforce needs - In collaboration with local community colleges and the Chamber of Commerce, the one-stop created a Business Resource Center that offers services specifically for entrepreneurs and new businesses, including tax assistance and workshops on starting or improving a business.

Raising additional funds to expand services - The one-stop has applied for multiple transportation grants to improve access to jobs for rural job seekers. In addition, the one-stop raised $309,000 in fiscal year 2001 by renting out space to local businesses and by providing services to employers.

Pike County JobSight Center
120 South Riverfill Drive

Ensuring partner staff understand the range of services - Cross-training workshops taught by partner staff educate staff about the one-stop’s diverse array of services. Although partners specialize in a particular area of expertise, cross-training has improved referrals and enabled staff to better ensure that job seekers get the tools they need to become successfully employed.

Tailoring services to meet employers’ specific workforce needs - When eastern Kentucky encountered a labor shortage in the coal mining industry, the one-stop recruited a large pool of local applicants and created an on-the-job training program using WIA funds, which paid for half of new miners’ salaries during their training period.

Dedicating specialized staff to establish relationships with employers - Specific JobSight staff are dedicated to employer outreach and customizing services. These staff were able to help attract a large cabinet manufacturer to the area by offering a customized service package, including prescreening and assessment, on-the-job training, and a 3-day job fair.

Promoting partner collaboration - When the one-stop was created, partners participated in intensive workshops and collaboratively designed a service delivery plan to reduce service duplication. In addition, partners collaboratively designed a common intake form and a service delivery flow chart.
Creating a shared one-stop identity - One-stop managers told us that shared decision making was instrumental in developing a common one-stop identity and in ensuring partners’ support for the one-stop system. The process of creating a shared one-stop identity in Pikeville was also supported by the adoption of a common logo and nametags, and was reinforced by a comprehensive marketing campaign.

Promoting partner co-location - The local community college donated free space to the one-stop on its conveniently located campus, making it desirable for partners to relocate there.

South County Employment Center
5735 S. Redwood Road

Streamlining services through consolidated case management - The caseworkers at the Salt Lake City one-stop are divided into four teams that share case management of customers. The Job Connection Team is stationed at the front desk and helps customers by doing quick assessments, making referrals, conducting UI profiling, and assisting with computer access. Caseworkers from the three Employment Teams specialize in a particular program, and all caseworkers meet once a month to discuss program requirements and how to streamline services for co-enrolled customers.

Ensuring partner staff understand the range of services - Department of Workforce Services and Vocational Rehabilitation caseworkers participate in frequent cross-training sessions, so they are capable of assisting co-enrolled customers. One-stop managers reported that cross-training has improved staff understanding of programs outside their area of expertise and enhanced their ability to make referrals. There is also a shadow program in which staff members shadow one another for a few hours to learn about one another’s jobs and the programs they administer.

Ensuring job seekers’ access to services - The one-stop established a Web-based job search program on which job seekers can post resumes and look for jobs. This Web site reduces customer flow, saves money, and makes it more convenient for people to look for jobs from their homes or offices.

Dedicating specialized staff to establish relationships with employers - Employers have a separate one-stop center where they can conduct interviews, access labor market information, attend seminars, and use computers. The center has specialized employer outreach and business services staff that act as liaisons to employers, organize job fairs, and assist with job placements.

Promoting partner collaboration - Partners created a “MOUse” committee to address Memorandum of Understanding (MOU) issues, including referrals, information systems, employer involvement, cross-training, and service accessibility. Staff from the Vocational Rehabilitation Program in Salt Lake City told us that this committee helped to increase referrals to their program by producing flow charts of the service delivery systems of various partner programs to identify points at which referrals and staff collaboration should occur.

Sonoma County Job Link One-Stop Center
2245 Challenger Way, Santa Rosa, CA 95407

Dedicating specialized staff to establish relationships with industries - In Santa Rosa, staff are dedicated to specific industries in order to better address local labor shortages. When Santa Rosa’s tourism industry was in need of more skilled workers, the one-stop worked with the local community college to ensure that job seekers were connected to certification courses in hotel management and the culinary arts.
Also, the one-stop center has a Small Business Development Center, funded by the Small Business Administration, that provides consulting services to small businesses.

Working with intermediaries to engage and serve employers - The one-stop focuses heavily on using existing partnerships with intermediaries, such as the Economic Development Board, to market its services to employers and to utilize information gathered from employer surveys. Managers told us this partnership has helped caseworkers better understand particular industries and job market fluctuations.

Developing optional partnerships to expand services - The one-stop is collaborating with CalWORKS, the state TANF program, which allows it to provide additional services, such as employer account representatives. These representatives work with employers, the Workforce Investment Board, and caseworkers to gather and disseminate information about the labor market, particularly high-demand industries.

Raising additional funds to expand services - Santa Rosa has been better equipped to receive national grants and grants from the state of California by collaborating with three other Workforce Investment Boards in the area. In addition, this collaboration has improved local labor market information and sharing of promising practices.

Improving one-stop operations - Partner staff use a Funding Source Determination Worksheet to ensure that customers’ services are paid for by the most appropriate grant or by a variety of funding streams to maximize funding in the long run. The funding sheet helps alleviate some cost burden on partners with tighter training budgets.

Connect!
420 S. Pastoria Avenue

Dedicating specialized staff to establish relationships with employers - Connect! has dedicated staff to providing a variety of services (both free and fee-based) to meet business needs, including staffing services, such as prescreening of job applicants and on-site recruiting; transition/outplacement services to help downsizing businesses assist displaced workers; educational resources; and training, such as technical training for small business IT workers.

Tailoring services to meet employers’ specific workforce needs - The one-stop is co-located with a patent and trademark library that is electronically linked to the national trademark office to assist customers seeking entrepreneurial opportunities.

Gathering labor market information on local industries - Connect! conducted Labor Market Information Plus (LMI+) studies of local industries to gather information on current workforce issues and challenges and predict future labor market trends.

Raising additional funds to expand services - One-stop managers raised $20,000 through fee-based downsizing and training services for employers and used this money to expand the one-stop’s business services.

Improving one-stop operations - In order to improve its operations, Connect! conducted an assessment (Voice of the Customer Initiative) to better understand customer expectations and needs. As a result, the one-stop reorganized its operations, redefined relationships with partners, developed a new outcome budget structure, and created specialized one-stop centers for businesses, job seekers, and youth.

Cumberland County One-Stop
415 Landis Avenue

Ensuring job seekers’ access to services - By addressing customers’ transportation challenges, the Cumberland County One-Stop enhanced access to training and employment opportunities for rural customers.
The one-stop now provides transportation to employment sites that are difficult for customers to access, such as the Atlantic City casinos.

Ensuring partner staff understand the range of services - Program staff attend monthly meetings to educate one another about various program rules, which improves referrals and eligibility determination for customers. For example, all program staff attended training on how to assess customers’ eligibility for the TANF program.

Tailoring services to meet employers’ specific workforce needs - When employer services staff realized the application process for tax credits was cumbersome for employers, they completed the required paperwork themselves so that employers could receive the tax credit incentives.

Working with intermediaries to engage and serve employers - The Cumberland County One-Stop negotiated an agreement with the local Empowerment Zone Office, requiring that new businesses use the one-stop center for recruitment before using their own private resources. This arrangement allows the one-stop to stay informed of employer needs and potential opportunities for job seekers.

Working with intermediaries to engage and serve employers - The Vineland one-stop belongs to the three Chambers of Commerce in the area and attends many of their events. Business services staff make presentations about the one-stop’s services at professional conferences, chamber meetings, and other local events.

Elisabeth Anderson, Elizabeth Caplick, and Tamara Harris made significant contributions to this report. In addition, Shana Wallace assisted in the study design; Jessica Botsford provided legal support; and Patrick DiBattista assisted in the message and report development.

Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003.
Workforce Investment Act: Issues Related to Allocation Formulas for Youth, Adults, and Dislocated Workers. GAO-03-636. Washington, D.C.: April 25, 2003.
Multiple Employment and Training Programs: Funding and Performance Measures for Major Programs. GAO-03-589. Washington, D.C.: April 18, 2003.
Food Stamp Employment and Training Program: Better Data Needed to Understand Who Is Served and What the Program Achieves. GAO-03-388. Washington, D.C.: March 12, 2003.
Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003.
Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003.
Workforce Investment Act: States’ Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002.
Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002.
Workforce Investment Act: Coordination of TANF Services Through One-Stops Has Increased Despite Challenges. GAO-02-739T. Washington, D.C.: May 16, 2002.
Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002.
Workforce Investment Act: Coordination of TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001.
Workforce Investment Act: New Requirements Create Need for More Guidance. GAO-02-94T. Washington, D.C.: October 4, 2001.
Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000.
To create a more comprehensive workforce investment system, the Workforce Investment Act (WIA) of 1998 requires states and localities to coordinate most federally funded employment and training services into a single system, called the one-stop center system. This report examines how selected one-stop centers have used the law’s flexibility to implement their own vision of WIA and provides information on promising practices for (1) streamlining services for job seekers, (2) engaging the employer community, and (3) building a solid one-stop infrastructure by strengthening partnerships across programs and raising additional funds. In addition, it provides information on the actions the Department of Labor is taking to collect and share information about what is working well for job seeker and employer customers in one-stop centers.

Of the 14 one-stop centers in GAO’s study that were identified as exemplary by government officials and workforce development experts, all had implemented a range of promising practices to streamline services for job seekers, engage the employer community, and build a solid one-stop infrastructure. The one-stop centers GAO visited streamlined services for job seekers by ensuring access to needed services, educating program staff about all of the one-stop services available to job seekers, and consolidating case management and intake procedures. In addition, all of the one-stop centers GAO visited used at least one of the following three methods to engage employers: dedicating specialized staff to work with employers or industries, working with employers through intermediaries, such as Chambers of Commerce or economic development entities, or tailoring services to meet specific employers’ needs. To provide the infrastructure to support better services for job seekers and employers, many of the one-stops GAO visited found innovative ways to strengthen program partnerships and to raise additional funds beyond those provided under WIA. Center operators fostered the development of strong program partnerships by encouraging partner collaboration through functional work teams and joint projects, and they raised additional funds through fee-based services, grants, and contributions from partners and state or local governments.

While Labor currently tracks outcome data—such as job placement, job seeker satisfaction, and employer satisfaction—and funds several studies to evaluate workforce development programs and service delivery models, little is known about the impact of various one-stop service delivery approaches on these and other outcomes. Labor’s studies largely take a program-by-program approach rather than focusing on the impact on job seekers of various one-stop integrated service delivery approaches, such as sharing customer intake forms across programs, or on employers, such as dedicating staff to focus on engaging and serving employers. Further, Labor’s efforts to collaborate with other federal agencies to assess the effects of different strategies to integrate job seeker services or to serve employers through the one-stop system have been limited. In addition, one-stop administrators have had too few opportunities to share information about how to improve and integrate services. While Labor has developed a promising practices Web site to facilitate such information sharing, it is unclear how well the site currently meets this objective.
IRS’s fiscal year 2006 budget request reflects a continuing shift in priorities by proposing reductions in taxpayer service and increases in enforcement activities. The request does not provide details about how the reductions will impact taxpayers in the short term. Nor does IRS have long-term goals; thus the contribution of the fiscal year 2006 budget request to achieving IRS’s mission in the long term is unclear. Because of budget constraints and the progress IRS has made improving the quality of taxpayer services, this is an opportune time to reconsider the menu of services IRS offers.

IRS is requesting $10.9 billion, which includes just over a 1 percent decrease for taxpayer service, a 2 percent decrease for BSM, and nearly an 8 percent increase for enforcement, as shown in table 1. As table 1 further shows, the changes proposed in the 2006 budget request continue a trend from 2004. In comparison to the fiscal year 2004 enacted budget, the 2006 budget request proposes almost 4 percent less for service, almost 49 percent less for BSM, and nearly 14 percent more for enforcement. As table 1 also shows, taxpayer service sustained a reduction of $104 million or 2.8 percent between fiscal years 2004 and 2005. According to IRS officials, the majority of this reduction was the result of consolidating paper-processing operations, shifting resources from service to enforcement, and reducing some services. IRS officials said that this reduction is not expected to adversely impact the services they provide to taxpayers but added that the agency cannot continue to absorb reductions in taxpayer service without beginning to compromise some services.

For fiscal years 2005 and 2006, table 2 shows some details of changes in both dollars and full-time equivalents (FTE). Both are shown because funding changes do not translate into proportional changes in FTEs due to cost increases for salaries, rent, and other items. For example, the $39 million or 1.1 percent reduction in taxpayer service translates into a reduction of 1,385 FTEs or 3.6 percent. Similarly, the over $500 million or 7.8 percent increase in enforcement spending translates into an increase of 1,961 FTEs or 3.4 percent. The difference between changes in dollars and FTEs could be even larger because of unbudgeted expenses.

Unbudgeted expenses have consumed some of IRS’s budget increases and internal savings over the last few years. Unbudgeted expenses include unfunded portions of annual salary increases, which can be substantial given IRS’s large workforce, and other costs such as higher-than-budgeted rent increases. According to IRS officials, these unbudgeted expenses accounted for over $150 million in each of the last 4 years. An IRS official also told us that the agency anticipates having to cover unbudgeted expenses in 2006. As of March 2005, IRS officials were projecting unbudgeted salary increases of at least $40 million. This projection could change since potential federal salary increases for 2006 have not been determined.

The budget request provides some detail on how IRS plans to absorb cost increases in the taxpayer service budget. IRS is proposing a gross reduction of over $134 million in taxpayer service from reexamining the budget’s base and plans to use more than $95 million of it to cover annual increases such as salaries. This leaves a net reduction of nearly $39 million or 1.1 percent in the taxpayer service budget.
The extent to which IRS is able to achieve the gross reductions will impact its ability to use the funds as anticipated. Decisions on how the $134 million gross reduction would be absorbed were not finalized prior to releasing the budget. According to IRS officials, some of the reductions would result from efficiency gains such as reducing printing and postage costs; however, others would result from reductions in the services provided to taxpayers such as shortening the hours of toll-free telephone service operations. The officials also said most decisions have now been made about general areas for reduction and most changes will not be readily apparent to taxpayers. Although IRS has made general decisions about the reductions, many of the details have yet to be determined. Therefore, the extent of the impact on taxpayers in the short term is unclear. For example, IRS plans to reduce dependence on field assistance, including walk-in sites, but has not reached a final decision on how to reduce services. Table 3 provides further detail on how IRS is proposing to reduce funding and resources for taxpayer service.

IRS’s fiscal year 2006 budget request is the sixth consecutive year the agency has requested additional staffing for enforcement. However, up until last year, IRS was unable to increase enforcement staffing; unbudgeted costs and other priorities consumed the budget increase. IRS’s proposal for fiscal year 2006, if implemented as planned, would return enforcement staffing in these occupations to their highest levels since 1999. Of the more than $500 million increase requested for 2006, about $265 million would fund enforcement initiatives, over $182 million would be used in part for salary increases, and over $55 million reflects a proposed transfer of funding authority from the Department of Justice’s Interagency Crime and Drug Enforcement program. The $500 million increase would be supplemented by internal enforcement savings of $88 million. As is the case with taxpayer service savings, the extent to which IRS achieves enforcement savings will affect its ability to fund the new enforcement initiatives.

The $265 million for new enforcement initiatives consists of:

$149.7 million and 920 FTEs to attack corrosive non-compliance activity driving the tax gap, such as abusive trusts and shelters, including offshore credit cards and organized tax resistance;

$51.8 million and 236 FTEs to detect and deter corrosive corporate non-compliance by attacking complex abusive tax avoidance transactions on a global basis and challenging those who promote their use;

$37.9 million and 417 FTEs to increase individual taxpayer compliance by identifying and implementing actions to address non-compliance with filing requirements, increasing Automated Underreporter resources to address the reporting compliance tax gap, increasing audit coverage, and expanding collection work in walk-in sites;

$14.5 million and 77 FTEs to combat abusive transactions by entities with special tax status by initiating examinations more promptly, safeguarding compliant customers from unscrupulous promoters, and increasing vigilance to ensure that the assets of tax-exempt organizations are put to their intended tax-preferred purpose and not misdirected to fund terrorism or for private gain; and

$10.8 million and 22 FTEs to curtail fraudulent refund crimes.
The $88 million in internal savings would be reinvested to perform the following activities:

- $66.7 million and 585 FTEs to devote resources to front-line enforcement work;

- $14.9 million and 156 FTEs to, in part, address bankruptcy-related workload; and

- $6.7 million and 52 FTEs to address complex, high-risk issues such as compliance among tax professionals.

In the past, IRS has had trouble achieving enforcement staffing increases because other priorities, including unbudgeted expenses, have absorbed additional funds. IRS achieved some gains in 2004 and expects modest gains in 2005. Figure 1 shows that the number of revenue agents (those who audit complex returns), revenue officers (those who do field collection work), and special agents (those who perform criminal investigations) decreased over 21 percent between 1998 and 2003, but increased almost 6 percent from 2003 to 2004. IRS’s recent gains in enforcement staffing are encouraging. Tax law enforcement remains an area of high risk for the federal government because the resources IRS has dedicated to enforcing the tax laws have declined while its enforcement workload—measured by the number of taxpayer returns filed—has continually increased. Figure 2 shows the trend in field, correspondence, and total audit rates since 1995. Field audits are conducted face-to-face, while correspondence audits are typically less complex and are conducted through notices. IRS experienced steep declines in audit rates from 1995 to 1999, but the audit rate—the proportion of tax returns that IRS audits each year—has slowly increased since 2000. The figure shows that the increase in total audit rates of individual filers has been driven mostly by correspondence audits, while more complex field audits continue to decline. The link between the decline in enforcement staff and the decline in enforcement actions, such as audits, is complicated, and the real impact on taxpayers’ rate of voluntary compliance is not known. This leaves open the question of whether the declines in IRS’s enforcement programs are eroding taxpayers’ incentives to voluntarily comply. IRS’s National Research Program (NRP) recently completed a study on compliance by individual tax filers based on tax data provided on 2001 tax returns. The study estimated that the tax gap—the difference between what taxpayers owe and what they pay—was at least $312 billion per year as of 2001 and could be as large as $353 billion. This study is important for several reasons beyond measuring compliance. It is intended to help IRS better target its enforcement actions, such as audits, on non-compliant taxpayers and minimize audits of compliant taxpayers. It should also help IRS better understand the impact of taxpayer service on compliance. IRS currently lacks, but is developing, long-term goals that can be used to assess performance and make budget decisions. Long-term goals and results measurement are a component of the statutory strategic planning and management framework that the Congress adopted in the Government Performance and Results Act of 1993. As a part of this comprehensive framework, long-term goals that are linked to annual performance measures can help guide agencies when considering organizational changes and making resource decisions. A recent Program Assessment Rating Tool (PART) review conducted by OMB reported that IRS lacks long-term goals. As a result, IRS has been working to identify and establish long-term goals for all aspects of its operations for over a year.
IRS officials said these goals will be finalized and made public as an update to the agency’s strategic plan before May 2005. For IRS and its stakeholders, such as the Congress, long-term goals can be used to assess performance and progress and to determine whether budget decisions contribute to achieving those goals. Without long-term goals, the Congress and other stakeholders are hampered in evaluating whether IRS is making satisfactory long-term progress. Further, without such goals, the extent to which IRS’s 2006 budget request would help IRS achieve its mission over the long term is unclear. For at least two reasons, this is an opportune time to review the menu of taxpayer services that IRS provides. First, IRS’s budget for taxpayer services was reduced in 2005 and an additional reduction is proposed for 2006. As already discussed, these reductions have forced IRS to propose scaling back some services. Second, as we have reported, IRS has made significant progress in improving the quality of its taxpayer services. For example, IRS now provides many Internet services that did not exist a few years ago and has noticeably improved the quality of telephone services. This opens up the possibility of maintaining the overall level of taxpayer service but with a different menu of service choices. Cuts in selected services could be offset by the new and improved services. Generally, as indicated in the budget, the menu of taxpayer services that IRS provides covers assistance, outreach, and processing. Assistance includes answering taxpayer questions via telephone, correspondence, and face-to-face at its walk-in sites. Outreach includes educational programs and the development of partnerships. Processing includes issuing millions of tax refunds. When considering program reductions, we support a targeted approach rather than across-the-board cuts. A targeted approach helps reduce the risk that effective programs are reduced or eliminated while ineffective or lower priority programs are maintained. With the above reasons in mind for reconsidering IRS’s menu of services, we have compiled a list of options for targeted reductions in taxpayer service. The options on this list are not recommendations but are intended to contribute to a dialogue about the tradeoffs faced when setting IRS’s budget. The options presented meet at least one of the following criteria that we generally use to evaluate programs or budget requests: the activity

- duplicates other efforts that may be more effective and/or efficient;

- historically does not meet performance goals or provide intended results as reported by GAO, TIGTA, IRS, or others;

- experiences a continued decrease in demand;

- lacks adequate oversight, implementation and management plans, or structures and systems to be implemented effectively;

- has been the subject of actual or requested funding increases that cannot be adequately justified; or

- has the potential to make an agency more self-sustaining by charging user fees for services provided.

We recognize that the options listed below involve tradeoffs. In each case, some taxpayers would lose a service they use. However, the savings could be used to help maintain the quality of other services. We also want to give IRS credit for identifying savings, including some on this list. The options include:
- Closing walk-in sites. As the filing season section of this testimony discusses, taxpayer demand for walk-in services has continued to decrease, and staff answer a more limited number of tax law questions in person than via telephone.

- Limiting the type of telephone questions answered by IRS assistors. IRS assistors still answer some refund status questions even though IRS provides automated answers via telephone and its Web site.

- Mandating electronic filing for some filers, such as paid preparers or businesses. As noted, efficiency gains from electronic filing have enabled IRS to consolidate paper-processing operations.

- Charging for services. For example, IRS provides paid preparers with information on federal debts owed by taxpayers seeking refund anticipation loans.

Although IRS has implemented important elements of the BSM program, much work remains. In particular, the BSM program remains at high risk and has a long history of significant cost overruns and schedule delays. Furthermore, budget reductions have resulted in significant adjustments to the BSM program, although it is too early to determine their ultimate effect. IRS has long relied on obsolete automated systems for key operational and financial management functions, and its attempts to modernize these aging computer systems span several decades. IRS’s current modernization program, BSM, is a highly complex, multibillion-dollar program that is the agency’s latest attempt to modernize its systems. BSM is critical to supporting IRS’s taxpayer service and enforcement goals. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to provide technology solutions to help reduce the backlog of collections cases. BSM is important for another reason: it allows IRS to provide the reliable and timely financial management information needed to account for the nation’s largest revenue stream and better enables the agency to justify its resource allocation decisions and congressional budgetary requests. Since our testimony before this subcommittee on last year’s budget request, IRS has deployed initial phases of several modernized systems under its BSM program. The following provides examples of the systems and functionality that IRS implemented in 2004 and the beginning of 2005. Modernized e-File (MeF). This project is intended to provide electronic filing for large corporations, small businesses, and tax-exempt organizations. The initial releases of this project were implemented in June and December 2004 and allowed for the electronic filing of forms and schedules for the form 1120 (corporate tax return) and form 990 (tax-exempt organizations’ tax return). IRS reported that, during the 2004 filing season, it accepted over 53,000 of these forms and schedules using MeF. e-Services. This project created a Web portal and provided other electronic services to promote the goal of conducting most IRS transactions with taxpayers and tax practitioners electronically. IRS implemented e-Services in May 2004. According to IRS, as of late March 2005, over 84,000 users had registered with this Web portal. Customer Account Data Engine (CADE). CADE is intended to replace IRS’s antiquated system that contains the agency’s repository of taxpayer information and, therefore, is the BSM program’s linchpin and highest priority project.
In July 2004 and January 2005, IRS implemented the initial releases of CADE, which have been used to process filing year 2004 and 2005 1040EZ returns, respectively, for single taxpayers with refund or even-balance returns. According to IRS, as of March 16, 2005, CADE had processed over 842,000 tax returns so far this filing season. Integrated Financial System (IFS). This system replaces aspects of IRS’s core financial systems and is ultimately intended to operate as its new accounting system of record. The first release of this system became fully operational in January 2005. Although IRS is to be applauded for delivering such important functionality, the BSM program is far from complete. Future deliveries of additional functionality of deployed systems and the implementation of other BSM projects are expected to have a significant impact on IRS’s taxpayer services and enforcement capability. For example, IRS has projected that CADE will process about 2 million returns in the 2005 filing season. However, the returns being processed in CADE are the most basic and constitute less than 1 percent of the total tax returns expected to be processed during the current filing season. IRS expects the full implementation of CADE to take several more years. Another BSM project—the Filing and Payment Compliance (F&PC) project—is expected to increase (1) IRS’s capacity to treat and resolve the backlog of delinquent taxpayer cases, (2) the closure of collection cases by 10 million annually by 2014, and (3) voluntary taxpayer compliance. As part of this project, IRS plans to implement an initial limited private debt collection capability in January 2006, with full implementation of this aspect of the F&PC project to be delivered by January 2008 and additional functionality to follow in later years. The BSM program has a long history of significant cost increases and schedule delays, which, in part, has led us to report this program as high-risk since 1995. Appendix II provides the history of the BSM life-cycle cost and schedule variances. In January 2005 letters to congressional appropriation committees, IRS stated that it had shown a marked improvement in significantly reducing its cost variances. In particular, IRS claimed that it reduced the variance between estimated and actual costs from 33 percent in fiscal year 2002 to 4 percent in fiscal year 2004. However, we do not agree with the methodology used in the analysis supporting this claim. Specifically, (1) the analysis did not reflect actual costs; instead, it reflected changes in cost estimates (i.e., budget allocations) for various BSM projects; (2) IRS aggregated all of the changes in the estimates associated with the major activities for some projects, such as CADE, which masked the fact that monies were shifted from future activities to cover increased costs of current activities; and (3) the calculations were based on a percentage of specific fiscal year appropriations, which does not reflect that these are multiyear projects. In February 2002, we expressed concern over IRS’s cost and schedule estimating and made a recommendation for improvement. IRS and its prime systems integration support (PRIME) contractor have taken action to improve their estimating practices, such as developing a cost and schedule estimation guidebook and developing a risk-adjustment model to include an analysis of uncertainty.
These actions may ultimately result in more realistic cost and schedule estimates, but our analysis of IRS’s expenditure plans over the last few years shows continued increases in estimated project life-cycle costs (see fig. 3). The Associate CIO for BSM stated that he believes IRS’s cost and schedule estimating has improved in the past year. In particular, he pointed out that IRS met its cost and schedule goals for the implementation of the latest release of CADE, which allowed the agency to use this system to process certain 1040EZ forms in the 2005 filing season. It is too early to tell whether this signals a fundamental improvement in IRS’s ability to accurately forecast project costs and schedules. The reasons for IRS’s cost increases and schedule delays vary; however, we have previously reported that they are due, in part, to weaknesses in management controls and capabilities. We have made recommendations to improve BSM management controls, and IRS has implemented or begun to implement these recommendations. For example, in February 2002, we reported that IRS had not yet defined or implemented an IT human capital strategy, and we recommended that IRS develop plans for obtaining, developing, and retaining requisite human capital resources. In September 2003, TIGTA reported that IRS had made significant progress in developing a human capital strategy but that it needed further development. In August 2004, the current Associate CIO for BSM identified the completion of a human capital strategy as a high priority. Among the activities that IRS is implementing are prioritizing its BSM staffing needs and developing a recruiting plan. IRS has also identified, and is addressing, other major management challenges in areas such as requirements, contract, and program management. For example, poorly defined requirements have been identified as a significant weakness contributing to project cost overruns and schedule delays. As part of addressing this problem, in March 2005, the IRS BSM office established a requirements management office, although a leader has not yet been hired. The BSM program is undergoing significant changes as it adjusts to reductions in its budget. Figure 4 illustrates the BSM program’s requested and enacted budgets for fiscal years 2004 through 2006. For fiscal year 2005, IRS received about 29 percent less funding than it requested (from $285 million to $203.4 million). According to the Senate report for the fiscal year 2005 Transportation, Treasury, and General Government appropriations bill, in making its recommendation to reduce BSM funding, the Senate Appropriations Committee was concerned about the program’s cost overruns and schedule delays. The committee also emphasized that, in providing fewer funds, it wanted IRS to focus on its highest priority projects, particularly CADE. Further, IRS’s fiscal year 2006 budget request reflects an additional reduction of about 2 percent, or about $4.4 million, from the fiscal year 2005 appropriation. It is too early to tell what effect the budget reductions will ultimately have on the BSM program. However, the significant adjustments that IRS is making to the program to address these reductions are not without risk, could potentially impact future budget requests, and will delay the implementation of certain functionality that was intended to benefit IRS operations and the taxpayer. For example: Reductions in management reserve/project risk adjustments.
In response to the fiscal year 2005 budget reduction, IRS reduced the amount that it had allotted to program management reserve and project risk adjustments by about 62 percent (from about $49.1 million to about $18.6 million). If BSM projects have future cost overruns that cannot be covered by the depleted reserve, this reduction could result in (1) increased budget requests in future years or (2) delays in planned future activities (e.g., delays in delivering promised functionality) as allocated funds are used to cover the overruns. Shifts of BSM management responsibility from the PRIME contractor to IRS. Due to budget reductions and IRS’s assessment of the PRIME contractor’s performance, IRS decided to shift significant BSM responsibilities for program management, systems engineering, and business integration from the PRIME contractor to IRS staff. For example, IRS staff are assuming responsibility for cost and schedule estimation and measurement, risk management, integration test and deployment, and transition management. There are risks associated with this decision. To successfully accomplish this transfer, IRS must have the management capability to perform this role. Although the BSM program office has been attempting to improve this capability through, for example, implementation of a new governance structure and hiring staff with specific technical and management expertise, IRS has had significant problems in the past managing this and other large development projects, and it acknowledges that it has major challenges to overcome in this area. Suspension of the Custodial Accounting Project (CAP). Although the initial release of CAP went into production in September 2004, IRS has decided not to use this system and to stop work on planned improvements due to budget constraints. According to IRS, it made this decision after it evaluated the business benefits and costs to develop and maintain CAP versus the benefits expected to be provided by other projects, such as CADE. Among the functionality that the initial releases of CAP were expected to provide were (1) critical control and reporting capabilities mandated by federal financial management laws; (2) a traceable audit trail to support financial reporting; and (3) a subsidiary ledger to accurately and promptly identify, classify, track, and report custodial revenue transactions and unpaid assessments. With the suspension of CAP, it is now unclear how IRS plans to replace the functionality this system was expected to provide, which was intended to allow the agency to make meaningful progress toward addressing long-standing financial management weaknesses. IRS is currently evaluating alternative approaches to addressing these weaknesses. Reductions in planned functionality. According to IRS, the fiscal year 2006 funding reduction will result in delays in planned functionality for some of its BSM projects. For example, IRS no longer plans to include Form 1041 (the income tax return for estates and trusts) in the fourth release of Modernized e-File, which is expected to be implemented in fiscal year 2007. The BSM program is based on visions and strategies developed in 2000 and 2001. The age of these plans, in conjunction with the significant delays already experienced by the program and the substantive changes brought on by budget reductions, indicates that it is time for IRS to revisit its long-term goals, strategy, and plans for BSM.
Such an assessment would include an evaluation of when significant future BSM functionality would be delivered. IRS’s Associate CIO for BSM has recognized that it is time to recast the agency’s BSM strategy because of changes that have occurred subsequent to the development of the program’s initial plans. According to this official, IRS is redefining and refocusing the BSM program, and he expects this effort to be completed by the end of this fiscal year. IRS has requested about $1.62 billion for IT operations and maintenance in fiscal year 2006, within its proposed new Tax Administration and Operations account. Under the prior years’ budget structure, these funds were included in a separate account, for which IRS received an appropriation of about $1.59 billion in fiscal year 2005. The $1.62 billion requested in fiscal year 2006 is intended to fund the personnel costs for IT staff (including staff supporting the BSM program) and activities such as IT security, enterprise networks, and the operations and maintenance costs of its current systems. We have previously expressed concern that IRS does not employ best practices in the development of its IT operations and maintenance budget request. Although IRS has made progress in addressing our concern, more work remains. The Paperwork Reduction Act (PRA) requires federal agencies to be accountable for their IT investments and responsible for maximizing the value and managing the risks of their major information systems initiatives. The Clinger-Cohen Act of 1996 establishes a more definitive framework for implementing the PRA’s requirements for IT investment management. It requires federal agencies to focus more on the results they have achieved and introduces more rigor and structure into how agencies are to select and manage IT projects. In addition, leading private- and public-sector organizations have taken a project- or system-centric approach to managing not only new investments but also operations and maintenance of existing systems. As such, these organizations identify operations and maintenance projects and systems for inclusion; assess these projects or systems on the basis of expected costs, benefits, and risks to the organization; analyze these projects as a portfolio of competing funding options; and use this information to develop and support budget requests. This focus on projects, their outcomes, and risks as the basic elements of analysis and decision making is incorporated in the IT investment management approach that is recommended by OMB and GAO. By using these proven investment management approaches for budget formulation, agencies have a systematic method, on the basis of risk and return on investment, to justify what are typically substantial information systems operations and maintenance budget requests. In our assessment of IRS’s fiscal year 2003 budget request, we reported that the agency did not develop its information systems operations and maintenance request in accordance with the investment management approach used by leading organizations. We recommended that IRS prepare its future budget requests in accordance with these best practices. To address our recommendation, IRS agreed to take a variety of actions, which it has made progress in implementing. For example, IRS stated that it planned to develop an activity-based cost model to plan, project, and report costs for business tasks/activities funded by the information systems budget.
The recent release of IFS included an activity-based cost module, but IRS does not currently have historical cost data to populate this module. According to officials in the Office of the Chief Financial Officer, IRS is in the process of accumulating these data. These officials stated that IRS needs 3 years of actual costs to have the historical data that would provide a basis for future budget estimates. Accordingly, these officials expected that IRS would begin using the IFS activity-based cost module in formulating the fiscal year 2008 budget request and would have the requisite 3 years of historical data in time to develop the fiscal year 2010 budget request. In addition, IRS planned to develop a capital planning guide to implement processes for capital planning and investment control, budget formulation and execution, business case development, and project prioritization. IRS has developed a draft guide, which is currently under review by IRS executives, and IRS expects it to become policy on October 1, 2005. Although progress has been made in implementing best practices in the development of the IT operations and maintenance budget, until these actions are completely implemented IRS will not be able to ensure that its request is adequately supported. Results to date show IRS has generally maintained or improved its 2005 filing season performance in key areas compared to last year despite a decrease in the 2005 budget for taxpayer service. These key areas are paper and electronic processing, telephone assistance, IRS’s Web site, and walk-in assistance. Table 4 shows performance to date in these four areas. Overall, IRS’s filing season performance to date is good news because, as table 1 shows, IRS’s budget for taxpayer service is $104 million less than the year before. According to IRS officials, IRS is absorbing this reduction by generating additional internal savings and making program reductions. However, because the filing season is not over, the extent to which IRS will achieve efficiency gains and the full impact of reductions on taxpayers in this or future filing seasons is not yet known. As of March 18, IRS had processed about 63 million individual income tax returns and 57 million refunds. According to IRS data and information from external stakeholders such as paid practitioners, processing has been uneventful and without significant disruptions. IRS officials attribute this year’s smooth processing to adequate planning and few tax law changes. This year’s processing activities are important, in part, because for the first time during the filing season, IRS is using CADE to process the simplest taxpayer accounts (1040EZ without problems or balance due). As we note in the BSM section, CADE is the foundation of IRS’s modernization effort and will ultimately replace the Individual Master File that currently houses taxpayer data for individual filers. As of March 16, 2005, CADE had processed over 842,000 tax returns without significant problems. Growth in electronic filing (e-filing) helps fund IRS’s modernization. Electronic filing allows IRS to control costs by reducing labor-intensive processing of paper tax returns. E-filing also improves taxpayer service by eliminating transcription errors associated with processing paper returns. E-filing benefits taxpayers as well, primarily by allowing them to get their refunds in half the time of paper filers. As shown in figure 5, the number of e-filed returns has increased since 1999 and the number of paper returns has decreased.
The figure also shows that these changes have allowed IRS to reduce the staff devoted to processing paper returns between 1999 and 2004 by just over 1,100 staff years. As the number of e-filed returns has increased, the number of staff years used to process those returns has not. The decline in paper processing staff allowed IRS to close its Brookhaven processing center in 2003. In addition, IRS is in the process of closing its paper processing operation in Memphis. Although the growth in e-filing is about 6.7 percent over the same period last year, it is growing at a slower rate than in previous years. Based on the current trend and the fact that the percentage of returns e-filed traditionally declines as April 15 approaches, it appears that IRS will not achieve its goal of having 68.2 million individual tax returns e-filed this year (an 11 percent increase over last year). Over recent years, IRS has undertaken numerous initiatives to increase e-filing. However, neither this year’s current growth rate nor the projected annual growth rate will enable IRS to achieve its goal of 80 percent of all individual tax returns being e-filed in 2007. This goal has focused attention on increasing e-filing. As we reported last year, IRS officials believe that achieving the goal would require additional measures to convert to e-filing the tens of millions of taxpayers and tax practitioners who prepare individual income tax returns on a computer but file them on paper. IRS officials also stated that the additional measures might need to include legislation that mandates e-filing for certain classes of returns, such as those prepared by practitioners. Last year, we reported that five states that mandated the e-filing of state tax returns, including California, also showed increases in the e-filing of federal returns. This year, three additional states have introduced mandatory e-filing of state returns by tax practitioners. Between January 1 and March 12, IRS received approximately 23 million calls. As shown in table 4, IRS’s automated service handled nearly 14 million calls and customer service representatives (CSRs) handled just over 9 million. The percentage of taxpayers who attempted to reach CSRs and actually got through and received service—referred to as the CSR level of service—remained relatively stable at 83 percent, compared to 84 percent at the same time last year. IRS reduced its 2005 goal for CSR level of service from 85 percent in 2004 to 82 percent because of the budget reduction for taxpayer service. However, IRS has been able to achieve a relatively stable CSR level of service of 83 percent since last year. According to IRS officials, this level of performance is due to staff plans being made before the level of service goal was reduced; the agency receiving fewer calls due to fewer tax law changes than in the prior year; the agency improving methods for handling calls; and an increased use of IRS’s Web site. Although CSR level of service is about the same as last year, down one percentage point, there are other indications of slippage in telephone access. Specifically, taxpayers are waiting longer to speak to a CSR. Wait times have increased by about 35 seconds, or 15 percent, compared to the same period last year. Additionally, the rate at which taxpayers abandon their calls to IRS increased from 10 percent to 11.5 percent, which translates into about 99,000 calls.
The responsible IRS official considers the increases in wait time and abandon rate to be acceptable, in part because IRS data show that the agency is using 9 percent fewer FTEs than last year and answering 195 more calls per FTE. IRS officials said they lowered the CSR level of service goal in response to the reduction in the taxpayer service budget, and they will adjust staffing plans after the filing season to address the taxpayer service budget reduction. IRS officials believe the adjustments will likely result in a lower level of service than is currently being achieved. IRS estimates that the accuracy of CSRs’ answers to taxpayers’ tax law questions improved compared to last year. Specifically, tax law accuracy increased to an estimated 87 percent, as compared to 76 percent at the same time last year. This represents a significant change from last year, when we drew attention to the declining tax law accuracy rate. According to IRS officials and staff, the improvement is primarily due to formatting changes made in 2004 to the guide that CSRs use to help them answer taxpayers’ tax law questions; the changes have enhanced the guide’s usability. IRS officials stated that the revised guide is better and more user-friendly, partly because many of the suggested improvements came from CSRs who use the guide daily. In addition, IRS officials stated that the improved tax law accuracy rate reveals that the previous version of the guide was indeed the reason for last year’s decline in tax law accuracy, and they attributed fluctuations in the tax law accuracy rate in past years to changes in the guide. IRS estimates that accounts accuracy (the accuracy of answers to questions from taxpayers about the status of their accounts) has improved compared to last year and since 2002. Taxpayers who called about their accounts received correct information an estimated 92 percent of the time, which is an improvement compared to last year’s 89 percent rate and the 88 percent rate seen in 2002 and 2003. The responsible IRS official told us that accounts accuracy rates have improved because IRS has improved its ability to monitor and manage staff, expanded training, and improved its ability to search for account information. Various data indicate that IRS’s Web site is performing well. We found it to be user-friendly because it was readily accessible and easy to navigate. Problem areas that we reported in the past, such as the search function, were much improved this filing season, eliminating our previous concerns. Furthermore, an independent weekly study done during the filing season has reported that IRS’s Web site ranked in the top 4 of 40 government Web sites and that users were able to access the IRS Web site in 0.65 seconds or less. The same independent weekly assessment reported that IRS ranked first or second in response time for downloading data. Finally, the electronic tax law assistance program on IRS’s Web site has shown marked improvement this year over last. For example, the average response time is down from 3.8 days to 1.6 days and the accuracy rate has improved from 56.9 percent to 87.5 percent. According to IRS officials, this significant improvement is due to a decrease in the number of tax law questions being submitted—down from about 56,000 to 8,700 for the same time period. IRS’s Web site is experiencing extensive usage this filing season based on the number of visits, pages viewed, and forms and publications downloaded.
As of February 28, 2005, the Web site was visited about 83 million times by users who viewed about 628 million pages. This is the first time that IRS has publicly reported the number of visits to and number of pages viewed on its Web site. Further, about 70.3 million forms and publications had been downloaded this fiscal year through February, with about 45 million of those downloads occurring in January and February. IRS’s Web site continues to provide two very important tax service features: (1) “Where’s My Refund,” which enables taxpayers to check on the status of their refund, and (2) Free File, which provides taxpayers the ability to file their tax return electronically for free via IRS’s Web site. As of March 20, 2005, about 16 million taxpayers had accessed the “Where’s My Refund” feature to check the status of their tax refund—about a 15 percent increase over the same time period last year. Also, IRS provided new functionality for “Where’s My Refund” whereby a taxpayer whose refund could not be delivered by the Postal Service (i.e., returned as undeliverable mail) can change their address on the Web site. In addition, as of March 16, 2005, 3.6 million tax returns had been filed via Free File, which represents a 44 percent increase over the same time period last year. In the 2005 filing season, all individual taxpayers are eligible to file free via IRS’s Web site. As of March 12, assistance provided at IRS’s approximately 400 walk-in sites had declined by 11 percent compared to the same time last year, with the number receiving tax preparation assistance declining by about 22 percent. Staff at those sites provide taxpayers with information about their tax accounts and answer a limited scope of tax law questions. If staff cannot answer taxpayers’ questions, they are required to refer taxpayers to IRS’s telephone operations or have taxpayers correspond via IRS’s Web site. In combination with decreased demand, IRS reduced the staff used at walk-in sites for return preparation assistance and continues to encourage taxpayers to use volunteer sites for return preparation. These declines are consistent with IRS’s goal to further limit return preparation and tax law assistance at walk-in sites by 2007 and with its 2006 budget request. As reflected in table 4 and figure 6, in contrast to IRS walk-in sites, the number of taxpayers seeking return preparation assistance at volunteer sites has increased this year and every year since 2001. These sites, staffed by volunteers certified by IRS, do not offer the range of services IRS provides, but instead focus on preparing tax returns primarily for low-income and elderly taxpayers and operate chiefly during the filing season. IRS officials estimated that the number of taxpayers receiving assistance at approximately 14,000 volunteer sites has increased over 23 percent compared to the same time last year. The shift of taxpayers from walk-in to volunteer sites is important because it has transferred time-consuming services, particularly return preparation, from IRS to volunteer sites and allowed IRS to concentrate on services that only it can provide, such as account assistance or compliance work. As a result, IRS has devoted fewer resources to return preparation. While this shift is important to IRS, others have been more cautious. For example, in her January 2005 report, the Taxpayer Advocate expressed concern about the reduction of face-to-face services, such as those offered at walk-in sites.
She stated that IRS’s plan does not adequately provide for the segment of the population that continues to rely on the interaction provided by walk-in sites. At the same time, last year, we and TIGTA called attention to issues related to the quality of service at both IRS walk-in and volunteer sites. IRS has separate quality initiatives under way at both IRS walk-in sites and volunteer sites, although data remain limited and cannot be compared to prior years. As IRS shifts its priorities to enforcement and faces tight budgets for service, the agency will be challenged to maintain the gains it has made in taxpayer service. In order to avoid a “swinging pendulum,” where enforcement gains are achieved at the cost of taxpayer service and vice versa, IRS and the Congress would benefit from a set of agreed-upon long-term goals. Long-term goals would provide a framework for assessing budgetary tradeoffs between taxpayer service and enforcement and for determining whether IRS is making satisfactory progress toward achieving those goals. Similarly, long-term goals could help identify priorities within the taxpayer service and enforcement functions. For example, if the budget for taxpayer service were to be cut and efficiency gains did not offset the cut, long-term goals could help guide decisions about whether to make service cuts across the board or target selected services. To its credit, IRS has been developing a set of long-term goals, so we are not making a recommendation on goals. However, we want to underscore the importance of making the goals public in a timely fashion, as IRS has planned. The Congress would then have an opportunity to review the goals and start using them as a tool for holding IRS accountable for performance. In addition, the Congress would benefit from more information about the short-term impacts of the 2006 budget request on taxpayers. The 2006 budget request cites a need for reducing the hours of telephone service and scaling back walk-in assistance but provides little additional detail. Without more detail about how taxpayers will be affected, it is difficult to assess whether the 2006 proposed budget would allow IRS to achieve its stated intent of both maintaining a high level of taxpayer service and increasing enforcement. BSM and related initiatives such as electronic filing hold the promise of delivering further efficiency gains that could offset the need for larger budget increases to fund taxpayer service and enforcement. Today, taxpayers have seen payoffs from BSM; however, the program is still high risk, and budget reductions have caused substantive program changes. IRS has recognized it is time to revisit its long-term BSM strategy and is currently refocusing the program. As we did with long-term goals above, we want to underscore the importance of timely completion of the revision of the BSM strategy. We recommend that the Commissioner of Internal Revenue supplement the 2006 budget request with more detailed information on how proposed service reductions would impact taxpayers. IRS’s proposed new budget structure, as depicted in figure 7, combines the three major appropriations that the agency has had in the past (Processing, Assistance, and Management; Tax Law Enforcement; and Information Systems) into one appropriation called Tax Administration and Operations. The Business Systems Modernization and Health Insurance Tax Credit Administration appropriations accounts remain unchanged.
The Tax Administration and Operations appropriation is divided among eight critical program areas. These budget activities focus on Assistance, Outreach, Processing, Examination, Collection, Investigations, Regulatory Compliance, and Research. According to IRS, as it continues to move forward with developing and implementing this new structure, these program areas and the associated resource distributions will be refined to provide more accurate costing. IRS reported that the new budget structure has a more direct relationship to its major program areas and strategic plan. We did not evaluate IRS’s proposed budget structure because it was outside the scope of our review. However, we have recently completed a study on the administration’s broader budget restructuring effort. In that study, we state that, going forward, infusing a performance perspective into budget decisions may only be achieved when the underlying information becomes more credible and is used by all major decision makers. Thus, the Congress must be considered a partner. In due course, once the goals and underlying data become more compelling and used by the Congress, budget restructuring may become a better tool to advance budget and performance integration. The table below shows the life-cycle variance in cost and schedule estimates for completed and ongoing Business Systems Modernization (BSM) projects, based on data contained in IRS’s expenditure plans. These variances are based on a comparison of IRS’s initial and revised (as of July 2004) cost and schedule estimates to complete initial operation or full deployment of the projects. Figures 8 and 9 illustrate how the Internal Revenue Service (IRS) allocated expenditures and full-time equivalents (FTE) in fiscal year 2004. Figure 8 shows total expenditures. The percentage of expenditures devoted to contracts decreased from 9 percent in 2002 to 5 percent in 2004 because of fewer private contracts. The percentage of expenditures devoted to other nonlabor costs increased from 8 percent in 2002 to 12 percent in 2004, due to increases in miscellaneous costs. Figure 9 shows IRS’s total FTEs. FTEs decreased slightly from 99,180 in 2002 to 99,055 in 2004. We previously reported that processing FTEs declined 1 percentage point between 2002 and 2003. Between 2003 and 2004, IRS’s allocation of FTEs remained similar, with 1 percentage point increases in conducting examinations and in management and other services.
The Internal Revenue Service (IRS) has been shifting its priorities from taxpayer service to enforcement and its management of Business Systems Modernization (BSM) from contractors to IRS staff. Although there are sound reasons for these adjustments, they also involve risks. With respect to the fiscal year 2006 budget request, GAO assessed (1) how IRS proposes to balance its resources between taxpayer service and enforcement programs and the potential impact on taxpayers, (2) the status of IRS’s efforts to develop and implement the BSM program, and (3) the progress IRS has made in implementing best practices in developing its Information Technology (IT) operations and maintenance budget. For the 2005 filing season, GAO assessed IRS’s performance in processing returns and providing taxpayer service. IRS’s fiscal year 2006 budget request of $10.9 billion proposes increased funding for enforcement, but reduced funding for taxpayer service and BSM. However, the potential impact of these changes on taxpayers in either the short or long term is unclear, because IRS has not provided details of proposed taxpayer service reductions, and although it is developing long-term goals, they are not yet finalized. Because of the proposed reductions and new and improved taxpayer services in recent years, this is an opportune time to examine the menu of services IRS provides. It may be possible to maintain the overall level of service to taxpayers by offsetting reductions in some areas with new and improved service in other areas. Taxpayers and IRS are seeing some payoff from the BSM program, with the deployment of initial phases of several modernized systems in 2004. Nevertheless, the BSM program continues to be high-risk, in part because projects have incurred significant cost increases and schedule delays and the program faces major challenges in areas such as human capital and requirements management. As a result of budget reductions and other factors, IRS has made major adjustments. It is too early to tell what effect these adjustments will have on the program, but they are not without risk and could potentially impact future budgets. Further, the BSM program is based on strategies developed years ago, which, coupled with the delays and changes brought on by budget reductions, indicates that it is time for IRS to revisit its long-term goals, strategy, and plans for BSM. Because of these challenges, IRS is redefining and refocusing the BSM program. IRS has generally maintained or improved its filing season performance in 2005. Processing is more efficient, the accuracy of answers provided by telephone assistors is improved, and telephone access is relatively comparable to last year. This is particularly noteworthy because IRS received less funding for taxpayer service in 2005 than it spent in 2004. Because the filing season is not over, the full impact on taxpayers and IRS operations is not yet known. However, there are indications of slippage in telephone access, such as more abandoned calls and longer wait times.
The national information and communications networks consist of a collection of mostly privately owned networks that are critical to the nation’s security, economy, and public safety. The communications sector operates these networks and comprises public- and private-sector entities that have a role in, among other things, the use, protection, or regulation of the communications networks and associated services (including Internet routing). For example, private companies, such as AT&T and Verizon, function as service providers, offering a variety of services to individual and enterprise end users or customers. These networks consist of core networks, which carry high volumes of traffic, and access networks, which serve the customers that are positioned at the ends of the network, or the “last mile,” as referred to by industry. The core networks transport a high volume of aggregated traffic substantial distances or between different service providers or “carriers.” These networks connect regions within the United States as well as all continents except Antarctica, and they use submarine fiber optic cable systems, land-based fiber and copper networks, and satellites. In order to transmit data, service providers manage and control core infrastructure elements with numerous components, including signaling systems, databases, switches, routers, and operations centers. Multiple service providers, such as AT&T and Verizon, operate distinct core networks traversing the nation that interconnect with each other at several points. End users generally do not connect directly with the core networks. Access networks are primarily local portions of the network that connect end users to the core networks or directly to each other and enable them to use services such as local and long distance phone calling, video conferencing, text messaging, e-mail, and various Internet-based services. These services are provided by various technologies such as satellites, including fixed and portable systems; wireless, including cellular base stations; cable, including video, data, and voice systems and cable system end offices; and wireline, including voice and data systems and end offices. Communications traffic between two locations may originate and terminate within an access network without connecting to core networks (e.g., local phone calling within the wireline network). Communications traffic between different types of access networks (e.g., between the wireline and wireless networks) may use core networks to facilitate the transmission of traffic. Individual and enterprise users connect to access networks through various devices (e.g., wired phones, cell phones, and computers). Figure 1 depicts the interconnection of user devices and services, access networks, and core networks. Figure 2 depicts the path that a single communication can take to its final destination. Aggregate traffic is normally the multimedia (voice, data, video) traffic combined from different service providers, or carriers, and transported at high speed through the core networks. The nation’s communications infrastructure also provides the networks that support the Internet, a vast network of interconnected networks used by governments, businesses, research institutions, and individuals around the world to communicate, engage in commerce, do research, educate, and entertain. In order for data to move freely across communications networks, Internet network operators employ voluntary, self-enforcing rules called protocols.
Two sets of protocols—the Domain Name System (DNS) and the Border Gateway Protocol (BGP)—are essential for ensuring the uniqueness of each e-mail and website address and for facilitating the routing of data packets between autonomous systems, respectively. DNS provides a globally distributed hierarchical database for mapping unique names to network addresses. It links e-mail and website addresses with the underlying numerical addresses that computers use to communicate with each other. It translates names, such as http://www.house.gov, into numerical addresses, such as 208.47.254.18, that computers and other devices use to identify each other on the network, and back again, in a process that is invisible to the end user. This process relies on a hierarchical system of servers, called domain name servers, which store data linking address names with address numbers. These servers are owned and operated by many public and private sector organizations throughout the world. Each of these servers stores a limited set of names and numbers. They are linked by a series of root servers that coordinate the data and allow users’ computers to find the server that identifies the sites they want to reach. Domain name servers are organized into a hierarchy that parallels the organization of the domain names (such as “.gov”, “.com”, and “.org”). Figure 3 below provides an example of how a DNS query is turned into a number. BGP is used by routers located at network nodes to direct traffic across the Internet. Typically, routers that use this protocol maintain a routing table that lists all feasible paths to a particular network. They also determine metrics associated with each path (such as cost, stability, and speed) and follow a set of constraints (e.g., business relationships) to choose the best available path for forwarding data. This protocol is important because it binds together the many autonomous networks that make up the Internet (see fig. 4). Minimal illustrative sketches of DNS resolution and BGP path selection appear at the end of this background discussion. Like those affecting other cyber-reliant critical infrastructure, threats to the communications infrastructure can come from a wide array of sources. These sources include corrupt employees, criminal groups, hackers, and foreign nations engaged in espionage and information warfare. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. Table 1 describes the sources in more detail. These sources may make use of various cyber techniques, or exploits, to adversely affect communications networks, such as denial-of-service attacks, phishing, passive wiretapping, Trojan horses, viruses, worms, and attacks on the information technology supply chains that support the communications networks. Table 2 provides descriptions of these cyber exploits. In addition to cyber-based threats, the nation’s communications networks also face threats from physical sources. Examples of these threats include natural events (e.g., hurricanes or flooding) and man-made disasters (e.g., terrorist attacks), as well as unintentional man-made outages (e.g., a backhoe cutting a communication line). While the private sector owns and operates the nation’s communications networks and is primarily responsible for protecting these assets, federal law and policy establish regulatory and support roles for the federal government in regard to the communications networks.
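To close out this background discussion, the DNS name-to-number translation described above can be made concrete with a minimal sketch. The Python fragment below is our illustration, not material drawn from the report: it asks the platform’s resolver, which walks the domain name server hierarchy, to map a name to its numerical addresses. The host name is the example used above, and the addresses returned will vary by resolver and over time.

    # Minimal DNS illustration: map a name to the numerical addresses that
    # devices use to identify each other, via the local resolver.
    import socket

    def resolve(name):
        # getaddrinfo returns (family, type, proto, canonname, sockaddr)
        # tuples; sockaddr[0] holds a numerical address for the name.
        return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

    print(resolve("www.house.gov"))   # addresses will vary over time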
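The best-path selection performed by BGP routers can be sketched in similarly simplified form. The toy example below is also our illustration: the routes are invented (documentation address ranges and private autonomous system numbers), and real BGP applies a longer tie-breaking sequence than the two criteria shown here.

    # Toy BGP-style best-path selection: prefer the highest local preference
    # (a policy or business constraint), then the shortest AS path.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Route:
        next_hop: str
        as_path: Tuple[int, ...]   # autonomous systems the route traverses
        local_pref: int            # higher is preferred

    def best_path(routes):
        return min(routes, key=lambda r: (-r.local_pref, len(r.as_path)))

    candidates = [
        Route("192.0.2.1", (64496, 64500), 100),
        Route("198.51.100.7", (64511,), 100),
        Route("203.0.113.9", (64501, 64502, 64503), 200),
    ]
    print(best_path(candidates).next_hop)   # 203.0.113.9; policy wins first

In this sketch, policy (local preference) dominates path length, mirroring the set of constraints, such as business relationships, described above.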
Reflecting this division of responsibility, federal law and policy call for critical infrastructure protection activities that are intended to enhance the cyber and physical security of both the public and private infrastructures that are essential to national security, national economic security, and public health and safety. The federal role is generally limited to sharing information, providing assistance when asked by private-sector entities, and exercising regulatory authority when applicable. As part of their efforts in support of the security of communications networks, FCC, DHS, DOD, and Commerce have taken a variety of actions, including ones related to developing cyber policy and standards, securing Internet infrastructure, sharing information, supporting national security and emergency preparedness (NS/EP), and promoting sector protection efforts. FCC is a U.S. government agency that regulates interstate and international communications by radio, television, wire, satellite, and cable throughout the United States. Its regulations include requirements for certain communications providers to report on the reliability and security of communications infrastructures. These include disruption-reporting requirements for outages, which are defined as a significant degradation in the ability of an end user to establish and maintain a channel of communications as a result of failure or degradation in the performance of a communications provider’s network. The Commission’s Public Safety and Homeland Security Bureau has primary responsibility for assisting providers in ensuring the security and availability of the communications networks. The bureau also serves as a clearinghouse for public safety communications information and emergency response issues. In addition, its officials serve as Designated Federal Officers on the Communications Security, Reliability, and Interoperability Council. The Communications Security, Reliability, and Interoperability Council is a federal advisory committee whose mission is to provide recommendations to FCC to help ensure, among other things, secure and reliable communications systems, including telecommunications, media, and public safety systems. The council has provided recommendations in the form of voluntary best practices that provide companies with guidance aimed at improving the overall reliability, interoperability, and security of networks. The council is composed of 11 working groups that consist of experts from industry and other federal agencies. The working groups focus on various related topics, including those related to network security management, as well as the security of the Border Gateway Protocol and the Domain Name System. The working groups develop recommendations through industry cooperation and voluntary agreements. For example, in March 2012, the commission announced voluntary commitments by the nation’s largest Internet service providers, including AT&T and Verizon, to adopt the council’s recommendations aimed at better securing their communications networks. The recommendations covered a variety of security practices, including those related to the security of the Domain Name System and BGP. The key FCC and council efforts related to the security of the communications sector are detailed in table 3 below. DHS is the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect cyber-critical infrastructures. DHS’s role in critical infrastructure protection is established by law and policy.
The Homeland Security Act of 2002, Homeland Security Presidential Directive 7, and the National Infrastructure Protection Plan establish a cyber protection approach for the nation's critical infrastructure sectors—including communications—that focuses on the development of public-private partnerships and establishment of a risk management framework. These policies establish critical infrastructure sectors, including the communications sector; assign agencies to each sector (sector-specific agencies), including DHS as the sector lead for the communications and information technology sectors; and encourage private sector involvement through the development of sector coordinating councils, such as the Communications Sector Coordinating Council, and information-sharing mechanisms, such as the Communications Information Sharing and Analysis Center. Additionally, DHS has a role, along with agencies such as DOD, in regard to national security and emergency preparedness (NS/EP) communications that are intended to increase the likelihood that essential government and private-sector individuals can complete critical phone calls and organizations can quickly restore service during periods of disruption and congestion resulting from natural or man-made disasters. In particular, Executive Order No. 13618 established an NS/EP Communications Executive Committee to serve as an interagency forum to address such communications matters for the nation. Among other things, the committee is to advise and make policy recommendations to the President on enhancing the survivability, resilience, and future architecture for NS/EP communications. The Executive Committee is composed of Assistant Secretary-level or equivalent representatives designated by the heads of the Departments of State, Defense, Justice, Commerce, and Homeland Security, the Office of the Director of National Intelligence, the General Services Administration, and the Federal Communications Commission, as well as such additional agencies as the Executive Committee may designate. The committee is chaired by the DHS Assistant Secretary for the Office of Cybersecurity and Communications and the DOD Chief Information Officer, with administrative support for the committee provided by DHS. To fulfill DHS's cyber-critical infrastructure protection and NS/EP-related missions, the Office of Cybersecurity and Communications within the National Protection and Programs Directorate is responsible for, among other things, ensuring the security, resiliency, and reliability of the nation's cyber and communications infrastructure, implementing a cyber-risk management program for protection of critical infrastructure, and planning for and providing national security and emergency preparedness communications to the federal government. The office is made up of five subcomponents that have various responsibilities related to DHS's overarching cybersecurity mission. The Stakeholder Engagement and Cyber Infrastructure Resilience division, among other things, is responsible for managing the agency's role as the sector-specific agency for the communications sector. The Office of Emergency Communications is responsible for leading NS/EP and emergency communications in coordination and cooperation with other DHS organizations.
The National Cybersecurity and Communications Integration Center is the national 24-hours-a-day, 7-days-a-week operations center that is to provide situational awareness, multiagency incident response, and strategic analysis for issues related to cybersecurity and NS/EP communications. The center comprises numerous co-located, integrated elements, including the National Coordinating Center for Telecommunications, the U.S. Computer Emergency Readiness Team (US-CERT), and the Industrial Control Systems Cyber Emergency Response Team. The Federal Network Resilience division is responsible for collaborating with departments and agencies across the federal government to strengthen the operational security of the ".gov" networks. As part of those efforts, the division leads the DHS initiative related to DNSSEC. The Network Security Deployment division is responsible for designing, developing, acquiring, deploying, sustaining, and providing customer support for the National Cybersecurity Protection System. Four of these subcomponents have taken specific actions with respect to the communications networks, which are detailed in table 4 below. Under the National Infrastructure Protection Plan, DHS's Office of Cybersecurity and Communications, as the sector-specific agency for the communications and information technology sectors, is responsible for leading federal efforts to support sector protection efforts. As part of the risk management process for protecting the nation's critical infrastructure, including the protection of the cyber information infrastructure, the National Infrastructure Protection Plan recommends that outcome-oriented metrics be established that are specific and clear as to what they are measuring, practical or feasible in that needed data are available, built on objectively measurable data, and aligned to sector priorities. These metrics are to be used to determine the health and effectiveness of sector efforts and to help drive future investment and resource decisions. DHS and its partners have previously identified the development of outcome-oriented metrics as part of the process to be used to manage risks to the nation's critical communications infrastructure. For example, in 2010, DHS and its communications sector partners identified preserving the overall health of the core network as the sector's first priority at the national level. They also defined a process for developing outcome-oriented sector metrics that would map to their identified goals and would yield quantifiable information (when available). Additionally, DHS and its information technology sector partners stated that they would measure their cyber protection efforts related to DNS and BGP in terms of activities identified in 2009 to assist sector partners in mitigating risks to key sector services, such as providing DNS functionality and Internet routing services. In 2010, they noted that implementation plans would be developed for each of the activities and outcome-based metrics would be used to monitor the status and effectiveness of the activities. However, DHS and its partners have not yet developed outcome-based metrics related to the cyber-protection activities for the core and access networks, DNS functionality, and Internet routing services. For the communications sector, DHS officials stated that the sector had recently completed the first part of a multiphased risk assessment process that included identification of cyber risks.
The officials further stated that efforts are under way to prioritize the identified risks and potentially develop actions to mitigate them. However, DHS officials stated that outcome-oriented metrics had not yet been established and acknowledged that time frames for developing such metrics had not been agreed to with their private sector partners. For the information technology sector, DHS officials noted that the information technology sector's private sector partners had decided to focus on progress-related metrics (which report the status of mitigation development activities as well as implementation decisions and progress) to measure the effectiveness of sector activities to reduce risk across the entire sector and to periodically re-examine their initial risk evaluation based on perceived threats facing the sector. While these progress-related metrics are part of the information technology sector's planned measurement activities, the sector's plans acknowledge that outcome-based metrics are preferable for demonstrating the effectiveness of efforts. Until metrics related to efforts to protect core and access networks, DNS, and BGP are fully developed, implemented, and tracked by DHS, federal decision makers will have less insight into the effectiveness of sector protection efforts. Within DOD, the Office of the Chief Information Officer (CIO) has been assigned the responsibility for implementing Executive Order 13618 requirements related to NS/EP communications functions. As previously described, the CIO (along with the Assistant Secretary for Cybersecurity and Communications in DHS) co-chairs the NS/EP Communications Executive Committee established in Executive Order 13618. The CIO directs, manages, and provides policy guidance and oversight for DOD's information enterprise, including matters related to information technology, network defense, network operations, and cybersecurity. Table 5 describes the department's efforts in relation to this executive order. Federal law and policy also establish a role for the Department of Commerce (Commerce) related to the protection of the nation's communications networks. For example, Commerce conducts industry studies assessing the capabilities of the nation's industrial base to support the national defense. In addition, the department's National Telecommunications and Information Administration (NTIA) was established as the principal presidential adviser on telecommunications and information policies. Further, Commerce's National Institute of Standards and Technology (NIST) is to, among other things, cooperate with other federal agencies, industry, and other private organizations in establishing standard practices, codes, specifications, and voluntary consensus standards. Commerce also has a role in ensuring the security and stability of DNS. Prompted by concerns regarding who has authority over DNS, along with the stability of the Internet as more commercial interests began to rely on it, the Clinton administration issued an electronic commerce report in July 1997 that identified the department as the lead agency to support private efforts to address Internet governance. In June 1998, NTIA issued a policy statement (known as the White Paper) that stated it would enter into an agreement with a not-for-profit corporation formed by private sector Internet stakeholders for the technical coordination of DNS.
In addition, Commerce created the Internet Policy Task Force in August 2011 to, among other things, develop and maintain department-wide policy proposals on a range of global issues that affect the Internet, including cybersecurity. While NIST has been identified as the Commerce lead bureau for cybersecurity, the task force is to leverage the expertise of other Commerce bureaus, such as the Bureau of Industry and Security and NTIA. Commerce components also carry out functions related to the security of the nation's communications networks. The Bureau of Industry and Security conducted an industrial study to examine the operational and security practices employed by network operators in the nation's communications infrastructure. In addition, NTIA manages agreements with the Internet Corporation for Assigned Names and Numbers (ICANN) and VeriSign, Inc., through which changes are made to the authoritative root zone file. Also, NIST participates in open, voluntary, industry-led, consensus-based standards-setting bodies that design and develop specifications for network security technologies, including those used in the nation's communications networks (such as DNS and BGP), as well as in industry technical forums for the purpose of promulgating the deployment of such new technologies. Table 6 describes some of the key efforts of Commerce as they relate to the cybersecurity of the nation's communications networks. No cyber incidents affecting the core and access networks have been reported by communications network owners and operators through three established reporting mechanisms from January 2010 to October 2012. To report incidents involving the core and access communications networks to the federal government, communications network operators can use reporting mechanisms established by FCC and DHS to share information on outages and incidents. FCC's Network Outage Reporting System is a web-based filing system that communications providers use to submit detailed outage reports to FCC. In turn, FCC officials stated that the agency uses the reported outage data to develop situational awareness of commercial network performance as well as to aid the commission in influencing and developing best practices regarding incidents. DHS's Network Security Information Exchange is an information-sharing forum composed of representatives from the communications and information technology sectors who meet bimonthly to voluntarily share communications-related incidents, among other things. DHS's National Cybersecurity and Communications Integration Center, which includes the National Coordinating Center, US-CERT, and the Industrial Control Systems Cyber Emergency Response Team, is used to share information about threats, vulnerabilities, and intrusions related to communications networks and the sector as a whole. Communications and information technology providers can voluntarily report threats, vulnerabilities, and intrusions to the center. Although these mechanisms for reporting exist, available information showed that no cyber-based incidents involving the core and access communications networks had been reported to the federal government using these mechanisms from January 2010 to October 2012. Specifically, of the over 35,000 outages reported to FCC during this time period, none were related to traditional cyber threats (e.g., botnets, spyware, viruses, and worms).
FCC officials stated that more cyber-related outages could be reported in the future as the Voice-over-Internet-Protocol reporting requirements are enforced. Further, DHS Office of Cybersecurity and Communications officials stated that no cyber incidents related to the core and access networks were reported to them from January 2010 to October 2012. For example, although several incidents attributed to the communications sector were reported to DHS's Industrial Control Systems Cyber Emergency Response Team in fiscal year 2012, none of these incidents involved core and access networks. Our review of reports published by information security firms and communications network companies also indicated that no cyber incidents related to the core and access networks were publicly reported from January 2010 to October 2012. Officials within FCC and the private sector attributed the lack of incidents to the fact that the communications networks provide the medium for direct attacks on consumer, business, and government systems—and thus these networks are less likely to be targeted by a cyber attack themselves. In addition, Communications Information Sharing and Analysis Center officials expressed greater concern about physical threats (such as natural and man-made disasters, as well as unintentional man-made outages) to communications infrastructure than cyber threats. DOD, in its role as the sector-specific agency for the defense industrial base critical infrastructure sector, established two pilot programs to enhance the cybersecurity of sector companies and better protect unclassified department data residing on those company networks. The Deputy Secretary of Defense established the Cyber Security/Information Assurance program under the department's Office of the Chief Information Officer to address the risk posed by cyber attacks against sector companies. The Opt-In Pilot was designed to build upon the Cyber Security/Information Assurance Program and, according to department officials, established a voluntary information-sharing process for the department to provide classified network security indicators to Internet service providers. In August 2012, we reported on these pilot programs as part of our study to identify DOD and private sector efforts to protect the defense industrial base from cybersecurity threats. Our report described these programs in detail, including challenges to their success. For example, one challenge noted by defense industrial base company officials was that the quality of the threat indicators provided by the federal government as part of the Opt-In pilot had not met their needs. In addition, the quality of the pilot was affected by the lack of a mechanism for information sharing among government and private stakeholders. The report also made recommendations to DOD and DHS to better protect the defense industrial base from cyber threats. (The August 2012 report was designated as official use only and is not publicly available.) Using information in that report, we identified six attributes that were implemented to varying extents as part of the pilot programs (see table 7). These attributes were utilized by DOD and the defense industrial base companies to protect their sector from cyber threats and could inform the cyber protection efforts of the communications sector. Agreements: Eligible defense industrial base companies that want to participate in these pilots enter into an agreement with the federal government.
This agreement establishes the bilateral cyber-information-sharing process and emphasizes the sensitive, nonpublic nature of the information shared, which must be protected from unauthorized use. The agreement does not obligate the participating company to change its information system environment or otherwise alter its normal conduct of cyber activities. Government sharing of unclassified and classified cyber threat information: DOD provides participating defense industrial base companies with both unclassified and classified threat information, and in return, the companies acknowledge receipt of threat information products. For any intrusions reported to DOD by the participating companies under the program, the department can develop damage assessment products, such as incident-specific and trend reports, and provide them to participating companies and DOD leadership. Feedback mechanism on government services: When a participating company receives cyber threat information from DOD, it has the option of providing feedback to the department on, among other things, the quality of the products. Government cyber analysis, mitigation, and digital forensic support: A participating company can also optionally report intrusion events. When this occurs, DOD can conduct forensic cyber analysis and provide mitigation and digital forensic support. The department can also provide on-site support to the company that reported the intrusion. Government reporting of voluntarily reported incidents: In addition to providing cyber analysis, mitigation, and cyber forensic support, DOD can report the information to other federal stakeholders, law enforcement agencies, counterintelligence agencies, and the DOD program office that might have been affected. Internet service providers deploying countermeasures based on classified threat indicators for organizations: Each Cyber Security/Information Assurance program participating company can voluntarily allow its Internet service providers to deploy countermeasures on its behalf, provided the provider has been approved to receive classified network security indicators from the U.S. government. For those providers, US-CERT collects classified threat indicators from multiple sources and provides them to the companies' participating Internet service providers. If the Internet service provider identifies a cyber intrusion, it will alert the company that was the target of the intrusion. Providers can also voluntarily notify US-CERT about the incident, and US-CERT will share the information with DOD. In May 2012, DOD issued an interim final rule to expand the Cyber Security/Information Assurance program to all eligible defense industrial base sector companies. Additionally, the Defense Industrial Base Opt-In Pilot became the Defense Industrial Base Enhanced Cybersecurity Service (DECS) Program and is now jointly managed by DHS and DOD. In addition, on February 12, 2013, the President signed Executive Order 13636, which requires the Secretary of Homeland Security to establish procedures to expand DECS (referred to as the Enhanced Cybersecurity Services program) to all critical infrastructure sectors, including the communications sector. Considering these attributes and challenges could inform DHS's efforts as it develops these new procedures. Securing the nation's networks is essential to ensuring reliable and effective communications within the United States.
Within the roles prescribed for them by federal law and policy, the Federal Communications Commission and the Departments of Homeland Security, Defense, and Commerce have taken actions to support the communications and information technology sectors' efforts to secure the nation's communications networks from cyber attacks. However, until DHS and its sector partners develop appropriate outcome-oriented metrics, it will be difficult to gauge the effectiveness of efforts to protect the nation's core and access communications networks and critical support components of the Internet from cyber incidents. While no cyber incidents have been reported affecting the nation's core and access networks, communications network operators can use reporting mechanisms established by FCC and DHS to share information on outages and incidents. The pilot programs undertaken by DOD with its defense industrial base partners exhibit several attributes that could apply to the communications sector and help private sector entities more effectively secure the communications infrastructure they own and operate. As DHS develops procedures for expanding this program, considering these attributes could inform DHS's efforts. To help assess efforts to secure communications networks and inform future investment and resource decisions, we recommend that the Secretary of Homeland Security direct the appropriate officials within DHS to collaborate with its public and private sector partners to develop, implement, and track sector outcome-oriented performance measures for cyber protection activities related to the nation's communications networks. We provided a draft of this report to the Departments of Commerce (including the Bureau of Industry and Security, NIST, and NTIA), Defense, and Homeland Security and FCC for their review and comment. DHS provided written comments on our report (see app. II), signed by DHS's Director of Departmental GAO-OIG Liaison Office. In its comments, DHS concurred with our recommendation and stated that it is working with industry to develop plans for mitigating risks that will determine the path forward in developing outcome-oriented performance measures for cyber protection activities related to the nation's core and access communications networks. Although the department did not specify an estimated completion date for developing and implementing these measures, we believe the prompt implementation of our recommendation will assist DHS in assessing efforts to secure communications networks and inform future investment and resource decisions. We also received technical comments via e-mail from officials responsible for cybersecurity efforts related to communications networks at Defense, DHS, FCC, and Commerce's Bureau of Industry and Security and NTIA. We incorporated these comments where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to interested congressional committees; the Secretaries of the Departments of Commerce, Defense, and Homeland Security; the Chairman of the Federal Communications Commission; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) identify the roles of and actions taken by key federal entities to help protect the communications networks from cyber-based threats, (2) assess what is known about the extent to which cyber incidents affecting the communications networks have been reported to the Federal Communications Commission (FCC) and Department of Homeland Security (DHS), and (3) determine if the Department of Defense's (DOD) pilot programs to promote cybersecurity in the defense industrial base can be used in the communications sector. Our audit focused on the core and access networks of the nation's communications networks. These networks include wireline, wireless, cable, and satellite networks. We did not address broadcast access networks because they are responsible for a smaller volume of traffic than other networks. Additionally, we focused on the Internet support components that are critical for delivering services: the Border Gateway Protocol (BGP) and Domain Name System (DNS). To identify the roles of federal entities, we collected, reviewed, and analyzed relevant federal law, policy, regulation, and critical infrastructure protection-related strategies. Sources consulted include statutes such as the Communications Act of 1934, Homeland Security Act of 2002, and the Defense Production Act of 1950, as well as other public laws; the Code of Federal Regulations; National Communication System Directive 3-10; the National Infrastructure Protection Plan; the Communications Sector-Specific Plan; the Information Technology Sector-Specific Plan; the Communications Sector Risk Assessment; the Information Technology Sector Risk Assessment; Homeland Security Presidential Directives; selected executive orders; and related GAO products. Using these materials, we selected the Departments of Commerce, Defense, and Homeland Security, and FCC to review their respective roles and actions related to the security of the privately owned communications network because they were identified as having the most significant roles and organizations for addressing communications cybersecurity. To identify the actions taken by federal entities, we collected, reviewed, and analyzed relevant policies, plans, reports, and related performance metrics and interviewed officials at each of the four agencies. For example, we reviewed and analyzed Department of Commerce agreements detailing the process for how changes are to be made to the authoritative root zone file and Internet Policy Task Force reports on cybersecurity innovation and the Internet. In addition, we analyzed and identified current and planned actions outlined in DOD's National Security/Emergency Preparedness Executive Committee Charter. Also, we analyzed reports issued by the Communications Security, Reliability, and Interoperability Council on a variety of issues, including the security of the Domain Name System and the Border Gateway Protocol. Further, we reviewed and analyzed the risk assessments and sector-specific plans for both the communications and information technology critical infrastructure sectors, as well as DHS's plans for realignment in response to Executive Order 13618.
In addition, we interviewed agency officials regarding the authority, roles, and policies of their department or agency, the actions taken to encourage or enhance the protection of communications networks, BGP, and DNS, and the fulfillment of related roles. For Commerce, we interviewed officials from the Bureau of Industry and Security, National Telecommunications and Information Administration, and the National Institute of Standards and Technology. For DOD, we interviewed officials from the Office of the Chief Information Officer, including those from the National Leadership Command Capability Management Office and the Trusted Mission Systems and Networks Office. We also interviewed officials from the Office of the Under Secretary of Defense for Policy. For DHS, we interviewed officials from the National Protection and Programs Directorate's Office of Cybersecurity and Communications. For FCC, we interviewed officials from the International, Media, Public Safety and Homeland Security, Wireless Telecommunications, and Wireline Competition Bureaus. Based on our analysis and the information gathered through interviews, we created a list of actions taken by each agency. Additionally, we reviewed documents (including the communications sector risk assessment) from, and conducted interviews with officials of, the Communications Information Sharing and Analysis Center to assess federal efforts to fulfill roles and responsibilities. To assess what is known about the extent to which cyber incidents affecting the communications networks have been reported to FCC and DHS, we analyzed FCC policy and guidance related to its Network Outage Reporting System. Additionally, we conducted an analysis of outage reports submitted from January 2010 to October 2012 to determine the extent to which they were related to cybersecurity threats, such as botnets, spyware, viruses, and worms affecting the core and access networks. To assess the reliability of FCC outage reports, we (1) discussed data quality control procedures with agency officials, (2) reviewed relevant documentation, (3) performed testing for obvious problems with completeness or accuracy, and (4) reviewed related internal controls. We determined that the data were sufficiently reliable for the purposes of this report. We also interviewed officials from FCC's Public Safety and Homeland Security Bureau to understand incident reporting practices of its regulated entities, and how reported incident data were used by FCC to encourage improvement or initiate enforcement actions. Further, we interviewed officials from DHS's United States Computer Emergency Readiness Team regarding the extent to which incidents affecting core and access communications networks were reported to it. We also conducted an analysis of information security reports from nonfederal entities to determine if cyber incidents on the core and access communications networks had been reported to nonfederal entities. Additionally, we interviewed Communications Information Sharing and Analysis Center officials to identify the mechanisms and processes used to report cyber-related incidents in the communications sector to the center and then to the federal government. To determine if DOD's pilot programs can be used to inform the communications sector, we reviewed our August 2012 report on DOD efforts to enhance the cybersecurity of the defense industrial base critical infrastructure sector.
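The outage-report screening described above, which checked reports for traditional cyber threats such as botnets, spyware, viruses, and worms, can be approximated by a simple keyword filter. The sketch below is a hypothetical reconstruction under our own assumptions: the record layout, field names, and sample records are invented and do not reflect the actual format of FCC's Network Outage Reporting System data or GAO's actual analysis.

```python
# Hypothetical keyword screen over outage reports; records and field names
# below are invented for illustration, not FCC data.
CYBER_TERMS = ("botnet", "spyware", "virus", "worm", "malware", "denial of service")

outage_reports = [
    {"id": 1, "cause": "fiber cut by backhoe during construction"},
    {"id": 2, "cause": "switch failure following power outage"},
    {"id": 3, "cause": "software fault in routing equipment"},
]

def is_cyber_related(report: dict) -> bool:
    """Flag a report whose stated cause mentions a traditional cyber threat."""
    cause = report["cause"].lower()
    return any(term in cause for term in CYBER_TERMS)

cyber_related = [r for r in outage_reports if is_cyber_related(r)]
print(f"{len(cyber_related)} of {len(outage_reports)} reports mention cyber threats")
```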
Using that August 2012 report, we identified and summarized attributes of the programs that could be publicly reported and that were potentially applicable to the communications sector. DOD determined at that time that the information used to compile the attributes from the August 2012 report was not considered official use only. We also interviewed officials from DHS's Office of Cybersecurity and Communications to ascertain the current status of the pilot programs and efforts to determine the applicability of the pilots to all critical infrastructures, including the communications sector. We conducted this performance audit from April 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO staff who made significant contributions to this report include Michael W. Gilmore, Assistant Director; Thomas E. Baril, Jr.; Bradley W. Becker; Cortland Bradford; Penney Harwell Caramia; Kush K. Malhotra; Lee A. McCracken; David Plocher; and Adam Vodraska.
Ensuring the effectiveness and reliability of communications networks is essential to national security, the economy, and public health and safety. The communications networks (including core and access networks) can be threatened by both natural and human-caused events, including increasingly sophisticated and prevalent cyber-based threats. GAO has identified the protection of systems supporting the nation's critical infrastructure—which includes the communications sector—as a government-wide high-risk area. GAO was asked to (1) identify the roles of and actions taken by key federal entities to help protect communications networks from cyber-based threats, (2) assess what is known about the extent to which cyber incidents affecting the communications networks have been reported to the FCC and DHS, and (3) determine if Defense's pilot programs to promote cybersecurity in the defense industrial base can be used in the communications sector. To do this, GAO focused on core and access networks that support communication services, as well as critical components supporting the Internet. GAO analyzed federal agency policies, plans, and other documents; interviewed officials; and reviewed relevant reports. While the primary responsibility for protecting the nation's communications networks belongs to private-sector owners and operators, federal agencies also play a role in support of their security, as well as that of critical components supporting the Internet. Specifically, private-sector entities are responsible for the operational security of the networks they own, but the Federal Communications Commission (FCC) and the Departments of Homeland Security (DHS), Defense, and Commerce have regulatory and support roles, as established in federal law and policy, and have taken a variety of related actions. For example, FCC has developed and maintained a system for reporting network outage information; DHS has multiple components focused on assessing risk and sharing threat information; Defense and DHS serve as co-chairs for a committee on national security and emergency preparedness for telecommunications functions; and Commerce has studied cyber risks facing the communications infrastructure and participates in standards development. However, DHS and its partners have not yet initiated the process for developing outcome-based performance measures related to the cyber protection of key parts of the communications infrastructure. Outcome-based metrics related to communications networks and critical components supporting the Internet would provide federal decision makers with additional insight into the effectiveness of sector protection efforts. No cyber-related incidents affecting core and access networks have been recently reported to FCC and DHS through established mechanisms. Specifically, both FCC and DHS have established reporting mechanisms to share information on outages and incidents, but of the outages reported to FCC between January 2010 and October 2012, none were related to common cyber threats. Officials within FCC and the private sector stated that communication networks are less likely to be targeted themselves because they provide the access and the means by which attacks on consumer, business, and government systems can be facilitated. Attributes of two pilot programs established by Defense to enhance the cybersecurity of firms in the defense industrial base (the industry associated with the production of defense capabilities) could be applied to the communications sector.
The department's pilot programs involve partnering with firms to share information about cyber threats and to respond to reported cyber incidents. Considering these attributes can inform DHS as it develops procedures for expanding these pilot programs to all critical infrastructure sectors, including the communications sector. GAO recommends that DHS collaborate with its partners to develop outcome-oriented measures for the communications sector. DHS concurred with GAO's recommendation.
Interior, created by the Congress in 1849, oversees and manages the nation's publicly owned natural resources, including parks, wildlife habitats, and minerals, including crude oil and natural gas resources, on over 260 million surface acres and 700 million subsurface acres onshore and in the waters of the Outer Continental Shelf. In this capacity, Interior is authorized to lease federal oil and gas resources and to collect the royalties associated with their production. Interior's Bureau of Land Management (BLM) is responsible for leasing federal oil and natural gas resources on land, whereas offshore—including the U.S. Gulf of Mexico—the Minerals Management Service (MMS) has the leasing authority. To lease U.S. Gulf of Mexico waters for oil and gas exploration, companies generally must first pay the federal government a sum of money that is determined through a competitive auction and evaluated by Interior against departmental economic and geologic models. This money is called a bonus bid. Companies are required to submit one-fifth of any bid for a lease tract up front at the time of bid and to pay the remaining four-fifths balance of their bonus payment and their first-year rental payment after acquiring a lease. After the lease is awarded and production begins, the companies must also pay royalties to MMS based on a percentage of the cash value of the oil and gas produced and sold or, "in kind," as a percentage of the actual oil or gas produced. Royalty rates for onshore leases are generally 12.5 percent. Royalty rates for leases in the U.S. Gulf of Mexico, prior to 2007, ranged from 12.5 percent for water depths of 400 meters or deeper (referred to as deep water) to 16-2/3 percent for water depths less than 400 meters (referred to as shallow water). In 2007, the Secretary of the Interior twice increased the royalty rate for future U.S. Gulf of Mexico leases—in January, the rate for deep water leases was raised to 16-2/3 percent, and in October, the rate for all future leases, including those issued in 2008, was raised to 18-3/4 percent. A considerable body of legislation has been enacted pertaining to the management of resources on federal and Indian trust lands and within federal waters. This legislation includes the Mining Law of 1872, the Mineral Lands Leasing Act of 1920, the 1947 Mineral Leasing Act for Acquired Lands, the Outer Continental Shelf Lands Act of 1953, the Federal Land Policy and Management Act of 1976, the Outer Continental Shelf Lands Act Amendments of 1978, and the Federal Oil and Gas Royalty Management Act of 1982, as well as the Federal Onshore Oil and Gas Leasing Reform Act of 1987, the Outer Continental Shelf Deep Water Royalty Relief Act of 1995, and the Energy Policy Act of 2005. The Outer Continental Shelf Lands Act, as amended (OCSLA), is, among other things, intended to ensure the public "a fair and equitable return" on the resources of the shelf. The law directs the Secretary of the Interior to conduct leasing activities to assure receipt of fair market value for the lands leased and the rights conveyed by the federal government. In addition, the Federal Land Policy and Management Act indicated that, unless otherwise provided by statute, it is the policy of the United States to receive "fair market value" for the use of the public lands and their resources. In 1982, Interior's MMS convened a task force to review its fair market value procedures.
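To illustrate the payment mechanics described above, the following sketch computes the amounts due at each stage of a hypothetical Gulf of Mexico lease. The bid, rental, and production figures are invented; the one-fifth up-front split and the 18-3/4 percent post-October 2007 royalty rate come from the text above.

```python
# Hypothetical lease figures for illustration; the 1/5 up-front split and the
# 18-3/4 percent royalty rate are from the report, the dollar amounts are not.
bonus_bid = 10_000_000.0       # winning bonus bid (hypothetical)
first_year_rental = 150_000.0  # first-year rental payment (hypothetical)

upfront_payment = bonus_bid / 5            # one-fifth due at the time of bid
balance_due = bonus_bid - upfront_payment  # remaining four-fifths after award

# Once production begins, royalties are a percentage of the cash value of the
# oil and gas produced and sold (post-October 2007 Gulf rate: 18-3/4 percent).
royalty_rate = 0.1875
production_value = 40_000_000.0  # hypothetical first-year sales value
royalty_due = royalty_rate * production_value

print(f"due at bid: ${upfront_payment:,.0f}")
print(f"due after award: ${balance_due + first_year_rental:,.0f}")
print(f"royalty on production: ${royalty_due:,.0f}")
```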
Upon completion of the task force’s work, the Secretary of Interior informed the Congress by letter in March 1983 that the Department had completed its analysis and validation of the process by which it will assure a fair return to the American people. The Secretary indicated in that letter that the process in place will assure the American people a full and fair return as it pertains to bonuses, rentals, royalties, and taxes. The 1983 Interior task force report also provided some clarity regarding a fair return, or fair market value. The report indicated that the market value of a lease is not the market value of the oil and gas eventually discovered or produced. Instead, it is the value of the right to explore and, if there is a discovery, develop and produce the energy resource. Currently Interior has the legal authority to change most aspects of the oil and gas fiscal system. Specifically, Interior is allowed by statute to change bid terms for offshore leases including the royalty rate, the bonus bid structure, rental terms, and even the minimum 12.5 percent royalty rate, so long as there is only one variable or “flexible” term—such as a royalty rate that adjusts upwards or downwards with oil and gas prices—in the resulting system, and so long as Congress does not pass a resolution of disapproval within 30 days of Interior’s changes to the system. With regard to onshore leases, Interior is generally allowed by statute to change bid terms including the royalty rate, the bonus bid structure, rental terms, and the minimum royalty rate so long as the bid structure meets certain bid terms, but with certain additional limits on flexibility than the offshore leases. Over the past 25 years, Interior has implemented several programs that adjusted royalty rates or other system components. Such programs included the net profit share leases, which were for offshore leases that based royalties on a percentage of net profits derived from production; sliding scale royalty rates, which was an onshore royalty rate system based on changing production levels; and royalty rate reduction for stripper wells and lower-grade, more viscous crude oil, where onshore oil wells producing less than 15 barrels of oil per day were eligible for royalty rate reductions. Interior officials told us they also “experimented” with a variety of flexible royalty rate and profit sharing systems in the early 1980s, but found them difficult to administer and validate the amount of payments due to MMS, which in Interior’s estimation more than offset any enhanced flexibility associated with a variable royalty rate. Multiple studies completed as early as 1994 and as recently as June 2007 all indicate that the U.S. government take in the Gulf of Mexico is lower than most other oil and gas fiscal systems. Four recent studies by private consultants or resource owners indicate that the U.S. government take in the Gulf of Mexico is relatively low. For example, data we evaluated from a June 2007 report by Wood Mackenzie reported that the government take in the deep water U.S. Gulf of Mexico ranked as the 93rd lowest out of 104 oil and gas fiscal systems evaluated in the study. Other U.S. oil and gas regions are also listed in the Wood Mackenzie study and some but not all other studies. However, these regions are not uniquely under federal jurisdiction, so a direct comparison of the government take in these other regions cannot be used to isolate the federal oil and gas fiscal system. 
The results of the four studies are summarized below in table 1. As we reported in May 2007, the results of five other studies completed between 1994 and 2006 had similar findings. The information reported by Wood Mackenzie and other such expert studies is used by resource owners and oil and gas companies alike to aid in making investment or policy decisions, and these studies represent the best data available on government take. However, we recognize there are limitations with the government take studies, and the relative ranking of government take alone is not sufficient to determine whether the federal government is receiving its fair share of oil and gas revenues. A number of other factors that are not part of the government take determine company decisions about where and how much to invest and how much to pay for access to oil and gas resources. These factors include the relative size of oil and gas resource bases in different regions and the relative costs of developing these resources. Thus, government take is a major, but not the sole, factor in determining the attractiveness of a fiscal system for oil and gas development. When other factors are taken into consideration, the U.S. Gulf of Mexico is an attractive target for investment because it has large remaining oil and gas reserves and the United States is generally a good place to do business compared with many other countries with comparable oil and gas resources. For example, once reserves that are entirely owned by governments are removed from the analysis, of the 104 remaining fiscal regimes ranked by Wood Mackenzie that allow some participation by international oil companies and that have remaining oil and gas reserves, the deep water U.S. Gulf of Mexico ranked 18th highest in terms of remaining oil and gas reserves. Three other U.S. regions were ranked in the top 18 in terms of reserves. These were the U.S. Rocky Mountains (8th), Alaska (14th), and the U.S. Gulf Coast (15th), but these regions are not uniquely covered by the federal fiscal regimes, as state and private resource owners may also exist. Wood Mackenzie also ranked oil and gas fiscal regimes in terms of their attractiveness for investment. Wood Mackenzie's measure of oil and gas fiscal attractiveness took into account both reward, associated with factors such as resource size, and risk, including the extent to which government take includes bonuses. With respect to reward, Wood Mackenzie compared the levels of government take with the size of oil and gas fields governed by the various oil and gas fiscal systems. The risk ranking reflected whether or not the system included bonus payments, which increase the risk to investors because they must be paid whether or not economic volumes of oil and gas are eventually found on an oil and gas tract. Risk also included a measure of the extent to which, and the way in which, the resource owner held an equity share in the resources being developed. The impact of the fiscal terms on the rewards and risks associated with a wide range of hypothetical new investments was assessed under the terms of each of the 103 oil and gas fiscal systems included in this section of the study. Based on these assessments, Wood Mackenzie ranked the deep water U.S. Gulf of Mexico fiscal system as more attractive for investment than 60 (about 58 percent) of the 103 fiscal systems ranked. More broadly, other measures indicate that the United States is an attractive place to invest in oil and gas production.
For example, since 2002, as oil prices have risen and gas prices have remained high by historical standards, the number of oil and gas drilling rigs operating in the United States has increased much faster than in the rest of the world, which indicates that companies in recent years have continued to find the United States a conducive place to invest in oil and gas production. Specifically, according to data on crude oil rig counts from Baker Hughes, the number of rigs in use globally, excluding the United States, increased by about 18 percent, from an annual average of 998 in 2002 to 1,180 through the first 4 months of 2008, while the number of rigs operating in the United States increased about 113 percent, from 831 rigs in 2002 to 1,768 rigs in 2007 and 1,829 rigs in April 2008. These increases coincided with the increase in oil and gas prices over the same period and indicate that the United States has remained an attractive place to invest in oil and gas as prices have risen. While rig counts can reasonably be associated with the attractiveness of a region for development and production, they do not tell the whole story. For example, according to Baker Hughes, the Gulf of Mexico rig count fluctuated over the longer term and has decreased in recent years. Specifically, from 1973 to 1981, rig counts in the Gulf of Mexico increased from 80 to 231 before generally decreasing to 45 in 1992. They then generally rose again until 2001. From 2001 to April 2008, the annual rig count in the Gulf of Mexico decreased from 148 to 58. This decline has occurred despite the Gulf of Mexico being generally considered an attractive target for investment, both from the perspective of the government take and because of the potential for significant oil and gas resources. Other analyses report that the oil and gas industry appears to have performed favorably in recent years compared with other industries. The Energy Information Administration (EIA) reported in December 2007 that from 2000 through 2006, the return on equity, which compares a company's profit with the value of the shares held by the company's owners, for the major energy producers, referred to as Financial Reporting System (FRS) companies, averaged 7 percentage points higher than that of the U.S. Census Bureau's "All Manufacturing Companies." According to the report, this reversed a trend where the return on equity for the major energy producers averaged 2 percentage points lower than All Manufacturing Companies from 1985 to 1999. The American Petroleum Institute, in a 2007 study, showed that from 2000 to 2005, the average return on investment for oil and gas production was about 61 percent higher than for the Standard & Poor's (S&P) industrials. However, the average return on investment for the industry has matched or exceeded the returns for the S&P industrials only in recent years; over the 25-year period from 1980 to 2005, the average return on investment for oil and gas production was about 18 percent lower than for the S&P industrials. A GAO analysis found that the "upstream," or exploration and production, segments of the domestic oil and gas production companies also received higher rates of return than companies operating in other U.S. manufacturing industries from 2002 through 2006. We analyzed financial data from S&P's Compustat and EIA's FRS. From 2002 through 2006, the upstream segments of the domestic oil and gas production companies averaged a 17.4 percent return on investment, compared with 15.2 percent for all other manufacturing companies.
When both the upstream and "downstream" (refining and marketing) segments are included in the analysis, the oil and gas industry return on investment averaged over 20 percent during this period. This short-term picture, however, contrasts with a longer-term analysis, which shows the oil and gas industry receiving a return on investment that is comparable to, or slightly lower than, that received by other manufacturing industries over the past 30 years. Our analysis found that during this period, upstream oil and gas production averaged an 11.2 percent return on investment, with the entire oil and gas industry receiving an average 13.7 percent return on investment. All other manufacturing companies averaged a 12.3 percent return on investment during this period. This recent improvement in financial performance from 2000 through 2006 coincided with rising oil and gas prices. Further, since 2006, oil and gas prices have risen even higher, while EIA's most recent projections to 2030 are for oil and gas prices to remain much higher than they were for most of the period from 1985 through 1999. In addition, the United States is also generally ranked favorably as a place to conduct business by the World Bank, by business media sources, including The Economist, and by AM Best. For example, the World Bank ranked the United States as the third most favorable place to conduct business of 178 countries analyzed in a 2007 study. The Economist in October 2007 ranked the United States as the ninth highest of 82 countries analyzed for projected favorability of business environment from 2008 to 2012. Finally, as of February 2008, the United States remained in the top tier—of five possible tiers—on AM Best's country risk index, meaning the United States generally posed the least risk for investors of the five possible levels assigned. The lack of price flexibility in royalty rates and the inability to change fiscal terms for existing leases have in the past put pressure on Interior and the Congress to change royalty rates on future leases on an ad hoc basis. For example, in 1980, a time when oil prices were comparable in inflation-adjusted terms to today's prices, Congress passed a windfall profit tax, which amounted to an excise tax per barrel of oil produced in the United States. Congress repealed that tax in 1988, at a time when oil prices had fallen significantly from their 1980 level. The tax attempted to recoup for the federal government much of the revenue that would have otherwise gone to the oil industry as a result of the decontrol of oil prices. Further, in 1995—a period with relatively low oil and gas prices—the federal government enacted the Outer Continental Shelf Deep Water Royalty Relief Act (DWRRA). In implementing the DWRRA for leases sold in 1996, 1997, and 2000, MMS specified that royalty relief would be applicable only if oil and gas prices were below certain levels, known as "price thresholds," with the intention of protecting the government's royalty interests if oil and gas prices increased significantly. MMS did not include these same price thresholds for leases it issued in 1998 and 1999. In addition, the Kerr-McGee Corporation—which was active in the Gulf of Mexico and is now owned by Anadarko Petroleum Corporation—filed suit challenging Interior's authority to include price thresholds in DWRRA leases issued from 1996 through 2000. Recently, the U.S.
District Court for the Western District of Louisiana granted summary judgment in favor of Kerr-McGee concerning the application of price thresholds to those leases, and this ruling is currently under appeal. In our June 2008 report on the potential forgone revenues at stake in the Kerr-McGee litigation, we found that the value of future forgone royalties is highly dependent on oil and gas prices and on production levels. Assuming that the District Court's ruling is upheld, future forgone royalties from all the DWRRA leases issued from 1996 through 2000 could range widely—from a low of about $21 billion to a high of $53 billion, depending on future oil and gas prices and production levels. The $21 billion figure assumes relatively low production levels and oil and gas prices that average $70 per barrel and $6.50 per thousand cubic feet over the lives of the leases. The $53 billion figure assumes relatively high production levels and oil and gas prices that average $100 per barrel and $8 per thousand cubic feet over the lives of the leases. A royalty relief provision was also included in the Energy Policy Act of 2005 for leases issued during the 5-year period beginning on August 8, 2005. In 2007, the Secretary of the Interior twice increased the royalty rate for future Gulf of Mexico leases—in January, the rate for deep water leases was raised to 16-2/3 percent, and in October, the rate for all future leases in the Gulf, including those issued in 2008, was raised to 18-3/4 percent. Interior estimated these actions would increase federal oil and gas revenues by $8.8 billion over the next 30 years. The January 2007 increase applied only to deep water Gulf of Mexico leases; the October 2007 increase applied to all water depths in the Gulf of Mexico. These royalty rate increases appear to be a response by Interior to the high prices of oil and gas that have led to record industry profits and raised questions about whether the existing federal oil and gas fiscal system gives the public an appropriate share of revenues from oil and gas produced on federal lands and waters. However, the royalty rate increases do not address these record industry profits from existing leases at all, and high profits will likely remain as long as the existing leases produce oil and gas or until oil and gas prices fall. In addition, in choosing to increase royalty rates, Interior did not evaluate the entire oil and gas fiscal system to determine whether or not these increases were sufficient to balance investment attractiveness and appropriate returns to the federal government for oil and gas resources. On the other hand, according to Interior, it did consider factors such as industry costs for outer continental shelf exploration and development, tax rates, rental rates, and expected bonus bids. Further, because the new royalty rates are not flexible with respect to oil and gas prices, Interior and the Congress may again be under pressure from industry or the public to further change the royalty rates if and when oil and gas prices either fall or continue rising. Finally, these royalty changes affect only Gulf of Mexico leases and do not address onshore leases at all, which should also be considered in light of the increases in oil and gas prices.
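The sensitivity of forgone royalties to price and production assumptions, discussed above, can be illustrated with a simplified calculation. In the sketch below, the 12.5 percent deep water royalty rate and the low- and high-case prices come from the report; the production volumes are invented placeholders rather than GAO's estimates, so the outputs are illustrative and will not match the $21 billion and $53 billion figures.

```python
# Simplified sketch of the forgone-royalty arithmetic: royalties forgone equal
# the royalty rate times the value of the relieved production.
ROYALTY_RATE = 0.125  # deep water rate applicable to the pre-2007 DWRRA leases

def forgone_royalties(oil_price, gas_price, oil_bbl, gas_mcf):
    """Royalties forgone on relieved production, in dollars."""
    production_value = oil_price * oil_bbl + gas_price * gas_mcf
    return ROYALTY_RATE * production_value

low = forgone_royalties(oil_price=70.0, gas_price=6.50,
                        oil_bbl=1.0e9, gas_mcf=10.0e9)   # invented volumes
high = forgone_royalties(oil_price=100.0, gas_price=8.00,
                         oil_bbl=1.5e9, gas_mcf=15.0e9)  # invented volumes
print(f"low case: ${low / 1e9:.1f} billion; high case: ${high / 1e9:.1f} billion")
```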
In 2007, the Secretary of the Interior twice increased the royalty rate for future Gulf of Mexico leases—in January, the rate for deep water leases was raised to 16-2/3 percent, and in October, the rate for all future leases in the Gulf, including those issued in 2008, was raised to 18-3/4 percent. Interior estimated these actions would increase federal oil and gas revenues by $8.8 billion over the next 30 years. The January 2007 increase applied only to deep water Gulf of Mexico leases; the October 2007 increase applied to all water depths in the Gulf of Mexico. These royalty rate increases appear to be a response by Interior to the high prices of oil and gas that have led to record industry profits and raised questions about whether the existing federal oil and gas fiscal system gives the public an appropriate share of revenues from oil and gas produced on federal lands and waters. However, the royalty rate increases do not address record industry profits from existing leases at all, and high profits will likely remain as long as those leases produce oil and gas or until oil and gas prices fall. In addition, in choosing to increase royalty rates, Interior did not evaluate the entire oil and gas fiscal system to determine whether or not these increases were sufficient to balance investment attractiveness and appropriate returns to the federal government for oil and gas resources. On the other hand, according to Interior, it did consider factors such as industry costs for outer continental shelf exploration and development, tax rates, rental rates, and expected bonus bids. Further, because the new royalty rates are not flexible with respect to oil and gas prices, Interior and the Congress may again be under pressure from industry or the public to further change the royalty rates if and when oil and gas prices either fall or continue rising. Finally, these royalty changes affect only Gulf of Mexico leases and do not address onshore leases at all, which should also be considered in light of the increases in oil and gas prices.

In addition, Wood Mackenzie reports that the deep water U.S. Gulf of Mexico ranked in the bottom half of countries in terms of oil and gas fiscal system stability, based on repeated changes to fiscal terms for future leases and on the relative lack of built-in flexibility that would allow the fiscal terms to adjust to market conditions. Specifically, the Wood Mackenzie study ranked the stability of the deep water U.S. Gulf of Mexico fiscal terms lower than that of 71 (about 69 percent) of the 103 oil and gas fiscal systems evaluated. In contrast, a key trend among governments in recent years has been to make fiscal terms more responsive to market conditions. By adding progressive features to their oil and gas fiscal systems, such as royalty rates that increase with oil and gas prices (illustrated in the sketch below), these governments are making their systems more stable over time by reducing incentives for industry or the public to push for ad hoc changes in fiscal terms as future prices change. Wood Mackenzie's measure of fiscal stability combines two criteria: recent history of changes to fiscal terms and built-in flexibility. As discussed previously in this report, royalty rates in the Gulf of Mexico changed three times after 1995, with the royalty relief in the mid-1990s and the two increases in royalty rates in 2007. However, as noted above, the study was conducted before the second 2007 increase in royalty rates, so the government take would likely have increased, but the U.S. stability rating could have fallen in the intervening period. Built-in flexibility reflects the relative degree to which a fiscal system is regressive or progressive, with more progressive systems being more flexible. A flexible system does not mean changing the fiscal terms of existing contracts but rather having a system in place that automatically adjusts to changing economic and market conditions.

Oil and gas companies we communicated with stated a clear preference for stable fiscal terms, other things being equal. Overall, oil and gas companies may be more willing to invest in flexible systems, given that they tend to be inherently more stable and therefore are less likely to be arbitrarily changed on a recurring basis. Oil and gas companies and industry trade associations we contacted provided us a range of views on the advantages and disadvantages of various oil and gas fiscal systems and generally indicated that one of the most important features of any system is its stability and predictability. Stability of fiscal terms is important because oil and gas companies are making very long-term investments, and uncertainty about whether or not the resource owner will change the fiscal terms during the lifetime of the investment adds to the investment risk. The respondents also said that the terms of the oil and gas fiscal system should consider industry exploration and development costs, the likelihood of discovery, and political and economic risks. While companies surely prefer a lower government take, all else constant, to the extent that stability is also preferred, a more stable system may be able to remain competitive for investment while resulting in a higher government take than a less stable system. In particular, companies may be willing to pay a larger average share of oil and gas revenues if they believe that oil and gas fiscal systems will not change when market conditions change, as occurred with the windfall profits charges that a number of countries have recently imposed. Such willingness to accept lower expected profits in exchange for lower risk is a common feature of investment markets.
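One way to make the built-in flexibility described above concrete is a sliding-scale royalty in which the rate steps up with the realized price. The sketch below is a hypothetical illustration of the concept only; the price breakpoints are invented and do not represent the terms of any particular government, although the two upper rates echo the 2007 Gulf of Mexico rates mentioned earlier.

```python
# Hypothetical sliding-scale royalty: the rate steps up with the realized
# oil price, so government take rises automatically in high-price periods
# without reopening existing lease terms. Breakpoints and rates are invented.

SCHEDULE = [  # (price floor in $/bbl, royalty rate at or above that floor)
    (0, 0.125),
    (60, 0.1667),   # roughly 16-2/3 percent
    (90, 0.1875),   # roughly 18-3/4 percent
]

def royalty_rate(oil_price: float) -> float:
    """Return the royalty rate in effect at a given oil price."""
    rate = SCHEDULE[0][1]
    for floor, r in SCHEDULE:
        if oil_price >= floor:
            rate = r
    return rate

for price in (40, 75, 110):
    print(f"${price}/bbl -> royalty rate {royalty_rate(price):.2%}")
```

Because the schedule is fixed in advance, government take tracks prices automatically, which is the stability property that reduces pressure for ad hoc changes to fiscal terms.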
In addition to the potential trade-off between oil and gas fiscal system stability and government take, companies may be willing to pay higher average shares of revenues if governments bear some of the risk that companies take on when they purchase the rights to explore for oil and gas. For example, in the United States, as for a number of other governments, leases are awarded through a bidding process that requires companies to pay bonus bids for the rights to explore and develop leases. There are advantages to requiring such bids. First, when companies have to compete with one another to win a lease, the lease is more likely to be awarded to a company with the expertise and resources to properly explore and develop the resources on the lease than if leases are awarded using some other rationing mechanism that does not take into account how much companies are willing to pay for the lease. In addition, bonus bids guarantee the public some revenue early in the exploration and development process, which can take a number of years to complete. However, the use of bonus bids pushes a great deal of risk onto oil and gas companies and requires them to estimate many uncertain factors, including the amounts of oil and gas that will ultimately be produced on the lease, the costs of that production, and the prices of oil and gas over the entire working life of the lease. In general, the more risk companies bear, the higher the rate of return they must expect in order to be willing to take on a project. In fiscal systems requiring bonus bids or other up-front payments, the companies bear the risk that leases will not generate economically significant oil and gas production. In fact, in the United States, a large proportion of leases that companies have paid for do not generate economic levels of production; after purchasing a lease, paying rent for the duration of its initial term, and spending whatever resources they devoted to exploring for oil and gas, the companies simply let the lease revert to the government when the initial term expires.

Some oil and gas fiscal systems mitigate the risk associated with up-front company expenditures by allowing the companies to recover exploration and development costs before higher royalty payments begin. For example, Alberta, Canada, has used such fiscal terms. Other fiscal systems share risk with companies by more strongly linking government take to company profits. In such oil and gas fiscal systems, government take is low in the early years of a lease, when exploration and early development are being undertaken, but increases if production increases or if oil and gas prices increase once production begins. The state of Alaska has recently changed its fiscal terms to increase its government take and to strengthen the linkage between government take and company profits. Both Alberta and the state of Alaska have higher government takes than the U.S. Gulf of Mexico, according to the Wood Mackenzie study.

Interior does not routinely evaluate the federal oil and gas fiscal system as a whole, monitor what other resource owners worldwide are receiving for their energy resources, or evaluate and compare the attractiveness of the United States for oil and gas investment with that of other oil and gas regions.
As a result, Interior cannot assess whether or not there is a proper balance between the attractiveness of federal lands and waters for oil and gas investment and a reasonable assurance that the public is getting an appropriate share of revenues from this investment. This is true of the U.S. Gulf of Mexico as well as other federal oil and gas producing regions. Interior does not have procedures in place for routinely evaluating the ranking of (1) the federal oil and gas fiscal system against those of other resource owners or (2) industry rates of return on federal leases compared to those of other U.S. industries, which could factor into any decisions about whether or how to alter the fiscal systems in response to changing market conditions. Interior officials told us that they have a "bid adequacy review process" for offshore leases that determines whether the bonus bid meets criteria designed to ensure fair market value of the leased tract but that onshore leases do not have a similar bid adequacy provision. Moreover, Interior maintains it has been responsive to changes in market conditions through revisions to lease terms, including changes in minimum bonus bid levels, fluctuating royalty rates, and price thresholds. However, as we have discussed previously in this report, bonus bids have both advantages and disadvantages with respect to their likely impact on overall government take. Further, frequent adjustments to fiscal terms are not looked on favorably by industry, especially when they involve increases in royalty rates or other charges. Interior indicated, in commenting on the report draft, that it is in the process of evaluating other fiscal approaches, such as sliding scale royalties, for some oil and gas leases.

We did not evaluate the effectiveness of the bid adequacy review process in terms of its intended goal of ensuring that bonus bids on offshore federal leases are competitive. However, even assuming that these bids are competitive, we do not think that this is sufficient to ensure that the other elements of the system are appropriately balancing the interests of taxpayers and oil and gas companies. In light of the complexity of oil and gas fiscal systems, the great deal of uncertainty surrounding the volumes and future prices of oil and gas, and the costs of production, oil and gas companies cannot be expected to accurately forecast all the factors that will ultimately determine the value of a lease at the time that lease is sold. As a result, oil and gas company profits have tended to rise and fall over time with oil and gas prices, putting pressure on Interior to alter fiscal terms in a reactive rather than a strategic way. Further, the fact that Interior does not apply the same or a similar bid adequacy process for onshore leases raises questions about how Interior, overall, is providing reasonable assurance that even the bonus bids it receives are competitively determined in all publicly owned oil and gas producing regions.

While Interior has made many specific changes to components of the federal oil and gas fiscal system over the years to adjust to changing market conditions, these changes were generally not made as part of a comprehensive review of the fiscal system that took into account the relative ranking of the U.S. government take or other comparisons with other countries or regions. The last time Interior conducted a comprehensive evaluation of the oil and gas fiscal system was over 25 years ago. The lack of a recent comprehensive re-evaluation of the U.S.
federal fiscal system stands in contrast to the actions of many other governments that have recently re-evaluated or are currently re-evaluating their fiscal systems in light of rising oil and gas prices and higher industry profits and rates of return. For example, as previously discussed in this report, a number of countries have recently imposed windfall profits taxes or other mechanisms to increase the resource owners' shares of oil and gas revenues from existing projects. Wood Mackenzie estimates that these changes will ultimately result in these countries' collecting additional oil and gas revenues of between $118 billion and $400 billion, depending on future oil and gas prices. In evaluating an oil and gas fiscal system, all components of the system, including bonus bids, land rental rates, royalties, and oil and gas company taxes, must be considered. However, while Interior has a great deal of expertise and data from years of administering and collecting revenues from oil and gas leases on federal lands and waters that would be essential for any review of the federal oil and gas fiscal system, it does not have the authority to change taxes and, therefore, cannot fully revise the system without legislative action by the Congress. Further, it is essential to keep federal leases competitive with other potential investments governed by different fiscal systems. Therefore, in addition to input from Interior, oil and gas industry experts must also be consulted in any comprehensive review of the federal oil and gas fiscal system. For example, when Alberta recently reviewed its oil and gas fiscal system, it convened a panel that included experts from academia, energy research and consulting firms, and the energy industry and also hired a consultant to evaluate the system and make specific recommendations. Following this review, Alberta increased some elements of the oil and gas fiscal system. However, prior to this review, Canadian government corporate taxes were reduced, which made Alberta more attractive for investors. In any comprehensive review of the U.S. oil and gas fiscal system, taxes may need to be part of the discussion. Therefore, congressional action may be needed to change the federal oil and gas fiscal system, if changes are ultimately determined to be appropriate.

Oil prices have increased in recent years to levels not seen since the late 1970s and early 1980s when adjusted for inflation. Natural gas prices have also been high by historical standards in recent years. These high prices have coincided with rising oil company profits. Moreover, EIA's long-term outlook projects these prices to remain much higher than they were for much of the past 25 years. Our work indicates that federal oil and gas leases in the deep water U.S. Gulf of Mexico and other U.S. regions are attractive investments and that the government take in the U.S. Gulf of Mexico ranks among the lowest across a large number of other oil and gas fiscal systems. Other measures, including fiscal attractiveness and rates of return, further indicate that the U.S. Gulf of Mexico and other U.S. oil and gas producing regions are attractive places to invest. However, the regressive nature of the U.S. federal fiscal systems and other factors have caused these fiscal systems to be unstable over time; this instability adds risk to oil and gas investments and may reduce the amount oil and gas companies are willing to pay in total for the rights to explore and develop federal leases.
Because of these facts, and because Interior has not re-evaluated its oil and gas fiscal system in over 25 years, a comprehensive re-evaluation is called for. While Interior could collect data and commission studies to re-evaluate the federal fiscal system, the agency does not currently have the information to fully compare the federal fiscal system with those of other governments, including states or foreign countries. In addition, Interior does not have the authority to make changes to all elements of the federal fiscal system if such changes were found to be desirable. Finally, because of the complexity of evaluating oil and gas fiscal systems and the importance of striking a balance between remaining an attractive place for investment and providing revenue to the federal government, it is important that independent experts, as well as representatives from the oil and gas industry, be consulted.

In the draft report we sent to Interior for comment, we made recommendations to address these issues. In its response, Interior stated that it did not fully concur with our recommendations because it had already contracted for a study that will address many of the issues we raise. However, because Interior's ongoing study is limited in scope and confined to a specific region in the Gulf of Mexico, rather than being a review of the entire federal oil and gas fiscal system as we recommended, we do not find the agency's stated rationale for not agreeing fully with our recommendations to be convincing. Therefore, we believe that Congress may wish to consider directing the Secretary of the Interior to convene an independent panel to perform a comprehensive review of the federal oil and gas fiscal system. Further, in order to keep abreast of potentially changing market conditions going forward, the Congress may wish to consider directing the Secretary of the Interior to have the Minerals Management Service and other relevant agencies within Interior establish procedures for periodically collecting data and information and conducting analyses to determine how the federal government take and the attractiveness for oil and gas investors in each federal oil and gas region compare to those of other resource owners, and to report this information to the Congress.

The Department of the Interior provided us comments on a draft of the report. Overall, the department agreed that it is important to reassess the federal oil and gas fiscal system but did not fully concur with either of our two recommendations to (1) perform a comprehensive review of the system using an independent panel and (2) adopt policies and procedures to keep abreast of important changes in the oil and gas market and in other countries' efforts to adjust their oil and gas management practices in light of these changes. We disagree with Interior's rationale for its lack of full concurrence with our recommendations and have, therefore, elected to reframe the recommendations as Matters for Congressional Consideration in the final report. In response to our first recommendation, Interior indicated that it would be premature and duplicative for the department to undertake such a review because it had recently contracted with an outside party to conduct a 2-year study of the policies affecting the pace of area-wide leasing and revenues in the Central and Western Gulf of Mexico. We disagree that our recommended review is either premature or duplicative of this Interior study effort.
First, a comprehensive review is overdue, given that Interior has not performed a comprehensive evaluation of the oil and gas fiscal system in over 25 years and in light of the dramatic increases in oil and gas prices and industry profits in recent years. Further, as documented in this report, many other oil and gas resource owners have been re-evaluating and changing their oil and gas fiscal systems in response to these recent market conditions. The Congress and the public are justifiably concerned about whether the federal government is getting a fair return for its energy resources as oil and gas company profits have reached record levels. In addition, our recommended review would not duplicate Interior's ongoing study, which is geographically limited to only two sections of the Gulf of Mexico. In contrast, we recommended that Interior review all its oil and gas fiscal systems, both onshore and offshore. Nor does Interior's ongoing study cover the full scope of review that we recommended, including looking at how other resource owners are managing their oil and gas fiscal systems. Further, Interior's ongoing study does not explicitly look at the stability of the system as we recommended, and this appears to be a critical factor influencing changes to oil and gas fiscal systems globally. Finally, Interior's ongoing effort does not utilize an independent panel. We believe it is essential to empanel an independent body, representative of major stakeholders, including those representing the interests of industry and the public, in order to develop recommendations that strike an appropriate balance between remaining an attractive place for investment and providing revenue to the federal government.

In response to our second recommendation, Interior implied that such an effort was unnecessary because Interior agencies that lease federal minerals already keep abreast of current literature on the fiscal systems of other resource owners. During our work, we identified only one Interior study done over the past 25 years that provided information on the U.S. government take compared to other fiscal systems. While that one Interior study, issued in 2006, showed, similar to our work, that the U.S. government take was low compared to other fiscal systems, it is also worth noting that the study itself relied on dated, 1994 government-take information. Therefore, we do not believe that Interior has adequately kept abreast of important trends in oil and gas management, especially as they relate to how other resource owners are managing these resources. In addition, our recommendation went further than simply keeping abreast of current literature. In particular, our recommendation sought to have Interior monitor and report on how the federal government's fiscal terms for oil and gas development compare with the terms of other resource owners worldwide. Interior's full letter commenting on the draft report is printed as appendix III, and our detailed response follows. In addition, Interior made technical comments that we have addressed as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this report. At that time, we will send copies to appropriate congressional committees, the Secretary of the Interior, the Director of MMS, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request.
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

We performed our work at the offices of the Department of the Interior (Interior), the Bureau of Land Management (BLM), and the Minerals Management Service (MMS) in Washington, D.C., from May 2007 to September 2008, in accordance with generally accepted government auditing standards. We focused our analysis of government take and industry rates of return on the U.S. Gulf of Mexico because it represents approximately 79 percent of oil and 50 percent of gas production on federal leases and because there are complicating factors for onshore oil and gas leases, such as state and local taxes or fees that may differ by locality, which the available studies do not fully address. We did evaluate information that applied more broadly to the United States, specifically with respect to overall measures of the attractiveness of the United States for oil and gas investment. However, we cannot infer from our review of the Gulf of Mexico federal oil and gas leases how the data on federal government take or industry returns on investment apply to federal onshore leases. In general, the results of this review can compare the federal system associated with the U.S. Gulf of Mexico to other oil and gas fiscal systems but cannot provide specific prescriptive recommendations for how to change the federal fiscal system to achieve a fair return for the public from the sale of oil and gas on public lands and waters. We also compared the federal oil and gas fiscal system to all types of fiscal systems around the world to encompass the range of choices that oil and gas companies face when deciding where to invest.

To determine the degree to which the federal government is receiving a fair return, our work included reviewing various pieces of energy resource management legislation enacted over the last several decades. This included, among others, the Outer Continental Shelf Lands Act of 1953 (OCSLA) and its amendments and the Federal Land Policy and Management Act of 1976 (FLPMA) and its amendments. We also collected and analyzed various pieces of Interior energy resource policy and management information. To evaluate how the U.S. government take compares to those in other countries, we reviewed the results of a study procured from Wood Mackenzie, a leading industry consultant, and recent studies conducted by other private consultants or resource owners. We also collected and analyzed various studies generated by MMS, the agency responsible for collecting oil and gas royalties from federal lands and waters, and interviewed private consulting firm officials. In evaluating the study results, we conducted interviews with study authors and an industry expert to discuss the study methodologies and the appropriate interpretation of the results. Based on these interviews and our review of study results, we believe the general approach that these study authors took was reasonable and that the results of the studies are credible. However, we did not fully evaluate each study's methodology or the underlying data used to make the government take estimates.
Overall, because all the studies came to similar conclusions with regard to the relative ranking of the U.S. federal government take, and because such studies are used by oil and gas companies and governments alike for evaluating the relative competitiveness of specific oil and gas fiscal systems, we are confident that the broad conclusions of the studies are valid. To assess the extent to which the United States' oil and gas fiscal system is able to remain stable as market conditions change, we relied heavily on the study and data we obtained from Wood Mackenzie. We interviewed industry experts and gathered information regarding the types of fiscal systems and the relative stability offered by each. We interviewed company officials and industry experts to obtain information on their preferences regarding fiscal system characteristics. We also purchased data from Compustat and analyzed those data along with data published by the Energy Information Administration. The financial data we procured are widely used by private companies and governments for comparing company and industry rates of return over time; we used rate of return because Interior has in the past used it as a credible measure to evaluate the profitability of the Gulf of Mexico for firms conducting oil and gas exploration there versus the relative profitability of other manufacturing firms operating in the United States. We also evaluated data reported by the American Petroleum Institute and other sources. Further, we reviewed various reports prepared over the last 2 years by private sources on the profitability of oil and gas companies operating in the United States versus operating elsewhere in the world. We also spoke to industry officials regarding aspects of the various fiscal systems in which they operate. Finally, we discussed the issue of a "fair return" with various Interior, BLM, and MMS officials, as well as members of the oil and gas industry. To determine what steps Interior takes to obtain reasonable assurance that the federal government take provides a fair return to the public, we reviewed Interior studies and procedures and interviewed officials from MMS.

We conducted this performance audit from May 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

According to Interior, companies operating in the U.S. Gulf of Mexico had received more than $1.3 billion in royalty relief through September 30, 2007. Table 2 lists the companies that have received royalty relief under DWRRA and the amounts of that relief. Six companies had signed agreements with Interior allowing price thresholds to be placed on royalties to be paid in the future. Those companies are BP Exploration and Production; ConocoPhillips & Burlington Resources Offshore, Inc.; Marathon; Shell; Walter Hydrocarbons; and Walter Oil and Gas. According to Interior information dated February 4, 2008, ConocoPhillips & Burlington Resources Offshore, Inc., had not received royalty relief.

The following are GAO's comments on the Department of the Interior's letter dated August 8, 2008.

1.
Regarding Interior’s statements that (1) the draft report relies heavily on measures of government take but does not clarify the link between government take and investment attractiveness, and (2) the draft report does not relate the significance of the OCSLA and FLPMA laws, we disagree. The report on page 1 states that several factors need to be considered, including the size of availability of the oil and gas resources in place; the cost of finding and developing these resources, and the stability of other the oil and gas fiscal systems and the country in general. Also on page 2, we note that a fair government take would strike a balance between encouraging private companies to invest in the development of oil and as resources on federal lands and waters while maintaining the public’s interest in collecting the appropriate level of revenues from the sale of the public’s resources. Further, we devote a significant portion of our discussion of objective one to how the attractiveness of the U.S. oil and gas fiscal system compares with those of other resource owners, and concludes that U.S Gulf of Mexico and other U.S. places are attractive places to invest. With regard to the significance of the OCSLA and FLPMA laws, on page 3 of the report we discuss the provisions of the OCSLA and FLPMA laws and how they relate to the management of the federal oil and gas fiscal system. 2. Interior commented that the report’s conclusion that inflexibility in the federal oil and gas fiscal system is responsible for significant reductions in the federal fiscal take is not supported. We maintain that the inherent inflexibility of the federal fiscal system means that government receipts from the production of oil and gas on federal lands and waters have not tracked with the prices of oil and gas. This lack of flexibility explains, in part, why the Congress enacted the Deep Water Royalty Relief Act in 1995, a time when oil and gas prices were much lower than they are today. The lack of flexibility of the royalty rates for some of the leases issued under this Act, as implemented by Interior, will end up costing the public billions of dollars in foregone revenues. Further, the recent increases to royalty rates that Interior references in its comments do nothing to address the bulk of leases already held and for which industry profits have increased as high as they have precisely because neither the royalty rates, nor other components of the oil and gas fiscal system were sufficiently flexible to allow federal revenues to increase automatically when oil and gas prices and industry profits increased. Overall, Interior should strive to achieve fair market value over time, not simply evaluate market conditions at the time leases are issued. 3. With regard to Interior’s comments that although it has not conducted a comprehensive evaluation of the federal oil and gas fiscal system, it has evaluated expected resources and conditions on the Outer Continental Shelf (offshore) tracts, we agree that Interior takes some steps to evaluate offshore leases but the objective addresses a broader evaluation of how Interior monitors the performance and appropriateness of the entire federal oil and gas fiscal system, including offshore and onshore, and also including assessing performance over time rather than at the time a lease is sold. Interior officials told us they evaluate offshore tracts before the issuance of a lease for prospectivity of the lease and use such measures to determine an adequate minimum bid for the lease. 
However, as Interior notes in its own comments, Interior officials have not systematically reviewed the bid outcomes of offshore tracts.

4. We agree that onshore and offshore leases can be very different, and for the reasons stated in Interior's comments. That is why we recommended a comprehensive review of the entire federal oil and gas fiscal system, including onshore and offshore. We also recommended that the results of this comprehensive review be presented to the Congress so that it can act appropriately in the event any existing laws or regulations that govern the leasing and collection of revenues from federal oil and gas leases could be improved in light of the recommendations of the independent panel.

5. Interior states that the report implies that Interior maximizes government receipts from oil and gas leases. We disagree that the report implies this and can find no place in the report where we believe a reader would make such an inference.

6. We disagree with Interior's statement that it "analyzes fiscal terms before each lease sale and reviews the results of each sale." Interior's analysis of prospective leases is for offshore leases only and, according to Interior officials, is an analysis of the prospectivity of the offshore tract, designed to set minimum adequate bids. It is not a review of "fiscal terms," as Interior states in its comments. With regard to the two recent royalty rate increases for future oil and gas leases in the Gulf of Mexico, these increases do not resolve fair market value for past leases issued with inflexible fiscal terms and are themselves inflexible. Therefore, if future oil and gas prices turn out to be different from what Interior expected when it made the changes, the resulting outcome will again not reflect a fair return and could be too high or too low, depending on what happens in the oil and gas markets.

7. With regard to Interior's comment that it recently contracted with a panel of academic and oil and gas industry experts to conduct a study of fiscal arrangements, including fixed and sliding royalty terms, please see our general response to Interior's comments on page 24.

8. We agree with the statements Interior makes in this paragraph and note that these concepts are also well represented in our report. For example, we note that investment choices are affected by many variables, including the fiscal system of rents, royalties, and bonus bids, as well as the cost of capital, risk, and the attractiveness of investments; indeed, we designed our work to discuss the first range of issues in our first objective and the second range of issues in our second objective. We conclude, and Interior agrees, that any increases in federal revenues through higher fiscal terms must be carefully weighed; however, Interior has not done this "careful weighing" in making its royalty rate increases. That is why we recommended a comprehensive review of the federal oil and gas fiscal system.

9. With regard to Interior's comment that it operates under a management and leasing policy defined by the Congress in the OCSLA and FLPMA, we agree, and this is reflected on page 3 of the draft report. However, Interior cannot effectively conduct the mineral leasing programs without evaluating federal mineral leasing systems and assessing industry rates of return and other factors discussed in this report.
Interior must keep abreast of these issues and of developments in fiscal regimes elsewhere, and advise Congress on developments in the competitiveness of the federal oil and gas fiscal system versus those employed by other resource owners. Further, our audit work shows that Interior has responded to oil and gas market changes in a reactive, rather than strategic and forward-looking, manner, and we believe the Congress needs to be kept abreast of changes affecting federal oil and gas leasing and revenue generation.

10. Interior comments that the draft report does not mention the royalty relief mandated by the Congress in the Energy Policy Act of 2005 and its decision to seek repeal of this provision, and that the impact of the law should have been taken into consideration. We agree that the draft report did not discuss the 2005 law explicitly but note that the results we report do implicitly take this law into consideration. Our results on the government take and attractiveness of investment in the deep water Gulf of Mexico derive largely from a 2007 study done by Wood Mackenzie that took into account the impact of the laws existing at the time of the study. We have added language to make explicit acknowledgment of the Energy Policy Act of 2005.

11. See our general response to Interior's comments on pages 24-25 of this report.

12. See our general response to Interior's comments on pages 24-25 of this report.

13. See our general response to Interior's comments on pages 24-25 of this report.

In addition to the individual named above, Jon Ludwigson (Assistant Director), Robert Baney, Ron Belak, Nancy Crothers, Glenn Fischer, Michael Kendix, Carol Kolarik, Michelle Munn, Daniel Novillo, Ellery Scott, Rebecca Shea, Dawn Shorey, Barbara Timmerman, and Maria Vargas made key contributions to this report.
In fiscal year 2007, domestic and foreign companies received over $75 billion from the sale of oil and gas produced from federal lands and waters, according to the Department of the Interior (Interior), and these companies paid the federal government about $9 billion in royalties for this oil and gas production. The government also collects other revenues in rents, taxes, and other fees, and the sum of all revenues received is referred to as the "government take." The terms and conditions under which the government collects these revenues are referred to as the "oil and gas fiscal system." This report (1) evaluates government take and the attractiveness for investors of the federal oil and gas fiscal system, (2) evaluates how the absence of flexibility in this system has led to large foregone revenues from oil and gas production on federal lands and waters, and (3) assesses what Interior has done to monitor the performance and appropriateness of the federal oil and gas fiscal system. To address these issues, we reviewed expert studies and interviewed government and industry officials.

In addition to having a low government take, the deep water Gulf of Mexico and other U.S. regions are attractive targets for investment because they have large remaining oil and gas reserves and the U.S. is generally a good place to do business compared to many other countries with comparable oil and gas resources. Multiple studies completed as early as 1994 and as recently as June 2007 indicate that the U.S. government take in the Gulf of Mexico is lower than that of most other fiscal systems. For example, data GAO evaluated from a June 2007 industry consulting firm report indicated that the government take in the deep water U.S. Gulf of Mexico ranked 93rd lowest of 104 oil and gas fiscal systems evaluated. Generally, other measures indicate that the United States is an attractive target for oil and gas investment.

The lack of price flexibility in royalty rates--automatic adjustment of these rates to changes in oil and gas prices or other market conditions--and the inability to change fiscal terms on existing leases have in the past put pressure on Interior and the Congress to change royalty rates on an ad hoc basis, with consequences that could amount to billions of dollars of foregone revenue. For example, royalty relief granted on leases issued in the deep water areas of the Gulf of Mexico between 1996 and 2000--a period when oil and gas prices and industry profits were much lower than they are today--could cost the federal government between $21 billion and $53 billion, depending on future oil and gas prices and production levels, should ongoing litigation challenging the authority of Interior to place price thresholds that would remove the royalty relief offered on certain leases be resolved against the government. Further, royalty rate increases in 2007 are expected to generate modest increases in federal revenues from future leases offered in the Gulf of Mexico. However, in choosing to increase royalty rates, Interior did not evaluate the entire oil and gas fiscal system to determine whether or not these increases strike the proper balance between the attractiveness of federal leases for investment and appropriate returns to the federal government for oil and gas resources. Interior does not routinely evaluate the federal oil and gas fiscal system, monitor what other governments or resource owners are receiving for their energy resources, or evaluate and compare the attractiveness of federal lands and waters for oil and gas investment with that of other oil and gas regions.
As a result, Interior cannot assess whether or not there is a proper balance between the attractiveness of federal leases for investment and appropriate returns to the federal government for oil and gas resources. Specifically, Interior does not have procedures in place for evaluating the ranking of (1) the federal oil and gas fiscal system against those of other resource owners or (2) industry rates of return on federal leases against those of other U.S. industries. Interior also does not have the authority to alter the tax components of the oil and gas fiscal system. All these factors are essential to inform decisions about whether or how to alter the federal oil and gas fiscal system in response to changing market conditions.
Initial joint reform efforts have, in part, aligned with key practices that we have identified for organizational transformations, such as having committed leadership and a dedicated implementation team, but reports issued by the Joint Reform Team do not provide a strategic framework that contains other important elements of a successful transformation, such as a mission statement and long-term goals with related outcome-focused performance measures to show progress, and do not identify obstacles to progress and possible remedies. In September 2002, GAO convened a forum to identify and discuss practices and lessons learned from major private and public sector organizational mergers, acquisitions, and transformations that can serve to guide federal agencies as they transform their processes in response to governance challenges.

Consistent with some of these key practices, in June 2008 Executive Order 13467 established the Suitability and Security Clearance Performance Accountability Council, commonly known as the Performance Accountability Council, as the head of the governmentwide governance structure responsible for achieving reform goals, driving implementation, and overseeing clearance reform efforts. The Deputy Director for Management at OMB—who was confirmed in June 2009—serves as the chair of the council. The Executive Order also designated Executive Agents for Suitability and Security. The Joint Reform Team, while not formally part of the governance structure established by Executive Order 13467, works under the council to provide progress reports to the President, recommend research priorities, and oversee the development and implementation of an information technology strategy, among other things. Membership on this council currently includes senior executive leaders from 11 federal agencies. In addition to high-level leadership, the reform effort has benefited from a dedicated implementation team—the Joint Reform Team—to manage the transformation process from the beginning.

Although the high-level leadership and governance structure of the current reform effort distinguish it from previous efforts, it is difficult to gauge the progress of reform, or to determine if corrective action is needed, because the council, through the Joint Reform Team, has not established a method for evaluating the progress of the reform efforts. Without a strategic framework that fully addresses the long-standing security clearance problems and incorporates key practices for transformation—including the ability to demonstrate progress leading to desired results—the Joint Reform Team is not in a position to demonstrate to decision makers the extent of progress that it is making toward achieving its desired outcomes, and the effort is at risk of losing momentum and not being fully implemented.

In addition to the key practices, the personnel security clearance joint reform reports that we reviewed collectively have also begun to address essential factors for reforming the security clearance process, which represents a positive step. GAO's prior work and IRTPA identified several factors key to reforming the clearance process. These include (1) developing a sound requirements determination process, (2) engaging in governmentwide reciprocity, (3) building quality into every step of the process, (4) consolidating information technology, and (5) identifying and reporting long-term funding requirements.
However, the Joint Reform Team’s information technology strategy, which is intended to be a cross- agency collaborative initiative, does not yet define roles and responsibilities for implementing a new automated capability. GAO’s prior work has stressed the importance of defining these roles and responsibilities when initiating cross-agency initiatives. Also, the joint reform reports do not contain any information on initiatives that will require funding, determine how much they will cost, or identify potential funding sources. Without long-term funding requirements, decision makers in both the executive and legislative branches will lack important information for comparing and prioritizing proposals for reforming the clearance processes. The reform effort’s success will be dependent upon the extent to which the Joint Reform Team is able to fully address these key factors moving forward. Therefore, we recommended that the OMB Deputy Director of Management, in the capacity as Chair of the Performance Accountability Council, ensure that the appropriate entities—such as the Performance Accountability Council, its subcommittees, or the Joint Reform Team— establish a strategic framework for the joint reform effort to include (1) a mission statement and strategic goals; (2) outcome-focused performance measures to continually evaluate the progress of the reform effort toward meeting its goals and addressing long-standing problems with the security clearance process; (3) a formal, comprehensive communication strategy that includes consistency of message and encourages two-way communication between the Performance Accountability Council and key stakeholders; (4) a clear delineation of roles and responsibilities for the implementation of the information technology strategy among all agencies responsible for developing and implementing components of the information technology strategy; and (5) long-term funding requirements for security clearance reform, including estimates of potential cost savings from the reformed process that are subsequently provided to decision makers in Congress and the executive branch. In oral comments on our report, OMB stated that it partially concurred with our recommendation to establish a strategic framework for the joint reform effort. Further, in written agency comments provided to us jointly by DOD and ODNI, they also partially concurred with our recommendation. Additionally, DOD and ODNI commented on the specific elements of the strategic framework that we included as part of our recommendation. For example, in their comments, DOD and ODNI agreed that the reform effort must contain outcome-focused performance measures, but added that these metrics must evolve as the process improvements and new capabilities are developed and implemented because the effort is iterative and in phased development. We continue to believe that outcome-focused performance measures are a critical tool that can be used to guide the reform effort and allow overseers to determine when the reform effort has accomplished its goals and purpose. In addition, DOD and ODNI asserted that considerable work has already been done on information technology for the reform effort, but added that even clearer roles and responsibilities will be identified moving forward. 
Regarding our finding that, at present, no single database exists in accordance with IRTPA's requirement that OPM establish an integrated database that tracks investigations and adjudication information, DOD and ODNI stated that the reform effort continues its iterative implementation of system improvements that expand access to the information agencies need. They also acknowledged that more work needs to be done to identify long-term funding requirements.

While our work also found that DOD and OPM met timeliness requirements for personnel security clearances in fiscal year 2008, the executive branch's 2009 required report to Congress does not reflect the full range of time it takes to make all initial clearance decisions. Currently, 80 percent of initial clearance decisions are to be made within 120 days, on average, and by December 2009 a plan is to be implemented under which, to the extent practical, 90 percent of initial clearance decisions are to be made within 60 days, on average. Under both requirements, the executive branch can exclude the slowest clearances (20 percent and 10 percent, respectively) and then report the average of the remaining clearances. The most recent report stated that the average time to complete the fastest 90 percent of initial clearances for military and DOD civilians in fiscal year 2008 was 124 days. However, without taking averages or excluding the slowest clearances, we analyzed 100 percent of initial clearances granted in fiscal year 2008 and found that 39 percent still took more than 120 days. By limiting its reporting on timeliness to the average of the fastest 90 percent of the initial clearance decisions made in fiscal year 2008, the executive branch did not provide congressional decision makers with visibility over the full range of time it takes to make all initial clearance decisions and the reasons why delays continue to exist, as the sketch below illustrates.
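The effect of this reporting convention can be seen with a small numerical illustration. The sketch below uses invented processing times, not actual clearance data, to show how averaging only the fastest 90 percent of cases can satisfy a 120-day target even when a substantial share of all cases exceeds it.

```python
# Invented example: a trimmed average of the fastest 90 percent can look
# compliant even when a large share of all cases exceeds the 120-day target.

days = sorted([45, 60, 75, 90, 100, 110, 130, 150, 180, 400])

fastest_90 = days[: int(len(days) * 0.9)]        # drop the slowest 10 percent
trimmed_avg = sum(fastest_90) / len(fastest_90)  # what gets reported
share_over_120 = sum(d > 120 for d in days) / len(days)

print(f"Average of fastest 90 percent: {trimmed_avg:.0f} days")   # ~104 days
print(f"Share of all cases over 120 days: {share_over_120:.0%}")  # 40 percent
```

In this invented sample the trimmed average is about 104 days, yet 40 percent of all cases exceed 120 days, which parallels the pattern in the fiscal year 2008 data described above.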
In addition to limited visibility over the timeliness of clearances, the executive branch's annual reports to Congress on the personnel security clearance process have provided decision makers with limited data on quality, and the executive branch has missed opportunities to make the clearance process transparent to Congress. For example, we independently estimated that 87 percent of about 3,500 investigative reports prepared by OPM that DOD adjudicators (employees who decide whether to grant a clearance to an applicant based on the investigation and other information) used to make clearance decisions for initial top secret clearances adjudicated in July 2008 were missing required documentation. We found, however, that DOD has not issued formal guidance clarifying if and under what circumstances adjudicators can use incomplete investigative reports to make adjudication decisions. For DOD adjudicative files, we estimated that 22 percent were missing required documentation of the rationale for granting clearances to applicants with security concerns. Because neither OPM nor DOD measures the completeness of their investigative reports or adjudicative files, both agencies are limited in their ability to explain the extent to which or the reasons why some documents are incomplete. Incomplete documentation may lead to increases in the time needed to complete the clearance process and in the overall costs of the process and may reduce the assurance that appropriate safeguards are in place to prevent DOD from granting clearances to untrustworthy individuals.

We have stated that timeliness alone does not provide a complete picture of the clearance process and emphasized that attention to quality could increase reciprocity—accepting another federal entity's clearances—and the executive branch, though not required to include information on quality in its annual reports, has latitude to report appropriate information. We are encouraged that, while the 2009 report did not provide any data on quality, unlike previous reports it did identify quality metrics that the executive branch proposes to collect. Because the executive branch has not fully addressed quality or the full range of time to complete clearances in its reports, it has missed opportunities to provide congressional decision makers with full transparency over the clearance process. Therefore, in our recent report, we recommended that the Deputy Director for Management at OMB, as the Chair of the Performance Accountability Council, include (1) comprehensive data on the timeliness of the personnel security clearance process and (2) metrics on quality in future versions of the IRTPA-required annual report to Congress. We also recommended that DOD clarify its guidance to specify when adjudicators can use incomplete investigative reports in adjudication decisions and that OPM and DOD measure the completeness of their investigation and adjudication documentation to improve the completeness of future documentation.

In commenting on a draft of our report, OMB concurred with both of our recommendations to that agency, commenting that it recognized the need for more reporting on timeliness and quality. OMB described some steps that the Performance Accountability Council is taking to address our recommendations, including developing measures to account more comprehensively for the time it takes to complete the end-to-end clearance process. In its written comments, DOD also concurred with both of the recommendations directed to the department and described specific steps it expects to implement later this year to address the recommendations. Finally, in its written comments, OPM did not indicate whether it concurred with the one recommendation we made to that agency. Instead, OPM highlighted improvements it has made in reducing delays in the clearance investigations process since DOD transferred this function to OPM in 2005.

In my statement, I have highlighted recommendations from the two reports we recently released that, if implemented, will help the responsible agencies continue to guide the security clearance reform effort and improve the clearance process. We are encouraged that the Joint Reform Team's efforts during the past year have included several actions to improve the process, and we recognize that OPM and DOD are currently meeting IRTPA timeliness requirements, which represents significant and noteworthy progress. At the request of your Subcommittee, we will continue to monitor ongoing joint reform efforts with a focus on reciprocity and information technology advances, as well as efforts by the responsible agencies to implement our related recommendations, and we will continue to assess the impact of those efforts on the security clearance process governmentwide. Although the high-level leadership and governance structure of the current reform effort distinguish it from previous attempts at clearance reform, it is important to note that in June 2009 the administration confirmed a vital leadership component necessary for sustaining the momentum achieved to date.
OMB’s new Deputy Director for Management will play a crucial role in deciding how to implement the recommendations contained in the reports we recently released, as well as prior recommendations on this issue, and in leading the reform effort in his role as chair of the Performance Accountability Council. Madam Chairwoman, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the Subcommittee may have at this time. For further information about this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, at (202) 512-3604, or [email protected]. Key contributors to this statement include David Moser (Assistant Director), Lori Atkinson, Joseph M. Capuano, Sara Cradic, Susan Ditto, Cindy Gilbert, Shvetal Khanna, James P. Klein, Greg Marchand, Shannin O’Neil, and Sarah Veale. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Personnel Security Clearances: Progress Has Been Made to Reduce Delays But Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. DOD Personnel Clearances: Preliminary Observations about Timeliness and Quality. GAO-09-261R. Washington, D.C.: December 19, 2008. Personnel Security Clearance: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found For Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed To Improve The Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long- standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the key recommendations from the two reports we recently released, which include (1) the need for a fully developed strategic framework for the reform process that includes outcome-focused performance measures to show progress and (2) more transparency in annually reporting to Congress on the timeliness and quality of the clearance process. This testimony is based on our review of the Joint Reform Team's plans, as well as our work on DOD's security clearance process, which includes reviews of clearance-related files and interviews of senior officials at the Office of Management and Budget (OMB), DOD, Office of the Director of National Intelligence (ODNI), and OPM. In addition, this statement is based on key practices and implementation steps for mergers and organizational transformations. We conducted our work on both reports between March 2008 and May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Although the high-level leadership and governance structure of the current reform effort distinguish it from previous efforts, it is difficult to gauge progress of reform, or determine if corrective action is needed, because the council, through the Joint Reform Team, has not established a method for evaluating the progress of the reform efforts. Without a strategic framework that fully addresses the long-standing security clearance problems and incorporates key practices for transformation—including the ability to demonstrate progress leading to desired results—the Joint Reform Team is not in a position to demonstrate to decision makers the extent of progress that it is making toward achieving its desired outcomes, and the effort is at risk of losing momentum and not being fully implemented. In addition to limited visibility over timeliness of clearances, the executive branch's annual reports to Congress on the personnel security clearance process have provided decision makers with limited data on quality, and the executive branch has missed opportunities to make the clearance process transparent to Congress. For example, we independently estimated that 87 percent of about 3,500 investigative reports prepared by OPM that DOD adjudicators (employees who decide whether to grant clearance to an applicant based on the investigation and other information) used to make clearance decisions for initial top secret clearances adjudicated in July 2008 were missing required documentation. Because neither OPM nor DOD measures the completeness of their investigative reports or adjudicative files, both agencies are limited in their ability to explain the extent to which or the reasons why some documents are incomplete. Incomplete documentation may lead to increases in the time needed to complete the clearance process and in the overall costs of the process and may reduce the assurance that appropriate safeguards are in place to prevent DOD from granting clearances to untrustworthy individuals.
FDIC was established by Congress to maintain the stability of and public confidence in the nation’s financial system by insuring deposits, examining and supervising financial institutions, and resolving troubled institutions. Congress created FDIC in 1933 in response to the thousands of bank failures that had occurred throughout the late 1920s and early 1930s. The Bank Insurance Fund and the Savings Association Insurance Fund were established as FDIC responsibilities under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989, which sought to reform, recapitalize, and consolidate the federal deposit insurance system. The Bank Insurance Fund and the Savings Association Insurance Fund merged into the Deposit Insurance Fund on February 8, 2006, as a result of the passage of the Federal Deposit Insurance Reform Act of 2005. As administrator of the Deposit Insurance Fund, FDIC insures the deposits of banks and savings associations (insured depository institutions). In cooperation with other federal and state agencies, the FDIC promotes the safety and soundness of insured depository institutions by identifying, monitoring, and addressing risks to the Deposit Insurance Fund. FDIC is also the administrator of the Federal Savings and Loan Insurance Corporation Resolution Fund. This fund was created to close out the business of the former Federal Savings and Loan Insurance Corporation and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. FDIC relies extensively on computerized systems to support its mission, including financial operations, and to store the sensitive information that it collects. The corporation uses local and wide area networks to interconnect its systems. To support its financial management functions, FDIC uses, among other things, the following information technology (IT) resources:
- a corporate-wide system that functions as a unified set of financial and payroll systems that are managed and operated in an integrated fashion;
- a system to calculate and collect FDIC deposit insurance premiums and Financing Corporation interest amounts from insured institutions;
- a Web-based application that provides full functionality to support franchise marketing, asset marketing, and asset management;
- an application and Web portal to provide acquiring institutions with a secure method for submitting required data files to FDIC;
- computer programs used to derive the corporation’s estimate of losses from shared loss agreements;
- a system to request access to and receive permission for the computer applications and resources available to its employees, contractors, and other authorized personnel; and
- a primary receivership and subsidiary financial processing and reporting system.
The federal government has seen a marked increase in the number of information security incidents affecting the integrity, confidentiality, and availability of government information, systems, and services. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intentions who can intrude and use their access to obtain sensitive information, commit fraud and identity theft, disrupt operations, or launch attacks against other computer systems and networks. Cyber-based threats to information systems and cyber-related critical infrastructure can come from sources internal and external to the organization.
External threats include the ever-growing number of cyber-based attacks that can come from a variety of sources such as individuals, groups, and countries who wish to do harm to an organization’s systems. Internal threats include errors or mistakes, as well as fraudulent or malevolent acts by employees or contractors working within an organization. Under the Federal Information Security Modernization Act of 2014 (FISMA), the Chairman of FDIC is responsible for, among other things, (1) providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the corporation’s Chief Information Officer (CIO) the authority to ensure compliance with the requirements imposed on the agency under FISMA. FISMA states that the CIO is responsible for developing and maintaining a corporate-wide information security program and for developing and maintaining information security policies, procedures, and control techniques that address all applicable requirements. FISMA also states that the CIO is to designate a senior agency information security officer to carry out the CIO’s responsibilities for information security under the law. In most federal organizations, this official is referred to as the Chief Information Security Officer. At FDIC, the CIO is responsible for, among other things, (1) establishing the information security risk management program and ensuring that it is properly implemented; (2) establishing the overall strategy for how the corporation frames, assesses, responds to, and monitors information security risks; and (3) establishing and promulgating agency-wide information security risk awareness programs and practices. The responsibilities of the FDIC Chief Information Security Officer include, among other things, (1) overseeing the corporation’s information technology security risk management program; (2) providing information security standards, control frameworks, security policy, best practices, and security architecture oversight; (3) ensuring appropriate staffing and support of all information security positions that support the risk management program; and (4) managing and maintaining the continuous monitoring program. For calendar years 2016 and 2015, FDIC implemented numerous information security controls intended to protect its key financial systems. In addition, the corporation addressed 15 of 21 recommendations to mitigate control weaknesses that we had previously identified in our reports in 2013, 2014, 2015, and 2016. Nevertheless, weaknesses remained in FDIC’s implementation of access, configuration management, and information security program controls that threaten the confidentiality, integrity, and availability of its financial systems and information. As we have previously reported, the collective effect of weaknesses in access and configuration management controls, both new and unresolved from previous audits, contributed to our determination that FDIC had a significant deficiency in internal control over financial reporting as of December 31, 2016.
An agency can better protect the resources that support its critical operations and assets from unauthorized access, disclosure, modification, or loss by designing and implementing controls for protecting information system boundaries, identifying and authenticating users, restricting user access to only what has been authorized, encrypting sensitive data, and auditing and monitoring systems to detect potentially malicious activity, among other actions. Although FDIC had implemented numerous controls in these areas, weaknesses nevertheless continued to challenge the corporation in ensuring the confidentiality, integrity, and availability of its information and information systems. Boundary protection controls are intended to restrict logical access into and out of networks and control connectivity to and from network-connected devices. Any connections to the Internet or to other external and internal networks or information systems should occur through controlled interfaces (for example, gateways, routers, switches, and firewalls). In addition, networks should be appropriately configured to adequately protect access paths between systems; this can be accomplished through the use of access control lists and firewalls. National Institute of Standards and Technology (NIST) guidance recommends that organizations employ boundary protection mechanisms to separate organization-defined information system components supporting organization-defined missions and/or business functions. Such isolation limits unauthorized information flows among system components and also provides the opportunity to deploy greater levels of protection for selected components. Consistent with NIST guidance, Office of Management and Budget (OMB) Circular A-130 requires agencies to isolate sensitive or critical information resources (e.g., information systems, system components, applications, databases, and information) into separate security domains with appropriate levels of protection based on the sensitivity or criticality of those resources. FDIC did not implement sufficient internal boundary protection controls on its network to isolate financial systems from other parts of its network. Although the corporation partially isolated financial systems from other parts of the environment using virtual local area networks, it did not always implement controls on network devices to prevent unauthorized users and systems from communicating with the financial systems. According to FDIC, a plan to isolate sensitive systems had been made, but implementation of the plan had been delayed due to other competing priorities. Until it appropriately isolates its financial systems, FDIC faces increased risk that unauthorized or malicious attempts to communicate with its financial systems could go undetected. Identification is the process of distinguishing one user from all others, usually through user identifications (ID). These are important because they are the means by which specific access privileges are assigned and recognized by the computer. However, because the confidentiality of a user ID is typically not protected, other means of authenticating users—that is, determining whether individuals are who they say they are—are typically implemented. The combination of identification and authentication—such as user account-password combinations—provides the basis for establishing accountability and for controlling access to the system.
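To make the identification-and-authentication concept concrete, the following minimal Python sketch shows how a unique user ID (identification) combined with a password check (authentication) supplies the accountability basis described above. The functions and data store are hypothetical illustrations, not FDIC systems or tools.

```python
import hashlib
import hmac
import os

# Hypothetical user store: each person receives a unique ID so that
# system activity can be attributed to exactly one individual.
_users = {}  # user_id -> (salt, password_hash)

def register(user_id: str, password: str) -> None:
    """Identification: create one account per individual, never shared."""
    if user_id in _users:
        raise ValueError(f"duplicate user ID: {user_id}")
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[user_id] = (salt, digest)

def authenticate(user_id: str, password: str) -> bool:
    """Authentication: verify the claimed identity before granting access."""
    if user_id not in _users:
        return False
    salt, stored = _users[user_id]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(stored, candidate)
```

Because each ID maps to exactly one person, audited activity can be attributed to a specific individual; a single privileged account shared among several users breaks that attribution, which is the concern in the finding that follows.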
NIST SP 800-53, revision 4 recommends that agency information systems uniquely identify and authenticate organizational users or processes acting on behalf of organizational users. FDIC did not implement sufficient controls to ensure that users would be held accountable for the use of a key privileged account. Although the corporation employed a software tool to control access to privileged accounts, it did not use the tool to control access to a privileged account that was used by multiple engineers to manage the corporation’s virtual environment. As a result, FDIC’s ability to attribute authorized, as well as unauthorized, system activity to specific individuals could be diminished. Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file. A key component of granting or denying access rights is the concept of “least privilege,” which refers to granting a user only the access rights and permissions needed to perform official duties. To restrict a legitimate user’s access to only those programs and files needed, organizations establish user access rights: allowable actions that can be assigned to a user or to groups of users. File and directory permissions are rules that are associated with a particular file or directory, regulating which users can access it—and the extent of their access rights. To avoid unintentionally giving a user unnecessary access to sensitive files and directories, an organization should give careful consideration to its assignment of rights and permissions. NIST SP 800-53, revision 4 recommends that organizations employ the principle of least privilege by allowing only authorized users (or processes acting on behalf of users) access permission that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions. NIST also recommends periodic reviews of user accounts for compliance with account management requirements. In addition, FDIC policy requires administrators to use designated administrator accounts when conducting administrative tasks. FDIC policy also requires removal of user permissions if the job responsibilities of the user change, if the user transfers to a different organization, or if the user no longer requires access for any other reason. Further, the policy requires that access settings be reviewed periodically to ensure that they remain consistent with existing authorizations and current business needs. During 2016, FDIC improved controls for authorizing users’ access by addressing all nine of the weaknesses pertaining to authorization that we had previously identified and that were still unresolved as of December 31, 2015. For example, FDIC implemented processes for reviewing individuals with access to its data centers; ensuring that users of a key financial application do not conduct access reviews of their own accounts; and removing users’ access to another financial application in a timely manner. However, while it addressed these weaknesses from prior years, the corporation did not always consistently implement authorization controls. Specifically, FDIC database administrators for one database management system did not use designated administrative accounts when performing administrative tasks on certain databases.
Additionally, although the corporation had a process for conducting periodic reviews of access settings on mainframe accounts, it did not include all mainframe accounts in the access review process. Further, about one-fifth of the user accounts we reviewed on a key financial application were granted additional privileges that had not been authorized by the users’ supervisors. This occurred because the official granting the access had institutional knowledge of the privileges that the users would need, and because FDIC’s procedures for granting access to the application did not include responsibilities and procedures for ensuring that the level of access provided had been approved by the users’ supervisor. As a result, these systems are more vulnerable to unauthorized access and modification of data. Cryptography controls can be used to help protect the integrity and confidentiality of data and computer programs by rendering data unintelligible to unauthorized users and/or protecting the integrity of transmitted or stored data. Cryptography involves the use of mathematical functions called algorithms and strings of seemingly random bits called keys. Among other things, the algorithms and keys are used to encrypt a message or file so that it is unintelligible to those who do not have the secret key needed to decrypt it, thus keeping the contents of the message or file confidential. NIST SP 800-53, revision 4 recommends that organizations employ encryption to protect information from unauthorized disclosure and modification during transmission. The NIST standard for cryptographic modules, including the encryption algorithms they implement, is Federal Information Processing Standards Publication (FIPS Pub.) 140-2. FDIC had not completed actions to implement our prior recommendation to use FIPS-compliant encryption for all mainframe connections. Although FDIC officials stated that they initially intended to implement a tool to enable mainframe encryption in 2016, the corporation determined that the tool would not encrypt all of the information within its planned scope. FDIC officials from the Division of Information Technology stated that the corporation is continuing to consider feasible options for encrypting mainframe connections. In the meantime, sensitive data—such as user IDs and passwords—continue to be transmitted over the network in clear text, exposing them to potential compromise. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. Automated mechanisms may be used to integrate audit monitoring, analysis, and reporting into an overall process for investigation and response to suspicious activities. Audit and monitoring controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and even recognize an ongoing attack. NIST SP 800-53, revision 4 states that organizations should review and analyze information system audit records for indications of inappropriate or unusual activity and report the findings to designated agency personnel. Additionally, NIST states that information systems should produce audit records that establish the type of event, when the event occurred, and the identity of any individuals or subjects associated with the event, among other things.
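As a rough illustration of the audit-record content NIST describes (the type of event, when it occurred, and the identity associated with it), the following Python sketch emits structured log entries suitable for later review. The field names and logger setup are illustrative assumptions, not FDIC's actual logging design.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: each record captures the event type, the
# time it occurred, and the identity associated with it, as NIST
# SP 800-53 recommends.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def record_event(event_type: str, user_id: str, resource: str, outcome: str) -> None:
    entry = {
        "event_type": event_type,                        # e.g., "file_change"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,                              # who performed it
        "resource": resource,                            # what was touched
        "outcome": outcome,                              # "success" or "failure"
    }
    audit_log.info(json.dumps(entry))

# Example: a change to a monitored critical file is attributable to a
# specific individual, not merely recorded as "a change occurred".
record_event("file_change", "jdoe", "/etc/app/config.yaml", "success")
```

Records of this shape would, for instance, let reviewers attribute changes to critical files to specific individuals, a level of detail whose absence is noted in the findings below.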
FDIC improved its audit and monitoring controls by implementing four of the five recommendations pertaining to audit and monitoring that we had previously identified and that were still unresolved as of December 31, 2015. For example, the corporation had ensured that data on successful logins was being captured for each of its database systems for investigation of potential security incidents; implemented a centralized audit monitoring capability for its databases; improved the logging and monitoring process for several key systems; and documented all critical files on key servers that required real-time monitoring. However, other weaknesses existed in FDIC’s implementation of audit and monitoring controls. Specifically:
- FDIC had not performed vulnerability scans of all servers in its IT environment. In its November 2016 report on the effectiveness of the corporation’s information security program in accordance with the requirements of FISMA, the FDIC Office of Inspector General (OIG) reported that, at the time of its audit, FDIC was not performing vulnerability scans for more than 900 production servers within one of its general support systems. In addition, we found that FDIC had not scanned several production servers in another of its general support systems during the 3-month time period (July, August, and September 2016) that we reviewed. According to FDIC officials, these conditions occurred because the corporation did not have an inventory of network assets that included all servers and because its legacy scanning and discovery tool had failed to identify all servers. The officials added that the scanning and discovery tool had since been replaced. Without regularly scanning all servers, FDIC cannot reasonably be assured that vulnerabilities in its servers are identified and corrected in a timely manner, increasing the risk that its systems and information may be compromised. (A sketch of how scan coverage can be reconciled against an asset inventory appears below.)
- FDIC had not completed actions to address our prior year recommendation to ensure that changes made to critical files on certain key servers are adequately monitored. Although the corporation specified which directories on the servers were to be monitored, the logs that were generated did not provide sufficient detail to identify the individuals making changes. According to officials in FDIC’s Division of Information Technology, the corporation plans to implement a new solution in 2017 to enable security personnel to identify users making file system changes. Until FDIC fully addresses this recommendation by ensuring that users making changes to critical files are identified and logged, increased risk continues to exist that an unauthorized individual could inappropriately modify these files without being identified.
In addition to access controls, agencies should implement policies, procedures, and techniques for managing the configuration of information systems. Configuration management controls are intended to prevent unauthorized changes to information system resources (for example, software programs and hardware configurations) and to provide reasonable assurance that systems are configured and operating securely and as intended.
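Both the vulnerability-scanning gap described above and the configuration management activities just introduced rest on the same foundation: an authoritative asset inventory. Checking scan coverage then reduces to reconciling that inventory against recent scan results. The following minimal Python sketch, referenced in the first finding above, shows the idea; the host names, dates, and scanning window are made-up assumptions, not FDIC data.

```python
from datetime import datetime, timedelta

# Hypothetical inputs: an authoritative asset inventory and the hosts
# covered by the most recent vulnerability scans.
inventory = {"srv-001", "srv-002", "srv-003", "srv-900"}
scan_results = {
    "srv-001": datetime(2016, 9, 28),
    "srv-002": datetime(2016, 7, 3),
}

def unscanned_hosts(inventory, scan_results, window_days=30, as_of=None):
    """Return hosts never scanned, or not scanned within the window."""
    as_of = as_of or datetime(2016, 9, 30)
    cutoff = as_of - timedelta(days=window_days)
    stale = {h for h, last in scan_results.items() if last < cutoff}
    never = inventory - scan_results.keys()
    return never | (stale & inventory)

print(sorted(unscanned_hosts(inventory, scan_results)))
# ['srv-002', 'srv-003', 'srv-900'] -> candidates for immediate scanning
```

The reconciliation is only as reliable as the inventory itself, which is why the report ties the scanning gaps to the absence of a complete listing of network assets.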
NIST SP 800-53, revision 4 recommends, among other things, that agencies develop and document an inventory of information system components that accurately reflects the current system and includes all components within the system’s authorization boundary; establish a baseline configuration for the information system and its constituent components; and identify and correct information system flaws, including installing security-relevant software updates within a defined time period of their release. Consistent with NIST guidelines, FDIC policy states that mandatory configuration settings must be established and documented for IT products employed within the information system using information system-defined security configuration checklists. The policy also states that applicable vendor-released software patches designed to address security vulnerabilities are to be implemented in accordance with the CIO organization’s security patching schedule. Nevertheless, FDIC had not consistently implemented configuration management controls. For example, although the corporation used multiple tools to track and validate its IT assets, it had not established a single, authoritative, accurate listing of all IT assets in its environment. This occurred because FDIC had not established a process to reasonably assure that a complete, accurate inventory was developed and maintained. Additionally, although the corporation had defined baseline configuration settings for its information systems and had conducted configuration scans of its systems, it had not yet fully implemented processes for verifying that configurations are consistently applied. Further, although FDIC had applied patches to certain third-party applications supporting financial processing and had made significant progress in identifying and tracking vulnerabilities related to third-party software, it had not yet fully implemented processes to ensure that assets that require patching are identified correctly. Without establishing a reliable, authoritative listing of its IT assets and documenting, implementing, and monitoring security configurations, FDIC has reduced assurance that its information supporting financial processing is securely configured. Additionally, unless known vulnerabilities in FDIC’s systems and applications are patched, increased risk exists that they could be exploited, potentially exposing the corporation’s financial systems and information to unauthorized access or modification. An entitywide information security management program is the foundation of a security control structure and a reflection of senior management’s commitment to addressing security risks. The security management program should establish a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, or improperly implemented; and controls may be inconsistently applied. FISMA requires each agency to develop, document, and implement an information security program to provide security for the information and information systems that support the agency’s operations and assets, including those provided or managed by another agency, contractor, or other organization on its behalf.
Agency programs are to include, among other things, the following elements:
- periodic assessments of risk, including the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems that support the operations and assets of the organization;
- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency;
- policies and procedures that are based on risk assessments, cost-effectively reduce information security risks to an acceptable level, and ensure that information security is addressed throughout the life cycle of each organizational information system;
- periodic testing and evaluation of the effectiveness of information security policies, procedures, practices, and security controls to be performed with a frequency depending on risk, but no less than annually;
- a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the organization; and
- procedures for detecting, reporting, and responding to security incidents.
In addition, FISMA requires the head of each federal agency to ensure that information security management processes are integrated with agency strategic and operational planning processes. FDIC had developed, documented, and implemented many elements of its corporate information security program. For example, it had defined security categories for the general support systems we reviewed based on risk using NIST guidance; assessed the risk from control deficiencies identified during security control tests; ensured that the general support systems we reviewed were authorized to operate; and conducted a disaster recovery test of its general support systems and mission-critical applications. However, FDIC had not fully or consistently implemented aspects of its information security program, which was an underlying reason for many of the information security weaknesses identified during our review. Specifically, FDIC had not
- included all necessary information in procedures for granting access to a key financial application;
- fully addressed the FDIC OIG’s finding that security control assessments of outsourced service providers had not been completed in a timely manner;
- fully addressed key previously identified weaknesses related to establishing agencywide configuration baselines and monitoring changes to critical server files; and
- completed actions to address the FDIC OIG’s finding that the corporation had not ensured that major security incidents are identified and reported in a timely manner.
In addition, in November 2016, the FDIC OIG reported that the corporation had not yet developed and documented an up-to-date information security strategic plan or completed actions to address weaknesses in its Information Security Managers program. These shortcomings are discussed in more detail in the following section. A key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern the security over an agency’s computing environment. Information security policy is essential to establishing roles, responsibilities, and requirements necessary for implementing an information security program. The supporting procedures provide the information and guidance on implementing the policies.
According to NIST SP 800-53, revision 4, organizations should develop and document procedures to facilitate the implementation of access and configuration management policies and associated controls. Although FDIC developed and documented many information security policies and procedures that were consistent with the NIST Risk Management Framework, its procedure for granting users access to a key financial application did not include responsibilities and steps for ensuring that the level of access provided had been approved by the users’ supervisor. As a result, the official granting access to the application—who had institutional knowledge of the privileges that the users would need—granted additional privileges to some users for which they had not been previously approved. Until it updates its procedure to include these responsibilities and steps, FDIC will continue to face increased risk that users may be granted access to privileges in the application for which they have not been approved. A key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. Security control testing should include management, operational, and technical controls for every system identified in the agency’s required inventory of major systems. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results are used to improve security. FISMA requires that the frequency of tests and evaluations of management, operational, and technical controls be based on risks and occur no less than annually. OMB directs agencies to meet their FISMA-required controls testing by drawing on security control assessment results that include, but are not limited to, continuous monitoring activities. According to NIST SP 800-53, revision 4, continuous monitoring programs facilitate ongoing awareness of threats, vulnerabilities, and information security to support organizational risk management decisions. NIST also recommends that organizations monitor security control compliance by external service providers on an ongoing basis. FDIC developed a continuous control assessment methodology that defined the controls tested for each information system and the frequency with which each control is to be tested. In addition, the corporation tested the effectiveness of the security controls for the three general support systems we reviewed in accordance with the methodology. However, the FDIC OIG has previously reported weaknesses in FDIC’s assessments of its outsourced service providers. Specifically, in October 2015, it reported that the corporation had not always ensured that security assessments of outsourced service providers were completed in a timely manner. In November 2016, the OIG reported that FDIC had made meaningful progress towards completing timely assessments of its outsourced service providers, but noted that continued management attention was warranted in this area to ensure outstanding assessments are completed in a timely manner. When security weaknesses are identified, the related risks should be assessed, appropriate corrective or remediation actions should be taken, and follow-up monitoring should be performed to make certain that corrective actions are effective.
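Remediation of this kind is typically tracked in a plan of action and milestones (POA&M), the subject of the next paragraphs. The following minimal Python sketch shows one way such a tracking record might be represented; the fields, example weakness, and dates are illustrative assumptions, not FDIC's actual POA&M format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoamItem:
    """One hypothetical plan-of-action-and-milestones entry."""
    weakness: str                  # deficiency found during an assessment
    source: str                    # e.g., "external audit", "continuous monitoring"
    planned_action: str
    milestone_date: date
    status: str = "open"           # open / in progress / closed
    notes: list = field(default_factory=list)

    def close(self, verified_on: date) -> None:
        # Follow-up monitoring: record that the fix was verified effective.
        self.status = "closed"
        self.notes.append(f"verified effective on {verified_on.isoformat()}")

# Example entry, echoing a weakness discussed earlier in this report.
item = PoamItem(
    weakness="shared privileged account used to manage the virtual environment",
    source="external audit",
    planned_action="place account under the privileged-access management tool",
    milestone_date=date(2017, 7, 31),
)
item.close(verified_on=date(2017, 6, 15))
print(item.status, item.notes)
```

The key property of such a record is that it couples the identified deficiency to a dated milestone and to evidence of follow-up verification, the full cycle that FISMA requires.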
FISMA specifically requires that agencywide information security programs include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. NIST SP 800-53, revision 4 recommends that organizations develop a plan of action and milestones (POA&M) for information systems to document the planned remedial actions to correct weaknesses or deficiencies identified during security control assessments. A POA&M should also be updated based on the findings from the security controls assessment, security impact analysis, and continuous monitoring activities. FDIC documented POA&Ms for weaknesses identified during internal control assessments and implemented an effective process for tracking and mitigating identified weaknesses for each of the systems that we reviewed. In addition, as of December 31, 2016, FDIC had addressed 15 of the 21 previously reported information system weaknesses that were unresolved at the end of our prior audit. For example, FDIC had improved controls for authorizing users’ access to financial applications and for logging and monitoring financial systems to detect potentially malicious activity. However, six previously identified weaknesses remained unresolved. Until it completes actions to address previously identified weaknesses, FDIC will continue to face increased risk that its systems may not be adequately or consistently protected against unauthorized access to systems or data. Appendix II details the status of weaknesses that were unaddressed as of December 31, 2015, or were initially reported in 2016. Comprehensive monitoring and incident response controls are necessary for rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring computing services. While strong controls may not prevent all incidents, agencies can reduce the risks associated with these events by detecting and promptly responding before significant damage is done. FISMA requires federal agencies to develop and implement procedures for detecting, reporting, and responding to security incidents. NIST SP 800-53, revision 4 further recommends that agencies develop, document, and disseminate procedures to facilitate the implementation of the incident response policy and associated incident response controls. FDIC developed and documented information security policies and procedures on incident response. For example, its policy on reporting computer security incidents states that the FDIC Computer Security Incident Response Team is responsible for evaluating the seriousness of computer security incidents and taking appropriate corrective actions, including notifying FDIC senior management, the OIG, and other outside entities, when appropriate. Nevertheless, shortcomings existed in FDIC’s implementation of its policies. Specifically, FDIC did not provide reasonable assurance that “major incidents,” as defined by OMB guidance, were identified and reported in a timely manner. In July 2016, the OIG reported that FDIC’s incident response policies, procedures, and guidelines did not address major incidents. In addition, the large volume of potential security violations identified by its Data Loss Prevention tool, together with limited resources devoted to reviewing potential violations, hindered meaningful analysis of the information and FDIC’s ability to identify all security incidents, including major ones.
Among other things, the OIG recommended that FDIC (1) revise its incident response policies, procedures, and guidelines to address major incidents; (2) ensure that these revisions include criteria for determining whether an incident is major, consistent with FISMA and OMB guidance; and (3) review the current implementation of the Data Loss Prevention tool to determine how it can be better leveraged to safeguard sensitive FDIC information. In November 2016, the FDIC OIG reported that, in response to these findings, the corporation was working to improve its incident response capabilities by developing an overarching incident response program guide, hiring an incident response coordinator, implementing a new incident tracking system, updating incident response policies and procedures, and performing a comprehensive assessment of the FDIC’s information security and privacy programs. If fully implemented, these actions could improve FDIC’s ability to identify and address security incidents, including major incidents. According to NIST SP 800-39, effective risk management requires organizations such as FDIC to operate in highly complex, interconnected environments using state-of-the-art and legacy information systems—systems that organizations depend on to accomplish their missions and to conduct important business-related functions. The complex relationships among missions, mission/business processes, and the information systems supporting those missions and processes require an integrated, organization-wide view for managing risk. Effective management of information security risk is critical to the success of organizations in achieving their strategic goals and objectives. NIST SP 800-100 states that agencies should have a strategic plan for information security that identifies goals and objectives related to the agency’s mission, specifies a plan for achieving those goals, and establishes short- and mid-term performance targets and measures that allow the agency to track, manage, and monitor its progress toward those goals and objectives. In addition, according to NIST SP 800-39, agencies should establish roles and responsibilities for managing information security risk. However, FDIC had not fully implemented key activities for managing and overseeing information security risk across the organization. Specifically:
- In November 2016, the FDIC OIG reported that FDIC’s information security strategic plan was not up-to-date. Specifically, although the corporation had an information security strategic plan, this plan had expired in 2015 and did not fully reflect OMB’s cybersecurity priorities or the corporation’s strategies. Without an up-to-date strategic plan, ongoing and planned IT initiatives may not be linked to the corporation’s long-term security and business goals and priorities.
- FDIC had not completed actions to address gaps in how the roles and responsibilities of its Information Security Managers (ISM) are defined and carried out. In October 2015, the FDIC OIG reported that the duties and roles of the ISMs in addressing information security requirements and risks had evolved since the ISM program was established. It also reported that FDIC had not completed a recent comprehensive assessment to determine whether the skills, training, oversight, and resource allocations pertaining to the ISMs enabled them to effectively carry out their increased responsibilities and address security risks within their divisions and offices.
In November 2016, the OIG reported that FDIC had conducted an assessment of its ISM program, which identified gaps in areas such as available resources, training, and performance measurement. The OIG also reported that FDIC plans to complete all actions to address these gaps by 2018. Until then, however, increased risk exists that these capability gaps could impact the effectiveness of the FDIC’s information security program. FDIC had implemented and strengthened many information security controls over its financial systems and information. For example, the corporation had taken steps to improve controls for restricting user access to only what has been authorized, auditing and monitoring systems for potentially malicious activity, and applying patches to address known software vulnerabilities by addressing many of the weaknesses that we previously reported. However, management attention is needed to address new and previously identified deficiencies in access controls—including boundary protection, identification and authentication, authorization, cryptography, and audit and monitoring controls—and in configuration management controls. These deficiencies, considered collectively, are the basis for our determination that FDIC had a significant deficiency in internal control over financial reporting in its information systems controls as of December 31, 2016. In addition, FDIC had developed, documented, and implemented many elements of its corporate information security program. However, further actions are needed to address shortcomings in the corporation’s program, such as ensuring that its procedure for granting access to a key financial application includes key responsibilities and steps. Given the important role that information systems play in FDIC’s internal controls over financial reporting, it is vitally important that the corporation address weaknesses in information security controls—both old and new—as part of its ongoing efforts to mitigate the risks from cyber attacks and to ensure the confidentiality, integrity, and availability of its financial and sensitive information. Continued and consistent management commitment and attention to access, configuration management, and security management controls will be essential to addressing existing deficiencies and further improving FDIC’s information system controls. To help improve the corporation’s implementation of its information security program, we recommend that the Chairman of FDIC direct the Chief Information Officer to update the procedure for granting access to the key financial application, to include responsibilities and steps for ensuring that the access privileges granted have been approved by the users’ supervisor. In a separate report with limited distribution, we are also making six recommendations to resolve shortcomings in FDIC’s internal control over financial reporting and help strengthen access and configuration management controls over key financial information, systems, and networks. In written comments on a draft of this report (reprinted in appendix II), FDIC concurred with our recommendation to improve its implementation of its information security program and stated that corrective actions will be completed by July 2017. FDIC also provided an attachment detailing its actions to implement our recommendation. In addition to the aforementioned comments, FDIC provided technical comments that we have addressed in our report as appropriate.
In these comments, the corporation expressed concern about one additional recommendation to improve its information security program that we had made in our draft report. Specifically, the draft report had included a recommendation that FDIC develop, document, and implement procedures for ensuring that configuration actions identified by its Computer Security Incident Response Team are taken. In written and oral comments, FDIC officials provided additional information about the corporation’s incident handling process in order to clarify that the condition we identified did not pose a risk to the corporation’s information and systems. After our review of this information, we agree that the condition does not pose a risk to the corporation and, accordingly, removed the recommendation from our final report. We are sending copies of this report to interested congressional parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact Nick Marinos at (202) 512-9342 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] and [email protected]. Key contributors to this report are listed in appendix II. The objective of this information security review was to determine the effectiveness of the Federal Deposit Insurance Corporation’s (FDIC) controls in protecting the confidentiality, integrity, and availability of its financial systems and information. To do this, we identified and reviewed FDIC information systems control policies and procedures, tested controls over key financial applications, and held interviews with key security representatives and management officials in order to determine whether information security controls were in place, adequately designed, and operating effectively. The review was conducted as part of our audit of the financial statements of the two funds administered by FDIC: the Deposit Insurance Fund and the Federal Savings and Loan Insurance Corporation Resolution Fund. The scope of our audit included an examination of FDIC information security policies, procedures, and controls over key financial systems in order to (1) assess the effectiveness of corrective actions taken by FDIC to address weaknesses we previously reported and (2) determine whether any additional weaknesses existed. This work was performed in support of our opinion on internal control over financial reporting as it relates to our audits of the calendar years 2016 and 2015 financial statements of the two funds administered by FDIC. The independent public accounting firm of Cotton & Company LLP tested certain FDIC information systems controls, including the follow-up on the status of FDIC’s corrective actions during calendar year 2016 to address open recommendations from our prior years’ reports. We agreed on the scope of the audit work, monitored the firm’s progress, and reviewed the related audit documentation to determine whether the firm’s findings were adequately supported. To determine whether controls over key financial systems and information were effective, we considered the results of FDIC’s actions to mitigate previously reported weaknesses that remained open as of December 31, 2015, and performed audit work at FDIC facilities in Arlington, Virginia.
We concentrated our evaluation primarily on the controls for systems and applications associated with financial processing, such as the (1) New Financial Environment; (2) Communication, Capability, Challenge, and Control System; (3) Portfolio Investment Accounting; (4) Assessments Information Management System; and (5) general support systems. Our selection of the systems to evaluate was based on consideration of systems that directly or indirectly support the processing of material transactions that are reflected in the funds’ financial statements. Our audit methodology was based on the Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. Using standards and guidance from the National Institute of Standards and Technology and the Office of Management and Budget, as well as FDIC’s policies and procedures, we evaluated controls by
- examining network diagrams and device configuration settings to determine if intrusion detection and prevention systems were monitoring the FDIC network for suspicious activity;
- reviewing privileged accounts to verify that access to privileged accounts was appropriately controlled and that accounts were not shared among multiple users;
- analyzing user application authorizations to determine whether users had more permissions than necessary to perform their assigned functions;
- reviewing administrative account settings to determine if privileged accounts were used as required and if access to a privileged account was appropriately controlled;
- assessing configuration settings to evaluate settings used to audit security-relevant events; and
- inspecting vulnerability scans for in-scope systems to determine whether scans were conducted regularly and whether patches were appropriately installed on affected systems.
Using the requirements of the Federal Information Security Modernization Act of 2014, which establishes elements for an agency-wide information security program, we evaluated FDIC’s implementation of its security program by
- examining system authorization documentation for information on FDIC’s implementation of risk categorization and risk assessment practices;
- reviewing information security policies and procedures to determine whether they were adequately documented and implemented;
- examining FDIC training records for information on general and specialized security training;
- reviewing assessments of security controls to determine if they had been completed as scheduled;
- reviewing an FDIC Office of Inspector General (OIG) report for information on the corporation’s processes for assessing security controls of outsourced service providers;
- examining remedial action plans to determine whether FDIC had addressed identified vulnerabilities in a timely manner;
- examining two FDIC OIG reports for information on the corporation’s incident response practices;
- reviewing security event records to determine if security events were tracked and resolved appropriately;
- reviewing continuity of operations plans, contingency plans, and test results to determine whether contingency planning controls were appropriately implemented; and
- examining two FDIC OIG reports for information on the corporation’s information security strategic management activities.
To determine the status of FDIC’s actions to correct or mitigate previously reported information security weaknesses, we reviewed prior GAO reports to identify previously reported weaknesses, examined FDIC’s corrective action plans, and assessed the effectiveness of those actions. We conducted this audit in accordance with U.S. generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individuals named above, Gregory Wilshusen (Director); Gary Austin, Paul Foderaro, and Michael Hansen (Assistant Directors); William Cook (Analyst in Charge); Wayne Emilien; Nancy Glover; Franklin Jackson; Thomas J. Johnson; Jean Mathew; David Plocher; Dacia Stewart; and Adam Vodraska made key contributions to this report.
FDIC has a demanding responsibility enforcing banking laws, regulating financial institutions, and protecting depositors. Because of FDIC's reliance on information systems, effective information security controls are essential to ensure that the corporation's systems and information are adequately protected from inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction. As part of its audit of the 2016 and 2015 financial statements of the Deposit Insurance Fund and the Federal Savings and Loan Insurance Corporation Resolution Fund, which are administered by FDIC, GAO assessed the effectiveness of the corporation's controls in protecting the confidentiality, integrity, and availability of its financial systems and information. To do so, GAO examined security policies, procedures, reports, and other documents; tested controls over key financial applications; and interviewed FDIC personnel. The Federal Deposit Insurance Corporation (FDIC) implemented numerous information security controls intended to protect its key financial systems. However, further actions are needed to address weaknesses in access controls—including boundary protection, identification and authentication, and authorization controls—and in configuration management controls. For example, the corporation did not sufficiently isolate financial systems from other parts of its network, ensure that users would be held accountable for the use of a key privileged account, or establish a single, accurate listing of all IT assets in its environment. The corporation established a comprehensive framework for its information security program and implemented many aspects of its program. For example, FDIC (1) defined security categories for the general support systems we reviewed based on risk; (2) assessed the risk from control deficiencies identified during security control tests; and (3) conducted a disaster recovery test of its general support systems and mission-critical applications. In addition, FDIC addressed 15 of the 21 previously reported weaknesses that were unresolved as of December 31, 2015. However, an underlying reason for many of the information security weaknesses identified during GAO's review was that FDIC did not fully implement other aspects of its program. For example, the corporation did not (1) include necessary information in procedures for granting access to a key financial application and (2) fully address the FDIC Office of the Inspector General's finding that the corporation did not always identify and report major security incidents in a timely manner. Until FDIC takes the necessary steps to address both new and previously reported control deficiencies, its sensitive financial information and resources will remain at increased risk of inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction. The combination of the continuing and new information security control deficiencies in access and configuration management controls, considered collectively, represent a significant deficiency in FDIC's internal control over financial reporting as of December 31, 2016. GAO is recommending that FDIC take one action to more fully implement its information security program. In a separate report with limited distribution, GAO made six recommendations to FDIC to address newly identified weaknesses in access and configuration management controls.
In commenting on a draft of this report, FDIC agreed with GAO’s recommendation and stated that corrective actions to implement the recommendation will be completed by July 2017.
DHS has made progress in addressing high-risk areas for which it has sole responsibility, but significant work remains. DHS has made important progress in implementing, transforming, strengthening, and integrating its management functions in human capital, acquisition, financial management, and IT. This has included taking numerous actions specifically designed to address our criteria for removing areas from the high-risk list. However, as we reported in our February 2013 high-risk update, this area remains high risk because the department has significant work ahead. DHS has met two of our criteria for removal from the high-risk list (leadership commitment and a corrective action plan), and has partially met the remaining three criteria (a framework to monitor progress; capacity; and demonstrated, sustained progress). Leadership commitment (met). The Secretary and Deputy Secretary of Homeland Security, the Under Secretary for Management at DHS, and other senior officials have continued to demonstrate commitment and top leadership support for addressing the department’s management challenges. They have also taken actions to institutionalize this commitment to help ensure the long-term success of the department’s efforts. For example, in May 2012, the Secretary of Homeland Security modified the delegations of authority between the Management Directorate and its counterparts at the component level to clarify and strengthen the authorities of the Under Secretary for Management across the department. In addition, in April 2014, the Secretary of Homeland Security issued a memorandum committing to improving DHS’s planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. This memorandum identified several initial areas of focus intended to build organizational capacity. Senior DHS officials have also routinely met with us over the past 5 years to discuss the department’s plans and progress in addressing this high-risk area, during which we provided specific feedback on the department’s efforts. According to these officials, and as demonstrated through their progress, the department is committed to demonstrating measurable, sustained progress in addressing this high-risk area. It will be important for DHS to maintain its current level of top leadership support and sustained commitment to ensure continued progress in successfully executing its corrective actions through completion. Corrective action plan (met). DHS established a plan for addressing this high-risk area. In a September 2010 letter to DHS, we identified and DHS agreed to achieve 31 actions and outcomes that are critical to addressing the challenges within the department’s management areas and in integrating those functions across the department. In January 2011, DHS issued its initial Integrated Strategy for High Risk Management, which included key management initiatives and related corrective action plans for addressing its management challenges and the outcomes we identified. DHS provided updates of its progress in implementing these initiatives and corrective actions in its later versions of the strategy. In March 2014, we made updates to the actions and outcomes in collaboration with DHS to reduce overlap and ensure their continued relevance and appropriateness. These updates resulted in a reduction from 31 to 30 total actions and outcomes.
DHS's strategy and approach of continuously refining actionable steps for implementing the outcomes, if implemented effectively and sustained, provide a path for DHS to be removed from GAO's high-risk list. Capacity (partially met). In May 2014, DHS reported that it had the resources needed to implement 7 of the 11 initiatives the department had under way to address the actions and outcomes, but it did not identify the resources needed for the 4 remaining initiatives. In our analysis of DHS's June 2013 update, which similarly did not identify resource needs for all initiatives, we found that this absence of complete resource information made it difficult to fully assess the extent to which DHS has the capacity to implement its initiatives. In addition, our prior work has identified specific capacity gaps that could undermine achievement of management outcomes. For example, in September 2012, we reported that 51 of 62 acquisition programs faced workforce shortfalls in program management, cost estimating, engineering, and other areas, increasing the likelihood that the programs will perform poorly in the future. Since that time, DHS has appointed component acquisition executives and made progress in filling staff positions. In April 2014, however, we reported that DHS needed to increase its cost-estimating capacity, and that the department had not approved baselines for 21 of 46 major acquisition programs. These baselines—which establish cost, schedule, and capability parameters—are necessary to accurately assess program performance. DHS needs to continue to identify resources for the remaining initiatives; determine that sufficient resources and staff are committed to initiatives; work to mitigate shortfalls and prioritize initiatives, as needed; and communicate critical resource gaps to senior leadership. Framework to monitor progress (partially met). DHS established a framework for monitoring its progress in implementing the corrective actions it identified for addressing the 30 actions and outcomes. In the June 2012 update to the Integrated Strategy for High Risk Management, DHS included, for the first time, performance measures to track its progress in implementing all of its key management initiatives. DHS continued to include performance measures in its May 2014 update. Additionally, in March 2014, the Deputy Secretary began meeting monthly with the DHS management team to discuss DHS's progress in strengthening its management functions. According to senior DHS officials, as part of these meetings, attendees discuss a report that senior DHS officials update each month, which identifies corrective actions for each outcome, as well as projected and actual completion dates. However, there are opportunities for DHS to strengthen this framework. For example, as we reported in September 2013, DHS components need to develop performance and functionality targets for assessing their proposed financial systems. This would include having an independent validation and verification program in place to ensure the modernized financial systems meet expected targets. Moving forward, DHS will need to closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments, as needed. Demonstrated, sustained progress (partially met).
Key to addressing the department's management challenges is DHS demonstrating the ability to achieve sustained progress across the 30 actions and outcomes we identified and DHS agreed were needed to address the high-risk area. These actions and outcomes include, among others, validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process, and sustaining clean audit opinions for at least 2 consecutive years on department-wide financial statements and internal controls. As illustrated by the examples below, DHS has made important progress in implementing corrective actions across its management functions, but it has not demonstrated sustainable, measurable progress in addressing key challenges that remain within these functions and in the integration of those functions. Human capital management. As we reported in December 2013, the Office of Personnel Management's 2013 Federal Employee Viewpoint Survey data showed that DHS ranked 36th of 37 federal agencies in a measure of employee job satisfaction, and that employee satisfaction had decreased 7 percentage points since 2011, which is more than the government-wide decrease of 4 percentage points over the same time period. As a result, the gap between average DHS employee satisfaction and the government-wide average widened to 7 percentage points. Accordingly, DHS has considerable work ahead to improve its employee morale. Further, according to senior DHS officials, the department has efforts under way intended to link workforce planning efforts to strategic and program-specific planning efforts to identify current and future human capital needs, including the knowledge, skills, and abilities needed for the department to meet its goals and objectives (see GAO, DHS Strategic Workforce Planning: Oversight of Departmentwide Efforts Should Be Strengthened, GAO-13-65 (Washington, D.C.: Dec. 3, 2012)). According to these officials, the department is in the process of finalizing competency gap assessments to identify potential skills gaps within its components that collectively encompass almost half of the department's workforce. These assessments focus on occupations DHS identifies as critical to its mission, including emergency management specialists and cyber-focused IT management personnel. DHS plans to analyze the results of these assessments and develop plans to address any gaps the assessments identify by the end of fiscal year 2014. This is a positive step, as identifying skills gaps could help the department to better identify current and future human capital needs and ensure the department possesses the knowledge, skills, and abilities needed to meet its goals and objectives. Given that DHS is finalizing these assessments, it is too early to assess their effectiveness. Acquisition management. DHS has mostly addressed one of the five acquisition management outcomes, partially addressed one, and initiated activities to address the remaining three. DHS has made the most progress in increasing component-level acquisition capability by, for example, establishing a component acquisition executive in each DHS component to oversee and support the programs within its portfolio. DHS has also taken steps to enhance its acquisition workforce by establishing centers of excellence for cost estimating, systems engineering, and other disciplines to promote best practices and provide technical guidance. However, DHS needs to improve its acquisition management. For example, DHS established a governance body in 2013 to review and validate acquisition programs' requirements and identify and eliminate any unintended redundancies, but it considered trade-offs only across acquisition programs within the department's cybersecurity portfolio.
DHS acknowledged that the department has no formal structure in place to consider trade-offs DHS-wide, but it anticipates chartering such a body by the end of May 2014. DHS also has initiated efforts to validate required acquisition documents in accordance with a knowledge-based acquisition process, but this remains a major challenge for the department. A knowledge-based approach provides developers with information needed to make sound investment decisions, and it would help DHS address significant challenges we have identified across its acquisition programs. DHS's acquisition policy largely reflects key acquisition management practices, but the department has not implemented it consistently. In March 2014, we reported that the Transportation Security Administration does not collect or analyze available information that could be used to enhance the effectiveness of its advanced imaging technology. In March 2014, we also found that U.S. Customs and Border Protection (CBP) did not fully follow DHS policy regarding testing for the integrated fixed towers being deployed on the Arizona border. As a result, DHS does not have complete information on how the towers will operate once they are fully deployed. Finally, DHS does not have the acquisition management tools in place to consistently demonstrate whether its major acquisition programs are on track to achieve their cost, schedule, and capability goals. About half of major programs lack an approved baseline, and 77 percent lack approved life cycle cost estimates. DHS stated in its 2014 update that it will take time to demonstrate substantive progress in this area. We have recently initiated two reviews to examine DHS's progress in these areas. In addition, the House Homeland Security Committee recently introduced a DHS acquisition reform bill that reinforces the importance of key acquisition management practices, such as establishing cost, schedule, and capability parameters, and includes requirements to better identify and address poor-performing acquisition programs, which could aid the department in addressing its acquisition management challenges. Financial management. DHS has made progress toward improving its financial management and has fully addressed one of eight high-risk financial management outcomes—ensuring its financial statements are accurate and reliable. However, a significant amount of work remains to be completed on the other seven outcomes related to DHS's financial statements, internal control over financial reporting, and modernizing financial management systems. DHS produced accurate and reliable financial statements for the first time in fiscal year 2013, in part through management's commitment to improving its financial management process. To meet another outcome, DHS needs to sustain these efforts for 2 years; as of May 2014, DHS is working toward sustaining this key achievement. DHS has also made some progress toward implementing effective internal control over financial reporting, in part by implementing a corrective action planning process aimed at addressing internal control weaknesses. For example, the department took corrective actions to reduce the material weakness in environmental and other liabilities to a significant deficiency. However, DHS needs to eliminate all material weaknesses at the department level before its financial auditor can assert that the controls are effective. For example, one of the remaining material weaknesses involves deficiencies in property, plant, and equipment. DHS plans to achieve this outcome for fiscal year 2016.
DHS also needs to effectively manage the modernization of financial management systems at the U.S. Coast Guard (USCG), U.S. Immigration and Customs Enforcement (ICE), and the Federal Emergency Management Agency (FEMA). Both USCG and ICE have made some progress toward modernizing their systems; both foresee moving to a federal shared service provider and completing their efforts in the latter part of 2016 and 2017, respectively. Because of critical stability issues with its legacy financial system, FEMA postponed its modernization efforts; although the stability issues were resolved in May 2013, FEMA has not restarted those efforts. IT management. DHS has fully addressed one of the six IT management outcomes and partially addressed the remaining five. In particular, the department has strengthened its enterprise architecture program (or blueprint) to guide IT acquisitions by, among other things, largely addressing our prior recommendations aimed at adding needed architectural depth and breadth, thus fully addressing this outcome. However, the department needs to continue to demonstrate progress in strengthening other core IT management areas. For example, while the department is taking the necessary steps to enhance its IT security program, such as finalizing its annual Information Security Performance Plan, further work will be needed for DHS to eliminate the department's current material weakness in information security. It will be important for the department to fully implement its plan, since DHS's financial statement auditor reported in December 2013 that flaws in security controls such as access controls, contingency planning, and segregation of duties were a material weakness for financial reporting purposes. In addition, while important steps have been taken to define IT investment management processes generally consistent with best practices, work is needed to demonstrate progress in implementing these processes across DHS's 13 IT investment portfolios. In July 2012, we recommended that DHS finalize the policies and procedures associated with its new tiered IT governance structure and continue to implement key processes supporting this structure. DHS agreed with these recommendations; however, as of April 2014, the department had not finalized the key IT governance directive, and the draft structure had been implemented across only 5 of the 13 investment portfolios. Fully addressing these actions would also help DHS to address key IT operations efficiency initiatives, as well as to more systematically identify other opportunities for savings. For example, as part of the Office of Management and Budget's data center consolidation initiative, we reported that DHS planned to consolidate from 101 data centers to 37 data centers by December 2015. Further, DHS officials told us that the department had achieved actual cost savings totaling about $140 million in fiscal years 2011 through 2013, and that it estimates total consolidation cost savings of approximately $650 million through fiscal year 2019. DHS has also made progress in establishing and implementing sound IT system acquisition processes, but continued efforts are needed to ensure that the department's major IT acquisition programs are applying these processes and obtaining more predictable outcomes.
In 2013, DHS's Office of the Chief Information Officer led an assessment of the department's major IT programs against industry best practices in key IT system acquisition process areas to determine capability strengths and weaknesses, and it has work under way to track programs' progress in addressing identified capability gaps in areas such as requirements management and risk analysis. While this gap analysis and approach for tracking implementation of corrective actions are important steps, DHS will need to show that these actions are resulting in better, more predictable outcomes for its major IT system acquisitions. Demonstrated progress in closing these gaps is especially important in light of our recent reports on major DHS IT programs experiencing significant challenges largely because of system acquisition process shortfalls, including DHS's major border security system modernization, known as TECS-Mod. Management integration. DHS has made substantial progress integrating its management functions, fully addressing three of the four outcomes we identified as key to the department's management integration efforts. For example, DHS issued a comprehensive plan to guide its management integration efforts—the Integrated Strategy for High Risk Management—in January 2011, and has generally improved upon this plan with each update. In addition, in April 2014, the Secretary of Homeland Security issued a memorandum committing to improving DHS's planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. To achieve the last and most significant outcome—implement actions and outcomes in each management area to develop consistent or consolidated processes and systems within and across its management functional areas—DHS needs to continue to demonstrate sustainable progress integrating its management functions within and across the department and its components and take additional actions to further and more effectively integrate the department. For example, recognizing the need to better integrate its lines of business, in February 2013, the Secretary of Homeland Security signed a policy directive establishing the principles of the Integrated Investment Life Cycle Management to guide planning, executing, and managing critical investments department-wide. DHS's June 2013 Integrated Strategy for High Risk Management identified that Integrated Investment Life Cycle Management will require significant changes to how DHS plans, executes, and manages critical investments. At that time, DHS was piloting elements of the framework to inform a portion of the fiscal year 2015 budget. DHS's May 2014 strategy update states that the department plans to receive an independent analysis of the pilots in May 2014. Given that these efforts are under way, it is too early to assess their impact. As we reported in March 2013, to more fully address the Strengthening DHS Management Functions high-risk area, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes.
In doing so, it will be important for DHS to maintain its current level of top leadership support and sustained commitment to ensure continued progress in executing its corrective actions through completion; continue to implement its plan for addressing this high-risk area and periodically report its progress to Congress and GAO; monitor the effectiveness of its efforts to establish reliable resource estimates at the department and component levels, address and work to mitigate any resource gaps, and prioritize initiatives as needed to ensure it has the capacity to implement and sustain its corrective actions; closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments, as needed; and make continued progress in addressing the 30 actions and outcomes—for the majority of which significant work remains—and demonstrate that systems, personnel, and policies are in place to ensure that progress can be sustained over time. We will continue to monitor DHS’s efforts in this high-risk area to determine if the actions and outcomes are achieved and sustained. FEMA has made progress in all of the areas required for removal of the NFIP from the high-risk list, but needs to initiate or complete additional actions; also, recent legislation has created challenges for FEMA in addressing the financial exposure created by the program. FEMA leadership has displayed a commitment to addressing these challenges and has made progress in a number of areas, such as financial reporting and continuity planning. While FEMA has plans for addressing and tracking progress on our specific recommendations, it has yet to address many of them. For example, FEMA has not completed actions in certain areas, such as modernizing its claims and policy management system and overseeing compensation of insurers that sell NFIP policies. Completing such actions will likely help improve the financial stability and operations of the program. Table 2 summarizes DHS’s progress in addressing the NFIP high-risk area. Leadership commitment (partially met). FEMA officials responsible for the NFIP have shown a commitment to taking a number of actions to implement our recommendations, which are designed to improve both the financial stability and operations of the program. For example, they have indicated a commitment to implementing our recommendations and have been proactive in clarifying and taking the actions needed to do so. In addition, FEMA officials have met with us to discuss outstanding recommendations, the actions they have taken to address them, and additional actions they could take. Further, a DHS official said that FEMA holds regular meetings to discuss the status of open recommendations. Recent legislative changes, however, have presented challenges for FEMA in addressing the financial exposure created by the NFIP. For example, in July 2012, the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) was enacted, containing provisions to help strengthen the future financial solvency and administrative efficiency of NFIP, including phasing out almost all discounted insurance premiums (commonly referred to as subsidized premiums). In July 2013, we reported that FEMA was starting to implement some of the required changes. However, on March 21, 2014, the Homeowner Flood Insurance Affordability Act of 2014 (2014 Act) was enacted, reinstating certain premium subsidies and restoring grandfathered rates removed by the Biggert-Waters Act. 
The 2014 Act addresses affordability concerns for certain property owners, but may also increase NFIP's long-term financial burden on taxpayers. Corrective action plan (partially met). While FEMA developed corrective action plans for implementing the recommendations in individual GAO reports, it has not developed a comprehensive plan to address the issues that have placed the NFIP on GAO's high-risk list. While addressing our recommendations is part of such a plan, a comprehensive plan also defines the root causes, identifies effective solutions, and provides for substantially completing corrective measures in the near term. According to a DHS official, the individual action plans collectively represent FEMA's plan for addressing these issues, as the recommendations cover steps needed to improve the program's financial stability as well as its administration. The official added that DHS has developed more comprehensive plans for other high-risk areas, which have been helpful, and that DHS could consider doing so for the NFIP, but noted that such plans require considerable work. Such a plan could help FEMA ensure that all important issues, and all aspects of those issues, are addressed. For example, while our recommendations regarding the NFIP's financial stability have focused on the extent of subsidized rates and the rate-setting process, financial stability could include other important areas, such as debt management. As of December 2013, FEMA owed the Treasury $24 billion—primarily to pay claims associated with Superstorm Sandy (2012) and Hurricane Katrina (2005)—and had not made a principal payment since 2010. Capacity (partially met). FEMA faces several challenges in improving the program's financial stability and operations. First, recent legislative changes permit certain premium subsidies and restore grandfathered rates removed by the Biggert-Waters Act. These provisions, along with others, may weaken the potential for improved financial soundness of the NFIP. Second, while FEMA is establishing a reserve fund as required by the Biggert-Waters Act, it is unlikely to initially meet the act's annual targets for building up the reserve, partly because of statutory limitations on annual premium increases. Third, while FEMA has begun taking some actions to improve its administration of the NFIP, it is unclear how the resources required to implement both the Biggert-Waters Act and the 2014 Act will affect its ability to continue and complete these efforts. For example, the acts require FEMA to complete multiple studies and take a number of actions within the next several years, which will require resources FEMA would normally have committed to other efforts. Monitoring progress (partially met). FEMA has a process in place to monitor progress in taking actions to implement our recommendations related to the NFIP. For example, the status of efforts to address the recommendations is regularly discussed both within the Flood Insurance and Mitigation Administration, which administers the NFIP, and at the DHS level, according to a DHS official. However, FEMA does not have a specific process for independently validating the effectiveness or sustainability of those actions. Instead, according to a DHS official, once a recommendation related to the NFIP is implemented, the effects of the actions taken to do so are not tracked separately, but are evaluated as part of regular reviews of the effectiveness of the entire program.
Broader monitoring of the effectiveness and sustainability of its actions would help ensure that appropriate corrective actions are being taken. Demonstrated, sustained progress (partially met). FEMA has begun to take actions to improve the program's financial stability, such as initiating actions to improve the accuracy of full-risk rates. However, these efforts are not complete, and FEMA does not have some information, such as the number and location of existing grandfathered properties and information necessary to appropriately revise premium rates for previously subsidized properties. Similarly, FEMA has taken a number of actions to improve areas of the program's operations, such as financial reporting and continuity planning. However, some important actions, such as modernizing its policy and claims management system and ensuring the reasonableness of compensation to insurance companies that sell and service most NFIP policies, remain to be completed. Completing these actions would help FEMA address the financial and operational issues facing the NFIP. Progress has been made in the government-wide high-risk areas in which DHS plays a critical role, but significant work remains. As we reported in our February 2013 high-risk update, the White House and federal agencies, including DHS, have taken a variety of actions that were intended to enhance federal and critical infrastructure cybersecurity. For example, the government issued numerous strategy-related documents over the past decade (see, among others, The White House, Presidential Policy Directive/PPD-21, Critical Infrastructure Security and Resilience (Feb. 12, 2013)) and established agency performance goals and a mechanism to monitor performance in three cross-agency priority areas of strong authentication, Trusted Internet Connections, and continuous monitoring. Nevertheless, significant weaknesses remain; for example, most of the 24 major federal agencies reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over financial reporting in fiscal year 2013. In addition to having responsibilities for securing its own information systems and data, DHS plays a pivotal role in government-wide cybersecurity efforts. In particular, in July 2010, the Director of the Office of Management and Budget (OMB) and the White House Cybersecurity Coordinator issued a joint memorandum that transferred several key OMB responsibilities under the Federal Information Security Management Act of 2002 (FISMA) to DHS (see Pub. L. No. 107-347, Dec. 17, 2002; 44 U.S.C. 3541, et seq.). Specifically, DHS is to exercise primary responsibility within the executive branch for overseeing and assisting with the operational aspects of cybersecurity for federal systems that fall within the scope of FISMA. We agree that DHS should play a role in the operational aspects of federal cybersecurity. We suggested in February 2013 that Congress consider legislation that would clarify roles and responsibilities for implementing and overseeing federal information security programs and for protecting the nation's critical cyber assets (see, most recently, Department of Homeland Security, NIPP 2013: Partnering for Critical Infrastructure Security and Resilience). Congress is also considering legislation intended to improve the cybersecurity posture of the federal government and the nation. For example, H.R. 3696, the National Cybersecurity and Critical Infrastructure Protection Act of 2014, would address DHS's role and responsibilities in protecting federal civilian information systems and critical infrastructure from cyber threats. In carrying out its role in overseeing and assisting federal agencies in implementing information security requirements, DHS has begun performing several activities.
These include conducting "CyberStat" reviews, which are intended to hold agencies accountable and offer assistance in improving their information security posture; holding interviews with agency chief information officers and chief information security officers on security status and issues; establishing a program to enable federal agencies to expand their continuous diagnostics and mitigation capabilities; and refining performance metrics that agencies use for FISMA reporting purposes. In February 2014, as part of our continued dialogue with DHS regarding progress and what remains to be accomplished in this high-risk area, we identified and communicated to DHS actions critical to its efforts to oversee and assist agencies in improving information security practices. These included the following: Expand CyberStat reviews to all major federal agencies. DHS has conducted CyberStat sessions with several of the 24 major federal agencies. According to DHS officials, the current approach focuses on providing CyberStat reviews for the lowest-performing agencies. However, expanding the reviews to include all 24 agencies could lead to an improved security posture. Enhance FISMA reporting metrics. In September 2013, we reported that the metrics issued by DHS for gauging the implementation of priority security goals and other important controls did not address key security activities and did not always include performance targets. We recommended that OMB and DHS collaborate to develop improved metrics, and the agencies stated that they plan to implement the recommendation by September 2014. Develop a strategic implementation plan. DHS's Office of Inspector General reported in June 2013 that the department had not developed a strategic implementation plan describing its cybersecurity responsibilities and a clear plan of action for fulfilling them. According to DHS officials, the department has developed this plan and is awaiting closure of the inspector general's recommendation. We will review the status of this plan as part of our ongoing review of this high-risk area. Continue to develop continuous diagnostics and mitigation capabilities and assist agencies in developing and acquiring them. This effort is intended to protect networks and enhance an agency's ability to see and counteract day-to-day cyber threats. The successful implementation of these actions should result in outcomes such as enhanced DHS oversight and assistance through CyberStat, improved FISMA metrics, improved situational awareness, and enhanced capabilities for assisting agencies in responding to cyber incidents. In conjunction with needed actions by federal agencies, this could contribute to improved information security government-wide. DHS, in conjunction with other executive branch entities, has taken steps to enhance the protection of cyber critical infrastructure. For example, according to DHS, it has expanded the capacity of its National Cybersecurity and Communications Integration Center to facilitate coordination and information sharing among federal and private sector stakeholders; established the Information Sharing Working Group and a mechanism for creating cyber threat reports that can be shared with private sector partners; and set up a voluntary program to encourage critical infrastructure owners and operators to use the cybersecurity framework developed by the National Institute of Standards and Technology (NIST), as required by Executive Order 13636.
In February 2014, we identified and communicated to DHS actions critical to addressing cyber critical infrastructure protection, including the following: expand the Enhanced Cybersecurity Services program, which is intended to provide classified cyber threat and technical information to eligible critical infrastructure entities, to all critical infrastructure sectors as required by Executive Order 13636; enhance coordination efforts with private sector entities to facilitate improvements to the cybersecurity of critical infrastructure; and identify a set of incentives designed to promote implementation of the NIST cybersecurity framework. Completing these efforts could assist in achieving a flow of timely and actionable cybersecurity threat and incident information among federal stakeholders and critical infrastructure entities, adoption of the cybersecurity framework by infrastructure owners and operators, and effective implementation of security controls over a significant portion of critical cyber assets. As we reported in March 2014, more needs to be done to accelerate the progress made in bolstering the cybersecurity posture of the nation and federal government. The administration and executive branch agencies need to implement the hundreds of recommendations made by GAO and agency inspectors general to address cyber challenges, resolve known deficiencies, and fully implement effective information security programs. Until then, a broad array of federal assets and operations will remain at risk of fraud, misuse, and disruption, and the nation's most critical federal and private sector infrastructure systems will remain at increased risk of attack from our adversaries. DHS has made significant progress in enhancing the sharing of information on terrorist threats and in supporting government-wide efforts to improve such sharing. Our work on assessing the high-risk area on sharing terrorism-related information has primarily focused on federal efforts to implement the Information Sharing Environment, as called for in the Intelligence Reform and Terrorism Prevention Act of 2004. The Information Sharing Environment is a government-wide effort to improve the sharing of terrorism-related information across federal agencies and with state, local, territorial, tribal, private sector, and foreign partners. When assessing progress, we review the activities of the Program Manager for the Information Sharing Environment—a position established under the 2004 Act with responsibility for information sharing across the government—as well as the efforts of DHS and other key entities, including the Departments of Justice, State, and Defense, and the Office of the Director of National Intelligence. Accordingly, DHS itself is not on the high-risk list, nor can DHS's efforts fully resolve the high-risk issue. Nevertheless, DHS plays a critical role in government-wide sharing given its homeland security missions and responsibilities. Overall, the federal government has made progress in addressing the terrorism-related information-sharing high-risk area. As we reported in our February 2013 update, the federal government is committed to establishing effective mechanisms for managing and sharing terrorism-related information, and has developed a national strategy, implementation plans, and methods to assess progress and results.
While progress has been made, the government needs to take additional action to mitigate the potential risks from gaps in sharing information, such as ensuring that it is leveraging individual agency initiatives to benefit all partners and continuing work to develop metrics that measure the homeland security results achieved from improved sharing. We are currently conducting work with the Program Manager and key entities to determine their progress in meeting the criteria since the 2013 high-risk report. Separately, in response to requests from this committee and other congressional committees, we have assessed or are currently assessing DHS's specific efforts to enhance the sharing of terrorism-related information. As discussed below, this work includes DHS efforts to (1) support state and major urban area fusion centers, (2) coordinate with other federal agencies that support task forces and other centers in the field that share information on threats as part of their activities, (3) achieve its own information-sharing mission, and (4) share information related to the department's intelligence analysis efforts. Fusion centers. A major focus of the high-risk area and Information Sharing Environment has been to improve the sharing of terrorism-related information among the federal government and state and local security partners, which is done in part through state and major urban area fusion centers. DHS is the federal lead for supporting these centers and has made significant strides. For example, DHS has deployed personnel to centers to serve as liaisons to the department and help centers develop capabilities (such as the ability to analyze and disseminate information), provided grant funding to support center activities, provided access to networks disseminating classified and unclassified information, and helped centers identify and share reports on terrorism-related suspicious activities. DHS has been very responsive to a recommendation in our 2010 report that calls for establishing metrics to determine what return the federal government is getting for its investments in centers. We have an ongoing review of DHS's efforts to assess center capabilities, manage federal grant funding, and determine the contributions centers make to enhance homeland security, and we expect to issue a report later this year. Field-based entities that share information. DHS is also taking steps to measure the extent to which fusion centers are coordinating and sharing information with other field-based task forces and centers—such as Federal Bureau of Investigation Joint Terrorism Task Forces—and to assess opportunities to improve coordination. In April 2013, we reported that fusion centers and other field-based entities had overlapping activities, but the agencies that support them had not held the entities accountable for coordinating and collaborating or assessed opportunities to enhance coordination, and we recommended that the agencies develop mechanisms to do so. In response, DHS began tracking collaboration mechanisms, such as which fusion centers have representatives from the other entities on their executive boards, are colocated with other entities, and issue products jointly developed with other entities. DHS's efforts can help avoid unnecessary overlap in activities, which in turn can help entities leverage scarce resources. To fully address our recommendation, however, the other federal agencies must take steps to better hold their respective field entities accountable for such collaboration.
In addition, these agencies must work with DHS to collectively assess nationwide any opportunities for field entities to further implement collaboration mechanisms. DHS information-sharing mission. In September 2012, we reported that DHS had made progress in achieving its own information-sharing mission, but could take additional steps to improve its efforts. Specifically, DHS had demonstrated leadership commitment by establishing a governance board to serve as the decision-making body for DHS information-sharing issues. The board has enhanced collaboration among DHS components and identified a list of key information-sharing initiatives to pursue, among other things. We found, however, that five of DHS's top eight priority initiatives faced funding shortfalls. We also reported that DHS had taken steps to track its information-sharing efforts, but had not fully assessed how such efforts had improved sharing. We recommended that DHS (1) revise its policies and guidance to include processes for identifying information-sharing gaps, analyzing the root causes of those gaps, and identifying, assessing, and mitigating the risks of removing incomplete initiatives from its list; and (2) better track and assess the progress of key initiatives and the department's overall progress in achieving its information-sharing vision. DHS has since taken actions—such as issuing revised guidance and developing new performance measures—to address all of these recommendations. Sharing intelligence analysis. We are finalizing a report on DHS's intelligence analysis capabilities, which are a key part of the department's efforts in securing the nation. Within DHS, the Office of Intelligence and Analysis has a lead role for intelligence analysis, but other operational components—such as CBP and ICE—also perform their own analysis activities and are part of the DHS Intelligence Enterprise. Our report, expected to be issued later this month, will address (1) the extent to which the intelligence analysis activities of the enterprise are integrated to support departmental strategic intelligence priorities, and are unnecessarily overlapping or duplicative; (2) the extent to which Office of Intelligence and Analysis customers report that they find products and other analytic services to be useful, and what steps, if any, the office has taken to address any concerns customers report; and (3) challenges the Office of Intelligence and Analysis has faced in maintaining a skilled analytic workforce and steps it has taken to address these challenges. We are planning to make recommendations to help DHS enhance its intelligence analysis capabilities and related sharing of this information. Overall, DHS's continued progress in enhancing the sharing of terrorism-related information and responding to our findings and recommendations will be critical to supporting government-wide sharing and related efforts to secure the homeland. Chairman McCaul, Ranking Member Thompson, and members of the committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact George A. Scott at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Since 1990, GAO has regularly reported on government operations identified as high risk because of their greater vulnerability to fraud, waste, abuse, and mismanagement, or the need for transformation to address economy, efficiency, or effectiveness challenges. DHS has sole or critical responsibility for four GAO high-risk areas—(1) strengthening its management functions, (2) the National Flood Insurance Program (NFIP), (3) information security and cyber critical infrastructure protection, and (4) terrorism-related information sharing. This statement addresses DHS's progress and work remaining in addressing high-risk areas for which (1) it has sole responsibility and (2) it has critical, but shared responsibility. This statement is based on GAO's February 2013 high-risk update, reports and testimonies issued from March 2013 through April 2014, and analyses from GAO's ongoing assessment of DHS's efforts since February 2013 to address its high-risk designations. For these analyses, GAO examined DHS documents and interviewed DHS officials. The Department of Homeland Security (DHS) has made progress in addressing high-risk areas for which it has sole responsibility, but significant work remains. Strengthening management functions. In this area, DHS has met two and partially met three of GAO's five criteria for removing areas from the high-risk list. Specifically, DHS has met the criteria for having (1) demonstrated leadership commitment, and (2) a corrective action plan for addressing its management risks. However, it has partially met GAO's criteria for (1) capacity (having sufficient resources); (2) having a framework to monitor progress; and (3) demonstrated, sustained progress. DHS has made important progress, but to more fully address GAO's high-risk designation, DHS needs to show measurable, sustainable progress in implementing key management initiatives. For example: Human capital management. DHS has developed and demonstrated progress in implementing a strategic human capital plan. However, DHS needs to improve other aspects of its human capital management. As GAO reported in December 2013, the Office of Personnel Management's 2013 Federal Employee Viewpoint Survey data showed that DHS ranked 36th of 37 federal agencies in a measure of employee job satisfaction. In addition, employee satisfaction had decreased 7 percentage points since 2011, which is more than the government-wide decrease. Accordingly, DHS has considerable work ahead to improve its employee morale. Further, DHS is finalizing its analysis of skill gaps in key portions of its workforce, including emergency management specialists and cyber-focused IT management personnel. Acquisition management. DHS has made progress in initiating efforts to validate required acquisition documents. However, about half of DHS major programs lack an approved baseline, 77 percent lack approved life cycle cost estimates, and the department has not implemented its acquisition policy consistently. In March 2014, GAO reported that the Transportation Security Administration does not collect or analyze available information that could be used to enhance the effectiveness of its advanced imaging technology. In March 2014, GAO also found that U.S. Customs and Border Protection (CBP) did not fully follow DHS policy regarding testing for the integrated fixed towers being deployed on the Arizona border. As a result, DHS does not have complete information on how the towers will operate once they are fully deployed. Financial management.
DHS has made progress toward improving its financial management, but a significant amount of work remains to be completed. For example, DHS needs to eliminate all material weaknesses at the department level in areas such as property, plant, and equipment before its financial auditor can assert that the controls are effective. DHS also needs to effectively manage the modernization of financial management systems at the U.S. Coast Guard, U.S. Immigration and Customs Enforcement, and the Federal Emergency Management Agency (FEMA). Information technology (IT) management. While important steps have been taken to define IT investment management processes, work is needed to demonstrate progress in implementing these processes across DHS's 13 IT investment portfolios. In July 2012, GAO recommended that DHS finalize the policies and procedures associated with its new tiered IT governance structure and continue to implement key processes supporting this structure. DHS agreed with these recommendations; however, as of April 2014, the department had not finalized the key IT governance directive, and the draft structure had been implemented across only 5 of the 13 investment portfolios. National Flood Insurance Program (NFIP). DHS's FEMA, which manages the NFIP, has partially met the five criteria for removal of the NFIP from the high-risk list, but needs to initiate or complete additional actions. For example, FEMA has not completed actions in certain areas, such as modernizing its claims and policy management system and overseeing compensation of insurers that sell NFIP policies. In addition, FEMA is unlikely to generate sufficient revenue to cover future catastrophic losses or repay billions of dollars borrowed from the Department of the Treasury. As of December 2013, FEMA owed the Treasury $24 billion—primarily to pay claims associated with Superstorm Sandy (2012) and Hurricane Katrina (2005)—and had not made a principal payment since 2010. Progress has been made in the following government-wide high-risk areas in which DHS plays a critical role, but significant work remains. Information security and cyber critical infrastructure protection. Federal agencies, including DHS, have taken a variety of actions that were intended to enhance federal and critical infrastructure cybersecurity, but more efforts are needed. DHS needs to take several actions to better oversee and assist agencies in improving information security practices. For instance, DHS should continue to assist agencies in developing and acquiring continuous diagnostic and mitigation capabilities to protect networks and counteract day-to-day cyber threats. In addition, DHS has taken steps to enhance the protection of cyber critical infrastructure but could do more to enhance coordination with the private sector. Terrorism-related information sharing. The federal government faces significant challenges in sharing terrorism-related information. However, DHS has made significant progress in enhancing the sharing of this information. For example, DHS is taking steps to measure the extent to which fusion centers—collaborative efforts within states that investigate and respond to criminal and terrorist activity—are coordinating with other field-based task forces and centers to share terrorism-related information, and assessing opportunities to improve coordination and information sharing.
The federal government has important work ahead to address the high-risk issue, such as developing metrics that measure the homeland security results achieved from improved information sharing. This testimony contains no new recommendations. GAO has made over 2,100 recommendations to DHS since its establishment in 2003 to strengthen its management and integration efforts, among other things. DHS has implemented more than 65 percent of these recommendations and has actions under way to address others.
In 1998, following a presidential call for VA and DOD to start developing a “comprehensive, life-long medical record for each service member,” the two departments began a joint course of action aimed at achieving the capability to share patient health information for active duty military personnel and veterans. Their first initiative, undertaken in that year, was the Government Computer-Based Patient Record (GCPR) project, whose goal was an electronic interface that would allow physicians and other authorized users at VA and DOD health facilities to access data from any of the other agency’s health information systems. The interface was expected to compile requested patient information in a virtual record that could be displayed on a user’s computer screen. In our reviews of the GCPR project, we determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. In April 2001 and in June 2002, we made recommendations to help strengthen the management and oversight of the project. In 2001, we recommended that the participating agencies (1) designate a lead entity with final decision-making authority and establish a clear line of authority for the GCPR project and (2) create comprehensive and coordinated plans that included an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies could share comprehensive, meaningful, accurate, and secure patient health care data. In 2002, we recommended that the participating agencies revise the original goals and objectives of the project to align with their current strategy, commit the executive support necessary to adequately manage the project, and ensure that it followed sound project management principles. VA and DOD took specific measures in response to our recommendations for enhancing overall management and accountability of the project. By July 2002, VA and DOD had revised their strategy and had made progress toward being able to electronically share patient health data. The two departments had refocused the project and named it the Federal Health Information Exchange (FHIE) program and, consistent with our prior recommendation, had finalized a memorandum of agreement designating VA as the lead entity for implementing the program. This agreement also established FHIE as a joint activity that would allow the exchange of health care information in two phases. ● The first phase, completed in mid-July 2002, enabled the one-way transfer of data from DOD’s existing health information system (the Composite Health Care System, CHCS) to a separate database that VA clinicians could access. ● A second phase, finalized in March 2004, completed VA’s and DOD’s efforts to add to the base of patient health information available to VA clinicians via this one-way sharing capability. According to the December 2004 VA/DOD Joint Executive Council Annual Report, FHIE was fully operational, and VA providers at all VA medical centers and clinics nationwide had access to data on separated service members. According to the report, the FHIE data repository at that time contained historical clinical health data on 2.3 million unique patients from 1989 on, and the repository made a significant contribution to the delivery and continuity of care and adjudication of disability claims of separated service members as they transitioned to veteran status. 
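The one-way transfer underlying FHIE can be illustrated with a short sketch. The Python below models a batch feed: records for separated service members are extracted from a source store standing in for CHCS and loaded into a separate repository, standing in for the FHIE database, that receiving clinicians query. The schema, table names, and data are hypothetical; the sketch shows only the one-directional pattern described above, not the FHIE implementation itself.

    import sqlite3

    # Hypothetical source system (standing in for CHCS). The 'separated' flag
    # marks records eligible for transfer, mirroring FHIE's focus on
    # separated service members.
    source = sqlite3.connect(":memory:")
    source.execute("CREATE TABLE health_records "
                   "(patient_id TEXT, record_type TEXT, payload TEXT, separated INTEGER)")
    source.execute("INSERT INTO health_records VALUES ('P001', 'pharmacy', 'example entry', 1)")
    source.execute("INSERT INTO health_records VALUES ('P002', 'lab', 'example entry', 0)")

    # Hypothetical repository (standing in for the FHIE database) that the
    # receiving department's clinicians can query.
    repository = sqlite3.connect(":memory:")
    repository.execute("CREATE TABLE fhie_repository "
                       "(patient_id TEXT, record_type TEXT, payload TEXT)")

    # The batch feed: extract only separated members' records and load them.
    # Data flows in one direction; nothing is written back to the source.
    rows = source.execute("SELECT patient_id, record_type, payload "
                          "FROM health_records WHERE separated = 1").fetchall()
    repository.executemany("INSERT INTO fhie_repository VALUES (?, ?, ?)", rows)
    repository.commit()

    print(repository.execute("SELECT * FROM fhie_repository").fetchall())
    # [('P001', 'pharmacy', 'example entry')]

The defining property of this pattern, as with FHIE, is that data moves in one direction on a batch schedule rather than on demand, which supports lookup by the receiving department but not real-time, two-way sharing.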
The departments reported total GCPR/FHIE costs of about $85 million through fiscal year 2003. In addition, officials stated that in December 2004, the departments began to use the FHIE framework to transfer pre- and postdeployment health assessment data from DOD to VA. According to these officials, VA has now received about 400,000 of these records. However, not all DOD medical information is captured in CHCS. For example, according to DOD officials, as of September 6, 2005, 1.7 million patient stay records were stored in the Clinical Information System (a commercial product customized for DOD). In addition, many Air Force facilities use a system called the Integrated Clinical Database for their medical information. The revised DOD/VA strategy also envisioned achieving a longer-term, two-way exchange of health information between DOD and VA, which may also address systems outside of CHCS. Known as HealthePeople (Federal), this initiative is premised on the departments' development of a common health information architecture comprising standardized data, communications, security, and high-performance health information systems. The joint effort is expected to result in the secured sharing of health data between the new systems that each department is currently developing and beginning to implement—VA's HealtheVet VistA and DOD's CHCS II. ● DOD began developing CHCS II in 1997 and had completed a key component for the planned electronic interface—its Clinical Data Repository. When we last reported in June 2004, the department expected to complete deployment of all of its major system capabilities by September 2008. DOD reported expenditures of about $600 million for the system through fiscal year 2004. ● VA began work on HealtheVet VistA and its associated Health Data Repository in 2001 and expected to complete all six initiatives comprising this system in 2012. VA reported spending about $270 million on initiatives that comprise HealtheVet VistA through fiscal year 2004. Under the HealthePeople (Federal) initiative, VA and DOD envision that, on entering military service, a health record for the service member would be created and stored in DOD's Clinical Data Repository. The record would be updated as the service member receives medical care. When the individual separated from active duty and, if eligible, sought medical care at a VA facility, VA would then create a medical record for the individual, which would be stored in its Health Data Repository. On viewing the medical record, the VA clinician would be alerted and provided with access to the individual's clinical information residing in DOD's repository. In the same manner, when a veteran sought medical care at a military treatment facility, the attending DOD clinician would be alerted and provided with access to the health information in VA's repository. According to the departments, this planned approach would make virtual medical records displaying all available patient health information from the two repositories accessible to both departments' clinicians. Achieving this goal requires that the departments be able to exchange computable health information between the data repositories for their future health systems: that is, between VA's Health Data Repository (a component of HealtheVet VistA) and DOD's Clinical Data Repository (a component of CHCS II).
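The virtual medical record concept lends itself to a simple sketch. In the hypothetical Python below, opening a patient's chart queries both the local and the partner repository, alerts the clinician when partner data exists, and merges the entries into one chronological view. The repository structures, identifiers, and entries are invented for illustration; the actual repository and interface designs are far more involved.

    # Hypothetical repositories keyed by patient identifier, standing in for
    # VA's Health Data Repository and DOD's Clinical Data Repository.
    va_hdr = {"P001": [{"date": "2005-03-01", "source": "VA", "note": "follow-up visit"}]}
    dod_cdr = {"P001": [{"date": "2004-11-15", "source": "DOD", "note": "separation physical"}]}

    def virtual_record(patient_id, local_repo, partner_repo):
        local_entries = local_repo.get(patient_id, [])
        partner_entries = partner_repo.get(patient_id, [])
        if partner_entries:
            # Alert the clinician that the partner repository holds data,
            # echoing the cross-repository alerting described above.
            print(f"Alert: partner repository holds {len(partner_entries)} entries")
        # Merge both departments' entries into a single chronological view.
        return sorted(local_entries + partner_entries, key=lambda e: e["date"])

    # A VA clinician opening patient P001's chart sees entries from both sides.
    for entry in virtual_record("P001", va_hdr, dod_cdr):
        print(entry["date"], entry["source"], entry["note"])

Either department can play the local role: swapping the repository arguments serves a DOD clinician viewing a veteran's VA data, which is what makes the envisioned record two-way.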
In March 2004, the departments began an effort to develop an interface linking these two repositories, known as CHDR (a name derived from the abbreviations for DOD's Clinical Data Repository—CDR—and VA's Health Data Repository—HDR). According to the departments, they planned to be able to exchange selected health information through CHDR by October 2005. Developing the two repositories, populating them with data, and linking them through the CHDR interface would be important steps toward the two departments' long-term goals as envisioned in HealthePeople (Federal). Achieving these goals would then depend on completing the development and deployment of the associated health information systems—HealtheVet VistA and CHCS II. In our most recent review of the CHDR program, issued in June 2004, we reported that the efforts of DOD and VA in this area demonstrated a number of management weaknesses. Among these were the lack of a well-defined architecture for describing the interface for a common health information exchange; an established project management lead entity and structure to guide the investment in the interface and its implementation; and a project management plan defining the technical and managerial processes necessary to satisfy project requirements. With these critical components missing, VA and DOD increased the risk that they would not achieve their goals. Accordingly, we recommended that the departments ● develop an architecture for the electronic interface between their health systems that includes system requirements, design specifications, and software descriptions; ● select a lead entity with final decision-making authority for the interface; ● establish a project management structure to provide day-to-day guidance of and accountability for their investments in and implementation of the interface capability; and ● create and implement a comprehensive and coordinated project management plan for the electronic interface that defines the technical and managerial processes necessary to satisfy project requirements and includes (1) the authority and responsibility of each organizational unit; (2) a work breakdown structure for all of the tasks to be performed in developing, testing, and implementing the software, along with schedules associated with the tasks; and (3) a security policy. Besides pursuing their long-term goals for future systems through the HealthePeople (Federal) strategy, the departments are working on two demonstration projects that focus on exchanging information between existing systems: (1) Bidirectional Health Information Exchange, a project to exchange health information on shared patients, and (2) Laboratory Data Sharing Interface, an application used to transfer laboratory work orders and results. These demonstration projects were planned in response to provisions of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, which mandated that VA and DOD conduct demonstration projects that included medical information and information technology systems to be used as a test for evaluating the feasibility, advantages, and disadvantages of measures and programs designed to improve the sharing and coordination of health care and health care resources between the departments. Figure 1 is a time line showing initiation points for the VA and DOD efforts discussed here, including strategies, major programs, and the recent demonstration projects.
VA and DOD have begun to implement applications developed under two demonstration projects that focus on the exchange of electronic medical information. The first—the Bidirectional Health Information Exchange—has been implemented at five VA/DOD locations, and the second—Laboratory Data Sharing Interface—has been implemented at six VA/DOD locations.

According to a VA/DOD annual report and program officials, Bidirectional Health Information Exchange (BHIE) is an interim step in the departments' overall strategy to create a two-way exchange of electronic medical records. BHIE builds on the architecture and framework of FHIE, the current application used to transfer health data on separated service members from DOD to VA. As discussed earlier, FHIE provides an interface between VA's and DOD's current health information systems that allows one-way transfers only, which do not occur in real time: VA clinicians do not have access to transferred information until about 6 weeks after separation. In contrast, BHIE focuses on the two-way, near-real-time exchange of information (text only) on shared patients (such as those at sites jointly occupied by VA and DOD facilities). This application exchanges data between VA's VistA system and DOD's CHCS system (and CHCS II where implemented). To date, the departments reported having spent $2.6 million on BHIE.

The primary benefit of BHIE is the near-real-time access to patient medical information for both VA and DOD, which is not available through FHIE. During a site visit to a VA and DOD location in Puget Sound, we viewed a demonstration of this capability and were told by a VA clinician that the near-real-time access to medical information has been very beneficial in treating shared patients.

As of August 2005, BHIE had been tested and deployed at VA and DOD facilities in Puget Sound, Washington, and El Paso, Texas, where the exchange of demographic, outpatient pharmacy, radiology, laboratory, and allergy data (text only) has been achieved. The application has also been deployed to three other locations this month (see table 1). According to the program manager, a plan to export BHIE to additional locations has been approved. The additional locations were selected based on a number of factors, including the number and types of VA and DOD medical facilities in the area, FHIE usage, and the retiree population at the locations. The program manager stated that implementation of BHIE requires training of staff from both departments. In addition, implementation at DOD facilities requires installation of a server; implementation at VA facilities requires installation of a software patch (downloaded from a VA computer center), but no additional equipment. As shown in table 1, five additional implementations are scheduled for the first quarter of fiscal year 2006. Additionally, because DOD stores electronic medical information in systems other than CHCS (such as the Clinical Information System and the Integrated Clinical Database), work is currently under way to give BHIE the ability to exchange information with those systems. The Puget Sound demonstration site is also working on sharing consultation reports stored in the VA and DOD systems.

The Laboratory Data Sharing Interface (LDSI) initiative enables the two departments to share laboratory resources. Through LDSI, a VA provider can use VA's health information system to write an order for laboratory tests, and that order is electronically transferred to DOD, which performs the test.
The results of the laboratory tests are electronically transferred back to VA and included in the patient's medical record. Similarly, a DOD provider can choose to use a VA lab for testing and receive the results electronically. Once LDSI is fully implemented at a facility, the only nonautomated action in performing laboratory tests is the transport of the specimens.

Among the benefits of LDSI are increased speed in receiving laboratory results and decreased errors from multiple entry of orders. However, according to the LDSI project manager in San Antonio, a primary benefit of the project will be the time saved by eliminating the need to rekey orders at processing labs to input the information into the laboratories' systems. Additionally, the San Antonio VA facility will no longer have to contract out some of its laboratory work to private companies, but can instead use the DOD laboratory. To date, the departments reported having spent about $3.3 million on LDSI.

An early version of what is now LDSI was originally tested and implemented at a joint VA and DOD medical facility in Hawaii in May 2003. The demonstration project built on this application and enhanced it; the resulting application was tested in San Antonio and El Paso. It has now been deployed to six sites in all. According to the departments, a plan to export LDSI to additional locations has been approved. Table 2 shows the locations at which it has been or is to be implemented.

Besides the near-term initiatives just discussed, VA and DOD continue their efforts on the longer term goal: to achieve a virtual medical record based on the two-way exchange of computable data between the health information systems that each is currently developing. The cornerstone for this exchange is CHDR, the planned electronic interface between the data repositories for the new systems.

The departments have taken important actions on the CHDR initiative. In September 2004, they successfully completed Phase I of CHDR by demonstrating the two-way exchange of pharmacy information with a prototype in a controlled laboratory environment. According to department officials, the pharmacy prototype gave each department invaluable insight into the other's data repository systems and architecture and into the work necessary to support the exchange of computable information. These officials stated that lessons learned from the development of the prototype were documented and are being applied to Phase II of CHDR, the production phase, which is to implement the two-way exchange of patient health records between the departments' data repositories. Further, the same DOD and VA teams that developed the prototype are now developing the production version.

In addition, the departments developed an architecture for the CHDR electronic interface, as we recommended in June 2004. The architecture for CHDR includes major elements required in a complete architecture. For example, it defines system requirements and allows these to be traced to the functional requirements; it includes the design and control specifications for the interface design; and it includes design descriptions for the software. Also in response to our recommendations, the departments have established project accountability and implemented a joint project management structure. Specifically, the Health Executive Council has been established as the lead entity for the project.
The joint project management structure consists of a Program Manager from VA and a Deputy Program Manager from DOD to provide day-to-day guidance for this initiative. Additionally, the Health Executive Council established the DOD/VA Information Management/Information Technology Working Group and the DOD/VA Health Architecture Interagency Group to provide programmatic oversight and to facilitate interagency collaboration on sharing initiatives between DOD and VA.

To build on these actions and successfully carry out the CHDR initiative, however, the departments still have a number of challenges to overcome. The success of CHDR will depend on the departments' instituting a highly disciplined approach to the project's management. Industry best practices and information technology project management principles stress the importance of accountability and sound planning for any project, particularly an interagency effort of the magnitude and complexity of this one. We recommended in 2004 that the departments develop a clearly defined project management plan that describes the technical and managerial processes necessary to satisfy project requirements and includes (1) the authority and responsibility of each organizational unit; (2) a work breakdown structure for all of the tasks to be performed in developing, testing, and implementing the software, along with schedules associated with the tasks; and (3) a security policy. Currently, the departments have an interagency project management plan that provides the program management principles and procedures to be followed by the project. However, the plan does not specify the authority and responsibility of organizational units for particular tasks; the work breakdown structure is at a high level and lacks detail on specific tasks and time frames; and security policy is still being drafted. Without a plan of sufficient detail, VA and DOD increase the risk that the CHDR project will not deliver the planned capabilities in the time and at the cost expected.

In addition, officials now acknowledge that they will not meet a previously established milestone: by October 2005, the departments had planned to be able to exchange outpatient pharmacy data, laboratory results, allergy information, and patient demographic information on a limited basis. However, according to officials, the work required to implement standards for pharmacy and medication allergy data was more complex than originally anticipated and led to the delay. They stated that the schedule for CHDR is presently being revised. Development and data quality testing must be completed and the results reviewed. The new target date for medication allergy, outpatient pharmacy, and patient demographic data exchange is now February 2006.

Finally, the health information currently in the data repositories has various limitations.

● Although DOD's Clinical Data Repository includes data in the categories that were to be exchanged at the missed milestone described above (outpatient pharmacy data, laboratory results, allergy information, and patient demographic information), these data are not yet complete. First, the information in the Clinical Data Repository is limited to those locations that have implemented the first increment of CHCS II, DOD's new health information system. As of September 9, 2005, according to DOD officials, 64 of 139 medical treatment facilities worldwide had implemented this increment.
Second, at present, health information in systems other than CHCS (such as the Clinical Information System and the Integrated Clinical Database) is not yet being captured in the Clinical Data Repository. For example, according to DOD officials, as of September 9, 2005, the Clinical Information System contained 1.7 million patient stay records.

● The information in VA's Health Data Repository is also limited: although all VA medical records are currently electronic, VA has to convert these into the interoperable format appropriate for the Health Data Repository. So far, the data in the Health Data Repository consist of patient demographics and vital signs records for the 6 million veterans who have electronic medical records in VA's current system, VistA (this system contains all the department's medical records in electronic form). VA officials told us that they plan next to sequentially convert allergy information, outpatient pharmacy data, and lab results for the limited exchange that is now planned for February 2006.

In summary, developing an electronic interface that will enable VA and DOD to exchange computable patient medical records is a highly complex undertaking that could lead to substantial benefits—improving the quality of health care and disability claims processing for the nation's military members and veterans. VA and DOD have made progress in the electronic sharing of patient health data in their limited, near-term demonstration projects, and they have taken an important step toward their long-term goals by improving the management of the CHDR program. However, the departments face considerable work and significant challenges before they can achieve these long-term goals. While the departments have made progress in developing a project management plan defining the technical and managerial processes necessary to satisfy project requirements, this plan does not specify the authority and responsibility of organizational units for particular tasks, the work breakdown structure lacks detail on specific tasks and time frames, and security policy has not yet been finalized. Without a project management plan of sufficient specificity, the departments risk further delays in their schedule and continuing to invest in a capability that could fall short of expectations.

Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time.

For information about this testimony, please contact Linda D. Koontz, Director, Information Management Issues, at (202) 512-6240 or at [email protected]. Other individuals making key contributions to this testimony include Nabajyoti Barkakati, Barbara S. Collier, Nancy E. Glover, James T. MacAulay, Barbara S. Oliver, J. Michael Resser, and Eric L. Trout.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For the past 7 years, the Departments of Veterans Affairs (VA) and Defense (DOD) have been working to exchange patient health information electronically and ultimately to have interoperable electronic medical records. Sharing medical information helps (1) promote the seamless transition of active duty personnel to veteran status and (2) ensure that active duty military personnel and veterans receive high-quality health care and assistance in adjudicating their disability claims. This is especially critical in the face of current military responses to national and foreign crises. In testimony before the Veterans' Affairs Subcommittee on Oversight and Investigations in March and May 2004, GAO discussed the progress being made by the departments in this endeavor. In June 2004, at the Subcommittee's request, GAO reported on its review of the departments' progress toward the goal of an electronic two-way exchange of patient health records. GAO is providing an update on the departments' efforts, focusing on (1) the status of ongoing, near-term initiatives to exchange data between the agencies' existing systems and (2) progress in achieving the longer term goal of exchanging data between the departments' new systems. In the past year, VA and DOD have begun to implement applications that exchange limited electronic medical information between the departments' existing health information systems. These applications are (1) Bidirectional Health Information Exchange, a project to achieve the two-way exchange of health information on patients who receive care from both VA and DOD, and (2) Laboratory Data Sharing Interface, an application used to electronically transfer laboratory work orders and results between the departments. The Bidirectional Health Information Exchange application has been implemented at five sites, at which it is being used to rapidly exchange information such as pharmacy and allergy data. Also, the Laboratory Data Sharing Interface application has been implemented at six sites, at which it is being used for real-time entry of laboratory orders and retrieval of results. According to the departments, these systems enable lower costs and improved service to patients by saving time and avoiding errors. VA and DOD are continuing with activities to support their longer term goal of sharing health information between their systems, but the goal of two-way electronic exchange of patient records remains far from being realized. Each department is developing its own modern health information system--VA's HealtheVet VistA and DOD's Composite Health Care System II--and they have taken steps to respond to GAO's June 2004 recommendations regarding the program to develop an electronic interface that will enable these systems to share information. That is, they have developed an architecture for the interface, established project accountability, and implemented a joint project management structure. However, they have not yet developed a clearly defined project management plan to guide their efforts, as GAO previously recommended. Further, they have not yet fully populated the repositories that will store the data for their future health systems, and they have experienced delays in their efforts to begin a limited data exchange. Lacking a detailed project management plan increases the risk that the departments will encounter further delays and be unable to deliver the planned capabilities on time and at the cost expected.
VA's mission is to serve America's veterans and their families and to be their principal advocate in ensuring that they receive medical care, benefits, and social support in recognition of their service to our nation. VA, headquartered in Washington, D.C., is the second largest federal department and operates the largest health care system in the United States. VA reported that as of September 30, 2007, it employed approximately 230,000 staff nationwide, including physicians, nurses, counselors, statisticians, computer specialists, architects, and attorneys. VA carries out its mission through three major line organizations—Veterans Health Administration (VHA), Veterans Benefits Administration, and National Cemetery Administration—and field facilities throughout the United States. During fiscal year 2007, VA provided health care services and benefits through a nationwide network of 153 medical centers, 877 outpatient clinics, and 135 nursing homes.

The Veterans' Health Care Eligibility Reform Act of 1996 authorized VA to provide certain medical services not previously available to veterans with nonservice-related conditions. While VA in 1996 had authority to recover some of the cost of providing these additional benefits through billing and collecting payments from veterans' private health insurers (third-party collections), it was not authorized to keep these collections. The Veterans Reconciliation Act of 1997, which was enacted as part of the Balanced Budget Act of 1997, changed this by authorizing VA to collect and deposit third-party health insurance payments in its Medical Care Collections Fund, which VA could then use to supplement its medical care appropriations. As part of VA's 1997 strategic plan, VA predicted that collections of payments from third-party insurance companies, along with veteran copayments for medications, would cover the majority of costs of care for veterans with nonservice-related conditions. During fiscal year 2007, almost 5.6 million people received care in VA health care facilities, and VA collections for health care services totaled nearly $2.2 billion.

As illustrated in figure 1, VA does not bill for health care services provided to veterans who have Medicare coverage only or veterans who have no private health insurance. For veterans who are covered by both Medicare and private health insurance, VA prepares claims according to Medicare guidelines and sends the bill to a Medicare fiscal intermediary (contractor) that calculates the Medicare/patient responsibility and sends the bill to the private insurer for adjudication and payment. If the veteran is not eligible for health benefits under Medicare but has private health insurance coverage, VA bills the third-party insurance company. In some situations, VA may not recognize that a veteran is eligible for Medicare benefits and sends the bill directly to the third-party insurance company. In these situations, the third-party insurer would determine that the veteran is eligible for Medicare coverage, reject the bill, and send it back to VA. VA updates the patient's file and then sends the bill to the Medicare contractor.

Similar to most health care providers, VA uses a fee schedule consisting of "reasonable charges" for medical services based on diagnoses and procedures. The fee schedule allows VA to more accurately bill for care provided.
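The routing rules just described (and depicted in figure 1) amount to a small decision procedure. The sketch below is a simplified illustration of those rules as described; the function name and return labels are hypothetical, not VA system identifiers.

```python
def route_claim(has_medicare: bool, has_private_insurance: bool) -> str:
    """Route a claim per the rules described above (figure 1)."""
    if not has_private_insurance:
        # Veterans with Medicare coverage only, or with no insurance: VA does not bill.
        return "do not bill"
    if has_medicare:
        # Claim is prepared under Medicare guidelines; the fiscal intermediary
        # calculates the Medicare/patient responsibility and forwards the bill
        # to the private insurer for adjudication and payment.
        return "bill Medicare fiscal intermediary"
    # Not Medicare eligible but privately insured: bill the insurer directly.
    return "bill third-party insurer directly"

# If VA bills an insurer without knowing the veteran is Medicare eligible, the
# insurer rejects the bill; VA updates the patient's file and reroutes it:
def reroute_after_rejection() -> str:
    return route_claim(has_medicare=True, has_private_insurance=True)

assert route_claim(True, False) == "do not bill"
assert route_claim(False, True) == "bill third-party insurer directly"
assert reroute_after_rejection() == "bill Medicare fiscal intermediary"
```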
Documenting and coding the care provided and processing bills for each episode of care are critical to preparing accurate bills for submission to third-party insurers. As illustrated in figure 2, VA uses a process consisting of four key functions to collect from third-party insurance companies. The four functions cover the following actions.

● Patient intake, which involves gathering insurance information and verifying that information with the insurance company, as well as collecting demographic data on the veteran.

● Utilization review, which involves precertification of care in compliance with the veteran's insurance policy, including continued stay reviews to determine medical necessity.

● Billing, which involves properly documenting the health care services provided to patients by physicians and other health care providers. Based on physician documentation, the diagnoses and medical procedures performed are coded. VA then creates and sends bills to insurance companies based on the insurance and coding information obtained.

● Accounts receivable and collections, which involves processing payments from insurance companies and following up on outstanding or denied bills.

In accordance with VA Handbook 4800.14, Medical Care Debts, VA accounts receivable staff at each medical center or other health care facility are required to follow up on unpaid reimbursable insurance cases. For bills of $250 or more, the first telephone or online follow-up is to be made within 45 days after the initial bill was generated. If necessary, a second follow-up should be initiated within 21 days of the first follow-up. If a third follow-up is necessary, it should be initiated within 14 days of the second follow-up. (This tiered timeline is illustrated in the sketch at the end of this section.) When a telephone or online follow-up is made, a comment briefly summarizing the contact, with an appropriate follow-up date, should be entered in the third-party joint inquiry menu in VHA's Veterans Health Information Systems and Technology Architecture (VistA) system. If no payment is received within 7 days of the third follow-up, accounts receivable personnel are to refer the bill to the VA medical center senior management official responsible for collection of the bill, generally the facility revenue manager. This official will determine the next appropriate action, including, after exhausting all required recovery efforts, possible referral to the VA regional counsel of jurisdiction for review and advice as to how to handle collection procedures. The regional counsel may forward problem cases to VA's General Counsel to review for possible litigation. Under guidance issued to VA by the Department of Justice (DOJ), VA may refer cases to DOJ for possible litigation.

Our July 2004 report documented continuing weaknesses in billing processes at the three medical centers tested that impaired VA's ability to maximize the amount of dollars paid by third-party insurance companies. For example, the medical centers did not always bill insurance companies in a timely manner, and they did not always perform follow-up on unpaid receivables in accordance with VA policy. We identified insufficient resources and a lack of performance standards as major causes of these problems. Our 2004 report included several recommendations directed at improving billing and collection functions. To improve the third-party billing function, we recommended that VA (1) perform a workload analysis of the medical centers' coding and billing staff and (2) based on the workload analysis, consider making necessary resource adjustments.
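To make the Handbook 4800.14 timeline described earlier in this section concrete, the following minimal sketch computes the due date of each step for a bill of $250 or more. It assumes simple calendar-day arithmetic and hypothetical labels; actual follow-up is tracked in VistA's third-party joint inquiry menu, not by code like this.

```python
from datetime import date, timedelta

def follow_up_schedule(bill_date: date) -> dict[str, date]:
    """Due dates for the tiered follow-up on a bill of $250 or more."""
    first = bill_date + timedelta(days=45)    # first phone/online follow-up
    second = first + timedelta(days=21)       # second follow-up, if needed
    third = second + timedelta(days=14)       # third follow-up, if needed
    referral = third + timedelta(days=7)      # no payment: refer to revenue manager
    return {"first follow-up": first, "second follow-up": second,
            "third follow-up": third, "referral to revenue manager": referral}

for step, due in follow_up_schedule(date(2007, 10, 1)).items():
    print(f"{step:>28}: due by {due}")
```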
To address these recommendations, VA formed a work group and performed in-depth surveys at 148 medical facilities to determine whether the medical facilities had established and implemented productivity and accuracy standards that were recommended by VHA in 2002. The work group reported that the majority of the 148 facilities surveyed had implemented coding productivity standards and that these standards were fairly consistent. In addition, the work group made several recommendations directed at maximizing coding productivity and assuring data quality. For example, the work group recommended that only qualified, competent coders be used and that noncoding duties related to assembly, analysis, preparation of coding records, and release of information be assigned to other staff. The work group also recommended that all coding be completed through the national encoder software. In November 2007, VA issued Handbook 1907.03, Health Information Management, Clinical Coding Program Procedures, which established a minimum bill coding accuracy standard of 95 percent and minimum standards (time frames) for coding productivity.

Our 2004 report also made three recommendations directed at improving the third-party collection function. Specifically, we recommended that VA (1) reinforce to accounts receivable staff that they should perform the first follow-up on unpaid claims within 30 days of the billing date, as then required by VA Handbook 4800.14, Medical Care Debts, and establish procedures for monitoring compliance; (2) reinforce the requirement for accounts receivable staff to enter insurance company contact information and follow-up dates to better document follow-up actions; and (3) augment VA Handbook 4800.14 by specifying a date or providing instructions for determining an appropriate date for conducting second follow-up calls to insurance companies on unpaid amounts.

To address these recommendations, VA modified and reissued VA Handbook 4800.14 to explain requirements for performing and documenting the first, second, and third follow-ups with third-party insurers. For example, for third-party accounts receivable greater than $250, the reissued Handbook now requires the first follow-up to be made within 45 days after the initial bill is generated. The second follow-up is to be made within 21 days after the first follow-up, and the third follow-up is to be made within 14 days of the second follow-up. VA also provided training to staff on the policies included in VA Handbook 4800.14, which included the need for timely follow-up on outstanding third-party receivables as well as follow-up documentation requirements.

Our case study analysis of unbilled patient encounters at 18 medical centers, including 10 medical centers with low billing performance (based on reported days to bill) and 8 centers under VA's Consolidated Patient Account Center (CPAC) initiative that were expected to have high billing performance, found billing delays; coding, billing, and documentation errors; and a lack of adequate management oversight and accountability over approximately $1.7 billion deemed unbillable in fiscal year 2007 by coding and billing staff. Although there are valid reasons why some medical services are not billable, including $1.4 billion attributable to service-connected treatment, Medicare coverage, and the lack of private health insurance coverage, medical center management did not validate the reasons for the related unbilled amounts.
Further, because third-party insurers will not accept improperly coded bills and in many cases have national and regional contracts with VA that bar insurer liability for payment of bills received more than a specified period of time, usually 1 year, after the date that medical services were provided, it is important that coding for medical services be accurate and timely. Our analysis of VA billing data showed that VA has improved its average overall days to bill third-party insurance companies from 93 days in fiscal year 2003 to 64 days in fiscal year 2007. However, most of the 18 case study medical centers we audited exceeded VA's fiscal year 2007 goal of 60 average days to bill. In addition, our analysis found that coding and billing errors, omissions in documentation, and other undefined reasons for unbilled amounts accounted for hundreds of millions of dollars that were not billed to third-party insurance companies. Moreover, case study medical centers did not effectively use available management reports to monitor trends and performance metrics, such as increases or decreases in unbilled amounts.

The 10 medical centers with low billing performance included in our case study analysis reported average days to bill ranging from 109 days to 146 days in fiscal year 2007, compared to VA's goal of 60 days. The 10 centers also had a total of $1.2 billion in unbilled medical services costs. To analyze case study medical center billing data by unbilled reason codes, we obtained medical center Reasons Not Billable reports and grouped unbillable reasons by major categories. Medical center Reasons Not Billable reports included over 100 reason codes and inconsistent reporting of other, undefined reasons. We discussed our groupings by category with VHA managers and obtained their agreement on our assignment of unbillable reasons by category. As illustrated in figure 3, our analysis of reasons not billable data for the 10 case study medical centers identified significant unbilled amounts for fiscal year 2007.

There are valid reasons why VA does not bill for all medical services it provides. For example, VA's legal authority to seek reimbursement from third-party insurers for the cost of medical services does not extend to services provided to veterans who have medical conditions that are (1) service-connected, (2) covered only by Medicare, or (3) not covered under a private or other applicable health insurance plan. Of the total $1.2 billion in unbilled medical services costs at the 10 medical centers, service-connected (nonbillable) medical care accounted for nearly $116 million, or 10 percent, and nonservice-related, not billable amounts totaled $835.3 million, or 69 percent, including $170 million recorded as medical procedures performed for uninsured veterans and $433 million recorded as attributable to medical services that were not covered by veterans' private health insurance. Managers at the 10 case study medical centers did not perform adequate reviews of the encounters assigned to these categories to ensure that billing clerks appropriately classified them.

Coding and billing errors ($48.3 million), documentation errors ($10.4 million), and undefined other reasons ($195.4 million) accounted for over $254 million, or 21 percent, of the $1.2 billion in total unbilled medical services costs at the 10 medical centers. Coding and billing errors include incorrect clinical service codes and late filing of claims.
The largest group of billing errors included $25 million for which the billing time frame had expired. According to a VA official, VA has entered into national and regional contracts with many third-party insurance companies that bar insurer liability for payment of bills received after a specified period of time, usually 1 year, but sometimes as little as 6 months, after medical services were provided. In addition, documentation errors accounted for more than $10 million in unbilled amounts at the 10 medical centers. Documentation errors include the failure of certification personnel to provide documentation of physician and other health care provider certifications; health care provider errors, such as physicians failing to submit documentation of their services for coding; and veterans refusing to sign Authorization for Release of Protected Health Information forms. Insurance companies will not pay for services unless they receive documentation that the physicians and health care providers are certified. The largest groups of documentation errors related to veterans not signing Release of Information forms due to privacy concerns ($2.0 million) and insufficient or missing medical services documentation ($6.4 million).

The CPAC centers' fiscal year 2007 average days to bill ranged from 39 days to 68 days, compared to VA's goal of 60 days. In fiscal year 2007, the eight CPAC centers accounted for over $508.7 million in unbilled amounts. As illustrated in figure 4, service-connected reasons accounted for about $65 million, or 13 percent, and nonservice-related, unbillable reason codes accounted for the largest portion—about $406.2 million, or 80 percent—of the total $508.7 million in unbilled third-party amounts for the 8 medical centers under CPAC. Coding and billing errors, documentation errors, and other reasons accounted for $37.5 million, or about 7 percent, of medical services costs that were not billed to third-party insurance companies. CPAC officials told us they perform some analysis of unbillable amounts. For example, CPAC officials stated that they review unbillable codes they consider to be at high risk of error. If a particular unbillable code increased from month to month, they investigated the cause of the increase and took appropriate action to mitigate the problem. A detailed discussion of CPAC management oversight is presented later in this report.

Our analysis of VA's accounts receivable aging data as of September 25, 2007, showed that VA had approximately $600 million in outstanding third-party receivables, of which over $148 million, or about 25 percent, was more than 120 days old. It is important that VA actively pursue unpaid amounts by making timely follow-up contacts with third-party insurance companies because the older a receivable, the less likely it is to be collected. Moreover, uncollected third-party receivables place an added burden on taxpayers because additional amounts would need to be covered by annual appropriations to support the same level of service to veterans. In addition, our statistical tests found high internal control failure rates related to medical centers' lack of adherence to VHA requirements for timely, properly documented follow-up on unpaid bills that had been sent to third-party insurance companies. Management officials at several of the medical centers tested in our statistical sample attributed their high follow-up failure rate to inadequate staffing.
However, we found that a lack of management oversight at the medical centers as well as at the VHA management level contributed to the control weaknesses we identified.

Our analysis of VA's accounts receivable aging data as of September 25, 2007, identified approximately $600 million in outstanding third-party receivables. As shown in figure 5, about $295 million of this total was less than 45 days old. Of the remaining $305 million, over $148 million, or 49 percent, was more than 120 days old. We focused our analysis on bills of $250 or more—the largest category of third-party receivables. For example, uncollected receivables related to bills of $250 or more represented over $426 million, or 71 percent, of the approximately $600 million in outstanding receivables at the end of fiscal year 2007. Although about $227 million of the over $426 million in receivables related to bills of $250 or more was less than 45 days old and did not yet require initial follow-up, the remaining $199 million, or 47 percent, was subject to VA follow-up action on unpaid amounts, and nearly $84 million had remained uncollected for 120 days from the date of the initial bill. Timely follow-up is critical because the older a receivable, the less likely it is to be collected. As was the case with billings, we found that the case study medical centers had limited procedures in place to monitor the collections process. Further, the lack of follow-up documentation undermines the reliability of trend information needed to effectively manage third-party receivables. Our analysis of accounts receivable aging data showed that $37.5 million of the total $600 million in receivables as of the end of fiscal year 2007 was over 1 year old, including $17 million related to bills of $250 or more.

Our statistical tests of VA-wide data on controls for follow-up by accounts receivable personnel on unpaid amounts of $250 or more billed to third-party insurers found significantly high failure rates. VA Handbook 4800.14, Medical Care Debts, requires follow-up on unpaid accounts receivable, as necessary, to collect unpaid third-party receivables. The first follow-up for debts of $250 or more is required within 45 days after the initial bill was generated. If necessary, the second follow-up is to be made within 21 days of the initial follow-up, and the third follow-up is required within 14 days of the second follow-up. The follow-up requirement does not apply once a receivable is either collected in full, partially paid and contractually adjusted to a zero balance, or contractually adjusted to zero with no payment. Our test of timely follow-ups considered a bill to be closed and no longer subject to follow-up on the date the bill was paid in full or decreased to zero.

We randomly selected a sample of 260 third-party insurer bills from a population of $547.8 million in fiscal year 2007 billings for medical services. Of the $547.8 million, our analysis showed that VA collected $260.1 million, or about 47 percent. We generally report our statistical results as point estimates accompanied by 95 percent confidence intervals. A 95 percent confidence interval means that if we drew 100 different random samples and computed an interval from each, about 95 of the 100 intervals would contain the true value. In other words, we can be 95 percent confident that the true value lies between the lower and upper limits of the confidence interval.
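The lower-bound logic discussed here can be illustrated with a short sketch. It assumes a simple random sample and uses the Wilson score interval for a binomial proportion; GAO's estimates were produced with its own sampling and estimation methodology, so the inputs and outputs below are purely illustrative.

```python
import math

def wilson_interval(failures: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided 95 percent confidence interval for a proportion."""
    p = failures / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical example: 180 of 260 sampled bills fail a timeliness test.
lo, hi = wilson_interval(180, 260)
print(f"point estimate {180/260:.0%}; 95% CI [{lo:.0%}, {hi:.0%}]")
# A conservative reading, as in this report, cites the lower bound (lo).
```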
Point estimates provide a useful indicator of the effectiveness of controls for our VA-wide tests, which included a total of 260 bills. However, our tests of the CPAC and non-CPAC medical center subsets of our sample involved fewer bills, and the confidence intervals were much wider. When this is the case, we generally focus on the lower bound of our confidence interval as a more conservative estimate of the effectiveness of controls. Our point estimates for each of our tests and the upper and lower bounds of our confidence intervals are included in appendix I. The following discussion focuses on conservative estimates of our test results based on the lower bound of our 95 percent confidence intervals.

For example, table 1 shows that conservative estimates, based on the lower bound of our confidence intervals, indicate that VA controls for timely follow-up were ineffective. Specifically, the conservative approach for our VA-wide tests shows that medical center collections staff failed the control requiring initial follow-up within 45 days of the bill date at least 69 percent of the time. Similarly, our conservative estimate indicates that CPAC medical center personnel failed this control test for timely initial follow-up (based on 60 bills in our sample subset) at least 36 percent of the time, and non-CPAC medical center personnel failed this control test (based on 200 bills in this subset) at least 71 percent of the time.

Because the universe of unpaid bills subject to requirements for second and third follow-ups was smaller, the confidence intervals for these tests were wider. However, the same conservative approach for our estimates of control failures for the second and third follow-ups continues to show significant failure rates. For example, based on our tests, we estimate that VA-wide control failures related to required second and third follow-ups on unpaid third-party bills were at least 44 percent (based on a subset of 109 bills) and 20 percent (based on a subset of 55 bills), respectively. We estimate that CPAC medical center control failures related to required second and third follow-ups on unpaid third-party bills were at least 23 percent (based on a subset of 40 bills) and 22 percent (based on a subset of 21 bills), respectively. In addition, we estimate that non-CPAC medical center control failures related to required second and third follow-ups on unpaid third-party bills were at least 45 percent (based on a subset of 69 bills) and 17 percent (based on a subset of 34 bills), respectively.

Our analysis of the actual test results for the third-party insurer bills included in our statistical sample showed that results varied by medical center. For example, for several medical centers all, or nearly all, of the bills in our statistical sample failed control tests for timely first follow-up on unpaid amounts within 45 days. Conversely, several medical centers in our sample performed timely first follow-up on all of the bills we tested. The actual test results for the first follow-up for all of the bills in our statistical sample are presented by VISN in appendix III. In our interviews of management officials at several of the medical centers included in our statistical sample, the officials attributed their high follow-up failure rate to inadequate staffing.
As noted previously, in response to recommendations in our 2004 report, VA shifted nonrevenue functions from billing and collections staff to other medical center personnel to provide greater focus on the revenue function.

Our statistical tests of VA-wide data on controls for documenting details of follow-up contacts on unpaid amounts billed to third-party insurers also found significantly high failure rates. VA Handbook 4800.14, Medical Care Debts, requires medical center accounts receivable staff to document a summary of their contacts with third-party insurance companies as well as the first and last name of the insurance company representative and the representative's title, position, and phone number. Documentation of contact detail is important because it enables VA to quickly identify billing problems and take appropriate action to resolve them. However, for several of the bills in our sample, accounts receivable personnel simply noted "AR follow-up," or they left this data field blank.

As shown in table 2, our test results based on the lower bound of our confidence intervals indicate that controls for proper documentation of follow-up contacts on unpaid amounts with third-party insurers were ineffective. For example, using this conservative approach for our VA-wide tests, we estimate that medical center collections staff failed the control test for proper documentation of first follow-up contacts at least 72 percent of the time, based on a sample of 97 bills. Similarly, our conservative estimate indicates that CPAC medical center personnel failed this control at least 38 percent of the time (based on 36 bills in this subset) and non-CPAC medical center personnel failed this control test at least 74 percent of the time (based on 61 bills in this subset). Table 2 shows that conservative estimates of contact documentation failures related to the second and third follow-ups are also significantly high, indicating that controls for all of our related tests were ineffective. Our tests of the requirement for documenting the second follow-up contact were based on 41 bills VA-wide, 14 bills for CPAC medical centers, and 27 bills for non-CPAC medical centers. Our tests of the documentation requirement for the third follow-up were based on 18 bills VA-wide, 8 bills for CPAC medical centers, and 10 bills for non-CPAC medical centers.

In addition to documenting the details of follow-up contacts, collections personnel are required to adequately document the reasons for adjustments to decrease billed amounts in order to perform proper monitoring and oversight of accounts receivable personnel and to assess whether these adjustments were appropriate. Specifically, VHA Handbook 4800.14, Write-Offs, Decreases, and Termination of Medical Care Collections Fund Accounts Third-Party Receivable Balances, requires that accounts receivable staff provide an explanation for adjustments made to decrease third-party bills. The Handbook requires that the explanation provide clear and unambiguous reasons for the decrease adjustment and provides several suggested comments that are considered adequate explanations for the adjustments. Our tests of whether accounts receivable personnel adequately documented reasons for adjustments to decrease billed amounts found a VA-wide failure rate of 44 percent, as shown in table 3.
Although the upper bound of our 95 percent, 2-sided confidence interval indicates that VA-wide estimated control failures could be over 50 percent, a conservative analysis based on the lower bound of our 2-sided confidence interval indicates that controls were ineffective for all categories of our tests in this area. Our tests for this control included a sample of 260 bills VA-wide, 60 bills for CPAC medical centers, and 200 bills for non-CPAC medical centers. Decreases made without appropriate explanations leave no audit trail or explanation of the reasons why an account receivable was decreased to zero. As a result, VA medical center management has limited data available to determine whether the adjustment was appropriate or if further collection action is needed. Moreover, without this information, medical center management cannot perform necessary oversight to assure that third-party revenues are maximized.

Our review of VA and VHA policies and procedures, process walk-throughs, and interviews of VHA Chief Business Office (CBO) officials and management officials at our case study medical centers determined that there are no formal policies and procedures for oversight of the third-party insurer billing and collection processes by medical centers or VHA. In addition, we found that medical centers and VHA have few standardized management reports to facilitate oversight. Because VA's health care billing and collection systems operate as stand-alone systems at each medical center, VA-wide reporting is dependent on numerous individual queries and data calls. As a result, we found little or no monitoring and oversight of the third-party billing and collection processes. This raises concerns about the adequacy of oversight of the $1.7 billion in unbilled amounts at the 18 case study medical centers, including the hundreds of millions of dollars in unbilled amounts related to coding, billing, and documentation errors, and other undefined reasons. The lack of formal VA policies for management oversight of third-party billings and collections also raises VA-wide concerns. Enhanced oversight would permit VHA and medical center management to monitor trends and performance metrics, such as increases or decreases in unbillable amounts.

Although VA has not established formal policies and procedures for oversight of third-party insurer billings and collections, officials at the 18 case study medical centers told us they perform some oversight. For example, officials at all 10 of the case study medical centers with low billing performance indicated that they perform limited oversight of unbilled amounts with no documentation and insufficient documentation reason codes. None of the officials we interviewed provided us documentation of their monitoring or oversight procedures. According to our interviews, oversight procedures varied by medical center. For example, one medical center official told us that she performs a monthly review of the insufficient documentation and no documentation reason codes. A second medical center official told us that a review of the codes had not yet been performed in fiscal year 2008 but that quarterly reviews usually have been performed. Another medical center official told us that she randomly selects between three and six bills per coder that were designated as unbillable for documentation reasons and reviews them for accuracy.
Further, an official at a fourth medical center told us that she sees a potential risk that a billing clerk could clear out a billing backlog by inappropriately assigning reasons not billable codes to medical procedures waiting to be billed. While it is unlikely that this would occur, such a problem would only be detected if proper reviews were being performed by medical center management. None of the officials at the 10 medical centers indicated that their reviews included any of the other reasons not billable, such as service-connected medical services or medical services not covered by third-party insurance companies. As illustrated previously in figure 3, documentation errors, the focus of the 10 medical centers we reviewed, made up only 1 percent of the total amount not billed by the 10 medical centers during fiscal year 2007. Without reviewing all of the patient services deemed to be unbillable, the 10 medical centers do not have reasonable assurance that their unbilled amounts are accurate and appropriate. Further, officials at three case study medical centers that were also in our VA-wide sample for testing collection follow-up told us they performed little or no monitoring of collection follow-up activity because existing management reports did not facilitate their oversight.

Although CPAC also lacked formalized policies and procedures for management oversight of unbilled amounts, CPAC officials told us that they reviewed unbilled amounts assigned to reason codes they consider to have a high risk of error. However, CPAC officials did not provide us any documentation of their oversight and monitoring procedures. For example, CPAC officials told us that they perform weekly reviews of unbilled amounts assigned to the service-connected reason not billable code. If doubt exists as to whether the patient's condition is actually service-connected, quality assurance personnel send the medical services record to the facility where the bill was generated for further review. This provides management assurance that the encounters not billed for that reason are appropriately classified. The officials said they also prepare trend analyses for three types of documentation errors—no documentation, insufficient documentation, and not a billable provider. The officials said they also do some review of coding and billing errors. CPAC officials stated that if a particular unbillable code was increasing from month to month, they investigated the cause of the increase and took appropriate action to mitigate the problem. According to CPAC officials, experienced medical coders in their Quality Assurance Division review the diagnosis codes for service-connected patient encounters for reasonableness and documentation of the medical condition.

At the eight CPAC medical centers, oversight of the collection process consists of supervisory reviews. For example, supervisors in the collections follow-up department perform quality reviews of clerks. A clerk is tested every 2 weeks until the clerk receives two consecutive reviews with no exceptions. The clerk is then reviewed monthly. The reviews involve testing five claims for proper follow-up. We found that medical centers have few standardized management reports to facilitate oversight.
Our analysis of medical center Reasons Not Billable reports found that these reports consist of a list of over 100 reason codes for unbillable amounts that are not summarized by major categories, such as the five categories we identified, to facilitate management review and decision making.

Our review of VA and VHA policies and procedures and our interviews with CBO officials determined that VA lacked formal policies and procedures for oversight of the billing and collections processes related to third-party insurers. In addition, we found that VA and VHA have few standardized management reports to facilitate oversight. For example, our review of CBO reports found that these reports generally consist of data on VA-wide days to bill, accounts receivable, and collections. VHA CBO does not generate detailed performance reports by medical center, and it does not review unbilled amounts. Limitations in management reporting relate to VHA systems design. For example, VistA operates as a stand-alone system at each medical center. As a result, VHA's CBO does not have direct access to medical center data, and it would need to use data calls to obtain medical center data for monitoring and oversight. Consequently, VHA developed the Performance and Operations Web-Enabled Reports (POWER) system as a data warehouse for VistA data and information to provide some additional management information capability. However, as a data warehouse, POWER does not provide a full range of standard management reports needed for oversight, and obtaining management information from POWER for oversight and monitoring purposes would necessitate numerous individual management queries and data compilations.

In response to long-standing weaknesses in third-party billing and collection processes, VA undertook several initiatives aimed at increasing revenue from third-party billings and collections. According to VA documentation, the improvement initiatives were developed by engaging key VHA leaders and other stakeholders in a comprehensive review of revenue cycle business process activities, from patient intake and insurance verification through billing and collection, as well as planning and implementation efforts. As discussed previously, we assessed controls for coding and billing accuracy and collection follow-up for medical centers under the CPAC pilot initiative. However, we did not evaluate the six ongoing initiatives, some of which are open-ended or will not be completed for several years. Effective management oversight and implementation will be key to the success of these initiatives. The following section summarizes recently completed VA initiatives, including improvements in recruitment and retention of coders and updates of key VHA policy guidance. Ongoing initiatives include six key strategic initiatives for increasing third-party revenue.

The following two VHA initiatives to enhance third-party revenue were completed in 2006 and 2007.

Recruitment and Retention of Coders and Health Information Managers. Over the past several years, VHA has pursued improvements in the capture of medical charges and clinical documentation to enhance third-party collections.
The first of three improvements, completed in December 2006, resulted in implementation of a plan to improve recruitment and retention of coders and health information managers within VHA through reclassification of employee positions in Office of Personnel Management occupation series 675, medical record technicians, and series 669, medical record administrators, from regular civil service positions to unique hybrid health-care civil service positions. The position reclassifications, which were effective on December 6, 2006, removed many hiring delays and provided opportunities for employee special advancement for professional achievement while in VA service. As of October 2007, VHA had 2,024 employees in the 675 medical record technician job series and 430 employees in the 669 medical record administrator job series. Updated VHA Policy Guidance. VHA updated its policy guidance on coding staff qualifications, accurate coding and documentation of medical services, and closing dates for reporting these data for performance measures and corporate management reporting. The updated guidance was incorporated in the following VHA policy documents during 2006 and 2007. VHA Directive 2006-026, Patient Care Data Capture, dated May 5, 2006, contains requirements for capture of all outpatient encounters, inpatient appointments in outpatient clinics, and inpatient billable services. This directive also requires that each clinic is set up with appropriate Decision Support System identifiers to help ensure the accuracy of coding for patient care encounters. VHA Directive 2006-035, Surgical Case Coding, dated May 30, 2006, provides policy for surgical code assignments based on International Classification of Diseases (9th Revision) Clinical Modification and Current Procedural Terminology (4th Edition). The policy also notes recent software changes and reiterates VHA policy on accurate capture of coded data within the surgical package, including requirements for qualified coding staff, accurate source documentation, and timely and accurate entry of codes. In addition, the directive makes VISN directors responsible for ensuring that the Surgery Version 3.0 software patch is installed on all medical centers’ VistA systems in accordance with nationally distributed software packages. The directive also makes medical center directors responsible for ensuring that surgical coding is conducted by qualified staff using the Update/Verify Procedure/Diagnosis Codes option within the surgery package or using an encoder that is interfaced with the surgery package for entry of coded procedures and diagnoses for all surgeries. VHA Directive 2007-030, Closeout of Veterans Health Administration Corporate Patient Data Files, Including Quarterly Inpatient Census, dated September 27, 2007, changed the close out date to ensure that all corporate data are available when extracted for performance measures and other corporate reporting needs. Accordingly, this directive requires inpatient coding to be completed no later than the 14th day following patient discharge and outpatient coding to be completed no later than 14 days after the outpatient visit. VHA Handbook 1907.03, Health Information Management, Clinical Coding Program Procedures, dated November 2, 2007, as previously discussed, establishes minimum standards for coding productivity, including specific time frames for completing bill coding for various medical services, and a minimum 95 percent coding accuracy standard. 
The Handbook also includes suggested coding staffing requirements, coding staff qualifications, coding contract services, and coding function efficiencies.

In October 2005, VHA's Revenue Optimization Plan Enhancement (ROPE) work group identified six key strategic initiatives for improving revenue performance. Many of these initiatives represented continuing actions that were previously initiated under VHA's Revenue Action Plan (RAP)—the predecessor to ROPE. The first initiative is targeted for completion in May 2008. As of the end of our field work in April 2008, VA had not provided target dates for full implementation of the other five initiatives. A brief overview of the six initiatives and their current status follows.

Revenue Improvement Demonstration Project (RIDP). The RIDP (outlined in congressional reports discussing the fiscal year 2006 appropriation for VHA's medical administration account) was established to further advance revenue performance within a single VISN and develop a comprehensive national revenue model by integrating contractor-supported process modeling and business reengineering efforts. According to VA documents, CPAC was selected to be the host of this demonstration project because the objective was seen as a complementary effort to the CPAC initiative that was already under way. The RIDP initiative was divided into two major phases. The first phase was an operational assessment of revenue cycle functions in order to create additional cash flow and to assist with the development of a model that could be replicated nationally. The second phase of RIDP has two parts, Parts A and B. Part A, which was completed during fiscal year 2007, covered phase 1 implementation and benefit realization. Under Part A, a revenue cycle processing environment was established, staff and leadership received training in new processes and techniques, and transition and permanence activities were completed. Part B, which was scheduled to be completed in May 2008, covers transition monitoring and sustainability.

Clinical Data Entry (CDE). According to VA officials, CDE was developed to improve the existing Clinical Indicators Data Capture (CIDC) VistA software. CIDC was not implemented nationally because of concerns related to the provider-patient interaction and provider productivity. However, functional requirements and software design were to be revisited so that expanded clinician involvement could be included. According to VA, the expected outcome of CDE is twofold—first, to recommend software that can be designed to automatically capture clinical data as a by-product of the clinical encounter instead of as an extra step, and second, to accommodate revenue capture of high-volume/dollar procedures that are being performed, but not billed in VistA. CDE was targeted for completion in May 2007. However, upon extensive work with clinicians, the project team concluded that a fundamental change was required in the Computerized Patient Record System (CPRS) in order to effectively accomplish the project goal. According to VA information technology officials, the CDE design recommendations and accompanying business flow diagrams will be included as a part of Computerized Patient Record System Reengineering, referred to as CPRS-R. An implementation date for this initiative has not yet been determined.

National Revenue Contracts Office.
According to VA, the National Revenue Contracts Office initiative is designed to leverage VHA's size and financial purchasing power to develop national relationships for both payer agreements and contracts for vendors who provide support for revenue cycle activities. The National Payer Relations Office (NPRO) is currently pursuing strategies to effectively manage relationships with third-party insurance companies. VHA's first national payer agreement, with Aetna, was completed in 2007, and a second national agreement with United Healthcare is expected to be effective May 2008. According to VA officials, the National Payer Relations Office has completed 78 regional agreements and is currently working on negotiating an additional 10 agreements. According to VA officials, a Revenue Contracts Program component was established under NPRO to improve management of vendors used to support VHA revenue cycle activities by developing better rates and consistency in payment terms, expectations, and performance standards. VA hopes that this Program will ensure more consistent terms and conditions for frequently used revenue cycle contracts. For example, VA established national Blanket Purchase Agreements (BPA) for both coding and insurance identification/verification products and services. According to VA officials, the Revenue Contracts Program is currently working on establishing BPAs for third-party billing and accounts receivable follow-up. As of the end of our field work in April 2008, VA had not provided us with a target date for completing BPAs for third-party billing and accounts receivable follow-up.

Revenue Improvements and Systems Enhancement (RISE) Plan. A major driver in VA's revenue optimization strategy is a Patient Financial Services System (PFSS) project directed by Congress, which seeks to remedy significant business process and technology issues in VA's revenue-related financial systems. Building on the initial PFSS project and to continue ongoing improvement efforts, the VHA CBO chartered a RISE project team. RISE is part of the VistA modernization action program. The primary objective of RISE is to provide comprehensive tools for seamless sharing of required administrative and clinical information to support billing and related revenue activities across the enterprise. The four goals of the RISE plan are (1) defining a clear vision for revenue cycle activities across VHA, (2) replacing or enhancing aspects of current integrated billing and accounts receivable systems, (3) improving all related business processes by implementing structured IT support systems while delivering automated tools to improve revenue cycle efficiency, and (4) identifying process improvements for VHA that drive improvement in revenue cycle activities while leveraging enhanced IT support systems. According to VA officials, the RISE team is currently developing detailed short- and long-term business process and technology strategies in all areas of the revenue program. The RISE team is also developing accompanying documentation that defines end-to-end processes and that will form the requirements for the framework of the overall system improvement initiative.
This document will include (1) a vision and scope document, which defines services and capabilities required from the future state revenue system, and (2) a business case, which incorporates process definitions; a program governance structure to oversee operations; a communications plan for the business program; and definitions to identify requirements for a new end-to-end revenue system. As of the end of our field work in April 2008, VA had not provided us with a targeted implementation date for this initiative.

Revenue Cycle Enhancement Reviews. VHA has implemented an initiative to identify opportunities to further enhance revenue potential using Revenue Cycle Enhancement Teams (RCET). On-site reviews and development of corrective action plans are ongoing. The objective of this initiative is to identify available operational opportunities and provide recommendations to improve overall cash flow. A review team consists of VHA subject matter experts who are deployed to lower-performing facilities to assess core revenue cycle functions, including patient intake and insurance verification, utilization review, coding, billing, and accounts receivable. The methodology employed by the team in completing reviews includes a combination of data analysis and on-site observation of activities. Following the on-site review, the team provides an action plan to the facility that outlines tasks that need to be completed within the next 90 days and participates in conference calls to ensure completion of all identified action items. VHA CBO reported that it had completed reviews at 30 facilities (through January 2008) and that those facilities have generally increased revenue collections following these reviews. As of the end of our field work in April 2008, VA did not have a list of planned visits, and it had not provided us with a targeted implementation date for this initiative.

Consolidated Patient Account Centers (CPAC). CPAC is based on a private sector model that is tailored for VHA's specific requirements. The CPAC model consists of a stand-alone, regionalized billing and collection activity supported by remote utilization review, data validation, and customer service functions that are organizationally aligned with the consolidated center. CPAC is being developed in three phases. Phase I, which focused on designing a work flow model and new organizational structure within a pilot VISN (VISN 6, operating as CPAC), was completed September 30, 2006. In fiscal year 2006, CPAC reported that it achieved 99 percent of its targeted collections, increasing total VISN 6 collections by approximately $10 million over fiscal year 2005. In fiscal year 2007, CPAC reported that it achieved 110 percent of its targeted collections, increasing total VISN 6 collections by $23 million over fiscal year 2006. Phases II and III address expansion of the CPAC pilot program. Phase II, which includes moving the consolidated center into a new physical plant to support workflow models developed in Phase I as well as expanding existing operations to serve an additional VISN, is scheduled for completion by the end of fiscal year 2008. Phase III addresses national expansion. VHA is currently working on a VA-wide implementation strategy based on experiences from the CPAC pilot in VISN 6. As of the end of our field work in April 2008, VA had not provided us with targeted implementation dates for Phase III.
Although VA has made some progress in improving policy guidance and processes for billing and collecting medical care receivables from third-party insurers, medical centers have significant, continuing weaknesses in controls over coding, billing, and collections follow-up that prevent VA from maximizing hundreds of millions of dollars in potential revenue from third-party insurance companies. The fundamental weaknesses are a lack of proper processing of billing information, inadequate follow-up on unpaid third-party accounts receivable, and inadequate management oversight by medical center and VA management. Unless VA effectively addresses these weaknesses, it will continue to use higher amounts of appropriations from the General Fund of the Treasury to provide medical care to the nation's veterans than otherwise would be necessary, thereby placing a higher burden on taxpayers.

We recommend that the Secretary of Veterans Affairs require the medical centers to take the following seven actions to maximize revenue from third-party insurer billings and collections.

First, to assure that all amounts that should be billed to third-party insurers are billed in an accurate and timely manner, we recommend that the Secretary take the following two actions.

Establish procedures requiring medical center management to perform and document detailed monthly reviews of patient encounters determined to be nonbillable by coding staff to ensure they are properly coded.

Establish procedures requiring medical center management to develop and use management reports on medical center performance with respect to accuracy and timeliness of billing performance and take appropriate corrective action.

Second, to assure timely follow-up and documentation of unpaid third-party billings, we recommend that the Secretary take the following three actions.

Establish a process requiring medical centers to monitor their accounts receivable staffs' adherence to the requirement in VA Handbook 4800.14, Medical Care Debts, to follow up on outstanding third-party accounts receivable within specified time frames.

Establish a mechanism requiring medical centers to monitor their accounts receivable staffs' adherence to VA Handbook 4800.14, Medical Care Debts, which requires documenting a brief summary of all follow-up contacts, including information on when a payment will be made or why a payment was not made.

Establish a process requiring medical centers to confirm that accounts receivable staff are following the requirement in VHA Handbook 4800.14, Write-Offs, Decreases, And Termination of Medical Care Collections Fund Accounts Third-Party Receivable Balances, to provide a specific explanation for any adjustments to decrease third-party accounts receivable from third-party insurers.

Third, to assure effective VA-wide oversight of billings and collections with regard to third-party insurers, we recommend that the Secretary take the following two actions.

Require VHA to establish a formal VA-wide process for managing and overseeing medical center billing performance, including development of standardized reports on unbilled amounts by category.

Establish procedures requiring periodic VHA-wide assessments by the Chief Business Office to document whether medical center staff are performing timely and accurately documented follow-up on outstanding third-party accounts receivable, as required in VHA Handbook 4800.14.

On May 22, 2008, the Secretary of Veterans Affairs provided written comments on a draft of this report.
VA officials concurred with all seven of our recommendations and provided information on steps VA is taking to address them. However, VA's letter stated that our report overstated findings related to the potential for lost revenue to the government and inadequate levels of oversight. VA's letter also stated that our conclusion on collections improvement minimizes the significant gains VA has made and that several improvement initiatives were so successful that they have been designated as ongoing.

With regard to VA's comment that we overstated findings related to the potential for lost revenue to the government, we added language to our report to indicate that $1.4 billion of the $1.7 billion in unbilled medical services at the 18 case study medical centers were classified as service-connected, Medicare coverage, or lack of private health insurance coverage. However, as noted in our report, although certain medical services are not billable, such as service-connected treatment, VA management has not validated reasons for these unbilled amounts to assure that all billable costs are charged to third-party insurers. Further, we focused on unbilled amounts related to coding, billing, and documentation errors and other undefined problems as a basis for making recommendations for increasing third-party revenues. In this regard, we identified $291.5 million in unbilled amounts due to errors at the 18 case study medical centers.

With regard to VA's statement that it has established significant levels of oversight, our report noted that VA has agencywide data on days to bill, accounts receivable, and collections. However, we found that VA has not established policies and procedures for management oversight of unbilled amounts or compliance with follow-up requirements for outstanding third-party receivables. Further, although POWER generates metrics for several performance indicators, these metrics do not provide VA with the full range of management reports needed to adequately monitor unbilled amounts and compliance with follow-up procedures. VA concurred with our recommendation to establish oversight of unbilled amounts and compliance with follow-up procedures and described systems enhancements and improved monitoring activities that it expects will address the problems related to billings, collections, and oversight we identified.

With regard to VA's comment that it made significant gains in collecting third-party revenue since fiscal year 2004, a number of factors need to be considered to measure the extent of VA's success. For example, VA also experienced an increase of 500,000 patients treated at VA medical centers during the 4-year period. In addition, the rate of inflation for medical care over the last 4 fiscal years and changes in inpatient and outpatient mix would need to be considered. Further, VA's efforts to address billing backlogs over the 4-year period could have also contributed to the increased revenue. This type of analysis was outside the scope of our audit and would require further study to determine the impact of these factors on VA's collection gains. Despite VA's increased third-party collections, our work showed there is a significant opportunity to increase revenue from third-party insurers by (1) correcting errors that have prevented appropriate billings to third-party insurers and (2) performing timely and effective follow-up on unpaid receivables.
VA’s statement that several initiatives to enhance third-party revenue were so successful that they have been designated as ongoing implies that target dates for completion are not necessary. We support the concept of ongoing efforts for continuous improvement in operations. However, three of the six initiatives were begun in 2002 and have encountered significant slippage and refocusing under revised management plans. For example, initiatives related to Clinical Data Entry, the National Revenue Contracts Office, and CPAC began under the Revenue Action Plan (RAP), which was approved in July 2002. These initiatives were later incorporated under the Revenue Optimization Plan Enhancement (ROPE) plan in 2005. In addition, the RISE plan for revenue-related system enhancements was initiated under ROPE. Targeted implementation dates and milestones will be key to assisting management in overseeing these initiatives to assure that intended goals are accomplished within reasonable time frames. Finally, VA officials informed us at the conclusion of our audit that they revised their follow-up requirements for third-party receivables to require increased focus on unpaid high-dollar amounts and provide more flexibility in follow-up time frames for smaller dollar amounts. For example, VA’s revised policy will focus on collection follow-up for amounts of $1,500 and above within 45 days of the billing date. However, the revised policy would extend the date for the first follow-up for bills from $250 to $1,500 to be within 60 days of the initial bill. Going forward, it will be important for VA to oversee and monitor the implementation of the new policy as part of its management oversight process in order to determine if the new policy is achieving intended results and, if not, to perform additional analysis and make appropriate policy changes to assure effective follow-up on unpaid third-party bills. VA also provided technical comments and corrections which we have addressed in our report, as appropriate. VA’s comments are reprinted in appendix II. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report until 30 days from its date. At that time we will send copies of this report to interested congressional committees; the Secretary of Veterans Affairs; the Acting Secretary of Health, Veterans Health Administration; the VHA Chief Business Officer; and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-9095 or [email protected], if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV. 
Pursuant to requests from the Chairman and Ranking Minority Member of the House Committee on Veterans' Affairs and the Chairman and Ranking Minority Member of the Subcommittee on Oversight and Investigations, we performed a follow-up audit of controls over VA's third-party billing and collection processes, including (1) an evaluation of the effectiveness of VA medical center billing processes at selected locations, (2) an assessment of medical centers' adherence to VA policies for performing timely follow-up on unpaid accounts receivable and proper documentation of follow-up contacts, and (3) a determination of the adequacy of VA oversight of billing and collection processes. In addition, we summarized the status of management initiatives undertaken to improve third-party billing and collection processes. We used as our criteria applicable law and VA policy, as well as our Standards for Internal Control in the Federal Government and our Internal Control Management and Evaluation Tool.

To assess the control environment at our test locations, we obtained an understanding of VA processes and controls over the third-party revenue cycle. We performed walk-throughs of these processes at several medical centers. We interviewed management officials at selected medical centers about their management oversight and accountability procedures over third-party billings and collections. We also reviewed applicable VA program guidance and local policies and procedures at selected test locations and interviewed officials about their billing and collection processes and controls. In addition, to assure the reliability of data and information used in this report, we reviewed VA documentation and interviewed key officials. We also reviewed VA procedures for assuring the reliability of data and information generated by key VA systems used in the third-party billing and collection processes, including VHA's Veterans Health Information Systems and Technology Architecture (VistA) and Performance and Operations Web-Enabled Reports (POWER) systems.

To determine if the third-party revenue offices at our 18 case study locations had adequate management oversight and accountability for assuring timely and accurate billings, we obtained and reviewed management reports, including (1) Reasons Not Billable Summary and Detailed reports and (2) Elapsed Days to Bill performance reports. We compiled data from the Reasons Not Billable reports into a database and established categories of reasons not billed for further analysis. We coordinated with VHA officials on the identification of reasons not billed categories. We interviewed medical center revenue officials at the 10 case study medical centers with low billing performance and at CPAC, for the 8 CPAC case study medical centers, about their management oversight and accountability procedures. Table 4 shows the two groups of case study medical centers we examined and performance data on days to bill and unbilled amounts for each location.

We selected a VA-wide statistical sample of billing records to assess whether medical center accounts receivable personnel were adhering to VA policy for timely follow-up with private insurance companies on unpaid third-party receivables. From the universe of fiscal year 2007 collections activity, we stratified the billing records into one stratum for bills from CPAC and one stratum for bills from all other VA medical centers. We randomly selected 60 bills from the CPAC stratum and 200 from the stratum for the rest of VA.
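To illustrate the mechanics of the two-stratum random draw described above, the following minimal sketch selects 60 bills from a CPAC stratum and 200 from a rest-of-VA stratum. This is not GAO's actual sampling code; the bill identifiers, stratum sizes, and seed are hypothetical placeholders.

```python
import random

def draw_sample(cpac_bills, other_bills, seed=2007):
    """Draw a two-stratum random sample: 60 CPAC bills, 200 other bills."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {
        "cpac": rng.sample(cpac_bills, 60),
        "rest_of_va": rng.sample(other_bills, 200),
    }

# Synthetic bill IDs standing in for the fiscal year 2007 universe.
cpac = [f"CPAC-{i:05d}" for i in range(5000)]
other = [f"VA-{i:06d}" for i in range(250000)]
sample = draw_sample(cpac, other)
print(len(sample["cpac"]), len(sample["rest_of_va"]))  # 60 200
```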
We designed this sample for control testing with a 5 percent tolerable error rate so that if there were no errors in one of the strata, we would be able to conclude with 95 percent confidence that the billing records for that stratum were statistically compliant. Our random sample of 260 bills included medical centers from each of the 21 VISNs. We used our statistical sample to assess the population of follow-up contacts for receivables greater than $250 that were outstanding for at least 45 days at any point during fiscal year 2007. We explain the results of our statistical sample in terms of control attributes related to adherence to VA policy guidance for (1) performing timely initial and subsequent follow-up, as appropriate, on unpaid amounts, (2) whether accounts receivable personnel properly documented follow-up contacts, and (3) whether accounts receivable staff properly documented reasons why adjustments to decrease billed amounts were made.

We present our statistical results as (1) our projection of the estimated error overall (failure rate) and for each control attribute as point estimates and (2) the 95 percent, two-sided confidence intervals for control failure rates. A 95 percent confidence interval means that if we were to draw 100 different random samples and compute an interval from each, the resulting intervals would contain the true population value about 95 times out of 100. We generally report our statistical results as point estimates that fall within confidence intervals. However, because confidence intervals varied widely for our various control tests, we focused on the lower bound of our confidence intervals as a conservative estimate of our test results. As additional information, we present our detailed test results below, including the point estimate and the upper and lower bounds of our 95 percent confidence intervals. Table 5 shows our detailed test results for timely follow-up on unpaid third-party bills totaling $250 or more. Table 6 shows the detailed results of our tests of VA-wide controls for documenting the details of follow-up contacts made with third-party insurers. Our tests of whether accounts receivable personnel adequately documented reasons for adjustments to decrease billed amounts found a VA-wide failure rate of 44 percent, as shown in table 7.

To assess VA management oversight of third-party billing and collection processes, we interviewed medical center and CPAC management officials at our case study locations and reviewed available data and reports used by these managers. Three of our 10 case study locations with low billing performance and CPAC were also included in our statistical tests of collection follow-up procedures. We also reviewed related VA and VHA policies and available VHA CBO management reports. In addition, we interviewed VHA officials about their oversight procedures, including limitations in systems reporting capabilities.

To follow up on management initiatives undertaken to improve third-party billings and collections, we obtained and reviewed information and interviewed VA managers on the objectives, status, and targeted completion dates for eight major initiatives. We did not evaluate the initiatives or independently assess the information provided by VA officials. However, we evaluated billing controls and tested compliance with controls for accounts receivable follow-up for the medical centers under the CPAC pilot initiative.
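As a rough illustration of the interval estimates described above, the sketch below computes an unweighted, normal-approximation confidence interval for a failure rate, using the 183-of-260 failure count reported for table 8 below, and checks the zero-error logic behind the 60-bill stratum size. This is a simplified sketch only: GAO's actual projections accounted for the stratified design and sampling weights, so its reported intervals differ.

```python
import math

def failure_rate_ci(failures, sample_size, z=1.96):
    """Two-sided 95 percent CI for a proportion, normal approximation."""
    p = failures / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

def zero_error_upper_bound(sample_size, alpha=0.05):
    """Largest true error rate still consistent, at 95 percent
    confidence, with observing zero errors in the sample."""
    return 1.0 - alpha ** (1.0 / sample_size)

# 183 of the 260 sampled bills failed the timely first follow-up test.
p, lo, hi = failure_rate_ci(183, 260)
print(f"point estimate {p:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")

# With 60 bills and no errors, the bound is about 4.9 percent --
# consistent with the 5 percent tolerable error rate described above.
print(f"{zero_error_upper_bound(60):.1%}")
```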
We briefed VA managers at our test locations and VA headquarters, including VA medical center directors, VA headquarters information resource management and property management officials, and VHA's Chief Business Officer on the details of our audit, including our findings and their implications. On April 30, 2008, we requested comments on a draft of this report. We received comments on May 22, 2008, and have summarized those comments in the Agency Comments and Our Evaluation section of this report.

We conducted our audit work from January 2007 through May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 8 lists our actual test results for timely first follow-up on unpaid third-party billings by medical centers within each VISN. As noted at the end of the table, 183 of the 260 bills in our stratified random sample failed this control test.

In addition to the contact named above, Gayle L. Fischer, Assistant Director; F. Abe Dymond, Assistant General Counsel; Carl S. Barden; Deyanna J. Beeler; Francine DelVecchio; Lauren S. Fassler; Jason Kelly; Amanda K. Miller; Matthew L. Wood; and Matthew P. Zaun made key contributions to this report.
GAO previously reported that continuing problems in billing and collection processes at the Department of Veterans Affairs (VA) impaired VA's ability to maximize revenue from private (third-party) insurance companies. VA has undertaken several initiatives to address these weaknesses. GAO was asked to perform a follow-up audit to (1) evaluate VA billing controls, (2) assess VA-wide controls for collections, (3) determine the effectiveness of VA-wide oversight, and (4) provide information on the status of key VA improvement initiatives. GAO performed case study analyses of the third-party billing function, statistically tested controls over collections, and reviewed current oversight policies and procedures. GAO also reviewed and summarized VA information on the status of key management initiatives to enhance third-party revenue. GAO's case study analysis of unbilled patient encounters at 18 medical centers, including 10 medical centers with low billing performance and 8 medical centers under VA's Consolidated Patient Account Centers (CPAC) initiative considered to be high performers, found documentation, coding, and billing errors and inadequate management oversight for approximately $1.7 billion deemed unbillable in fiscal year 2007. Although some medical services are unbillable, such as service-connected treatment, management has not validated reasons for related unbilled amounts of about $1.4 billion to assure that all billable costs are charged to third-party insurers. Because insurers will not accept improperly coded bills and they generally will not pay bills received more than 1 year after the date that medical services were provided, it is important that coding for medical services is accurate and timely. The 10 case study medical centers reported average days to bill ranging from 109 days to 146 days in fiscal year 2007 and significant coding and billing errors and other problems that accounted for over $254 million, or 21 percent, of the $1.2 billion in unbilled medical services costs. Although GAO determined that CPAC officials performed a more thorough review of billings, GAO's analysis of unbilled amounts for the 8 CPAC centers found problems that accounted for $37.5 million, or about 7 percent, of the $508.7 million in unbilled medical services costs. In addition, GAO's VA-wide statistical tests of collections follow-up on unpaid third-party bills of $250 or more identified significant control failures related to timely follow-up and documentation of contacts with third-party insurers on outstanding receivables. VA guidance requires medical center accounts receivable staff to make up to three follow-up contacts, as necessary, on outstanding third-party receivables. GAO's tests identified high failure rates VA-wide as well as for CPAC and non-CPAC medical centers related to the requirement for timely follow up with third-party insurers on unpaid amounts. GAO's tests also found high failure rates associated with the lack of documentation of follow-up contacts. VA lacks policies and procedures and a full range of standardized reports for effective management oversight of VA-wide third-party billing and collection operations. Further, although VA management has undertaken several initiatives to enhance third-party revenue, many of these initiatives are open-ended or will not be implemented for several years. Until these shortcomings are addressed, VA will continue to fall short of its goal to maximize third-party revenue, thereby placing a higher burden on taxpayers.
This testimony discusses the Trade Adjustment Assistance (TAA) for Firms program, which is administered by the Department of Commerce's (Commerce) Economic Development Administration (EDA). Over the past decade U.S. imports have almost doubled, reaching $2.7 trillion in 2011. During the same period, the United States entered into free trade agreements that liberalize trade with 14 partner countries. Further trade liberalization is being pursued, including a Trans-Pacific Partnership among 11 nations in the Asia-Pacific region. Although trade expansion can enhance the economic welfare of all trade partners, many firms and workers experience difficulties adjusting to import competition. Congress has responded to concerns about these difficulties with trade adjustment assistance programs. Established in 1962, the TAA for Firms program provides technical assistance to help trade-impacted, economically distressed firms make adjustments that may enable them to remain competitive in the global economy. In fiscal years 2009 through 2012, EDA received $15.8 million annually for the TAA for Firms program. EDA uses its appropriation for the TAA for Firms program to fund 11 TAA Centers (center), which provide assistance to U.S. manufacturing, production, and service firms in all 50 states, the District of Columbia, and the Commonwealth of Puerto Rico. Congress amended the TAA for Firms program under that part of the American Recovery and Reinvestment Act of 2009 known as the Trade and Globalization Adjustment Assistance Act (TGAAA) of 2009 and mandated that we review the operation and effectiveness of these amendments. This testimony is based on our September 2012 report that examined (1) the results of the legislative changes on program operations and participation, (2) the performance measures and data that EDA uses to evaluate the program and what these tell us about the program's effectiveness, and (3) how program funding is allocated and spent.

First, we found that the four changes mandated by the 2009 legislation contributed to improvements in program operations and increased participation: (1) Creation of director and other full-time positions: The creation of a director and other full-time positions for the program resulted in reduced firm certification processing times for petitions. (2) New annual reporting on performance measures: EDA has submitted three annual reports to Congress on these performance measures as a result of the legislation. (3) Inclusion of service sector firms: According to our analysis of EDA data, the inclusion of service sector firms allowed EDA to certify 26 firms not previously eligible for assistance from fiscal years 2009 through 2011. (4) Expansion of the "look-back" period from 12 months to 12, 24, or 36 months: Our analysis of EDA data shows that 32 additional firms participated in the program from fiscal years 2009 through 2011 based on the expansion of the look-back period. Prior to the legislative changes, firms were only allowed to compare sales and production data in the most recent 12 months to data from the immediately preceding 12-month period.

Second, we found that EDA's performance measures and data collection for the TAA for Firms program provide limited information about the program's outcomes, although our economic analysis found a statistically significant association between participation in the program and an increase in firm sales.
EDA collects data to report on 16 measures to gauge the program's performance, such as the number of firms that inquired about the program and the number of petitions filed, but most of these measures do not assess program outcomes. EDA is exploring better ways to assess the effect of its efforts on firms. Third, in terms of how funds are allocated and spent, we identified a key weakness pertaining to EDA's funding formula. EDA has allocated funding to the 11 TAA Centers using a funding allocation formula that comprises a set of weighted factors; however, the formula does not take into account the potential number of firms in need of the program and differences in costs across the centers. According to a key standard--beneficiary equity--a funding allocation formula should distribute funds according to the needs of respective populations and should take into account the costs of providing program services, so that each service area can provide the same level of services to firms in need.
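The sketch below shows how a weighted-factor allocation of this kind might work and where need and cost factors would enter. The factor names, weights, center data, and dollar figures are hypothetical placeholders; the report does not list EDA's actual formula inputs.

```python
# Hypothetical sketch of a weighted-factor funding allocation.
def allocate(budget, centers, weights):
    """Score each center as a weighted sum of its factors, then split
    the budget in proportion to the scores."""
    scores = {
        name: sum(weights[f] * factors[f] for f in weights)
        for name, factors in centers.items()
    }
    total = sum(scores.values())
    return {name: budget * score / total for name, score in scores.items()}

centers = {
    "Center A": {"petitions": 40, "firms_served": 25},
    "Center B": {"petitions": 20, "firms_served": 35},
}
weights = {"petitions": 0.6, "firms_served": 0.4}

# A beneficiary-equity version would add factors for the number of
# trade-impacted firms in each service area and each center's costs.
print(allocate(15_800_000, centers, weights))
```

Under the beneficiary-equity standard described above, the key design choice is which factors appear in the weighted sum: a formula driven only by workload measures can leave high-need, high-cost service areas underfunded relative to the population of firms they are meant to serve.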
The Drug-Free Communities Act of 1997 established the Drug-Free Communities Support Program. The program's two major goals are to: (1) establish and strengthen collaboration among communities, private nonprofit agencies, and federal, state, local, and tribal governments to support the efforts of community coalitions to prevent and reduce substance abuse among youth; and (2) reduce substance abuse over time among youth and adults by addressing the factors in a community that increase the risk of substance abuse and promoting the factors that minimize the risk of substance abuse.

As authorized by statute, the Director of ONDCP may employ any necessary staff and enter into contracts or agreements with national drug control agencies, including inter-agency agreements to delegate authority for the execution of grants and for such other activities necessary to carry out the program. Since the program's inception in 1997, ONDCP has delegated certain grant administration activities to other agencies through inter-agency agreements. These inter-agency agreements are drafted each fiscal year, reflecting the necessary changes and lessons learned from the previous year, and are put into effect once they are agreed upon and signed by both parties. In fiscal year 2005, ONDCP administered the program with SAMHSA. This inter-agency agreement was the vehicle through which, consistent with the terms of the statutory requirements, ONDCP sought to ensure the proper management of the program. This inter-agency agreement provided ONDCP with an opportunity to set forth the processes and procedures for the award and management of grants.

An Administrator, appointed by the Director of ONDCP, is responsible for carrying out a program to support communities in the development and implementation of plans and programs to prevent and treat substance abuse among youth. The Administrator is responsible for carrying out the program, including setting forth various standards related to the statutory eligibility requirements. From fiscal years 1998 to 2004, the grant program was administered by the Office of Juvenile Justice and Delinquency Prevention Programs, organized under the Department of Justice and headed by the Office of the Assistant Attorney General. From fiscal year 2005 to the present, the grant program has been administered by SAMHSA, organized under the Department of Health and Human Services and headed by its Secretary. See figure 1 for an organizational overview of the program, including the Office of Juvenile Justice and Delinquency Prevention Program's past involvement.

Under the Drug-Free Communities Support Program, grantees receive federal funds each fiscal year and are required to match federal grant funds with non-federal funds including, at the discretion of the Administrator, in-kind contributions for that fiscal year. Grantees may receive funding on a fiscal year basis in two 5-year cycles (although grantees must reapply each fiscal year), for a total of up to 10 years of funding. There are two classes of applicants that a coalition may fall under when applying for funding; each class of applicant has its own set of requirements or characteristics, as shown below. Initial grant applicants are those that (1) are applying for their first grant, (2) have received 5 years of funding and are applying for a sixth year, or (3) have had a lapse in their funding in the previous fiscal year.
Renewal grant applicants have received an award in the previous year and are applying each fiscal year for funding for years 2 through 5 or years 7 through 10. Renewal grant applicants generally do not compete with other applicants for funds. Each fiscal year, initial grant applicants submit their applications for review. Those applications deemed eligible are then forwarded on for peer review. Peer reviewers are external experts who examine applications and score them on the basis of several areas, such as the extent to which a coalition demonstrates effective strategic planning and implementation. These scores, which range from 0 (lowest) to 100 (highest), are used to assess the applicant. ONDCP then determines which coalitions will receive funding, generally awarding grants from the highest peer review score down until all of the funding has been used. Each fiscal year, renewal grant applicants submit abbreviated applications, which include a budget and work plan, to SAMHSA. SAMHSA reviews these applications to gather required information on the grant applicant's progress. Then, ONDCP determines whether to continue federal grant support.

The 2005 and 2006 grant award process of ONDCP and SAMHSA did not adhere to standards for internal control in the federal government, statutory requirements, and leading practices for collaborating agencies. The process did not adhere to certain federal internal control standards because ONDCP lacked mechanisms for monitoring SAMHSA and ensuring that application reviews were fully documented. Furthermore, ONDCP instituted procedures for screening grant applications that did not ensure that all renewal grantees met statutory eligibility requirements. Finally, ONDCP and SAMHSA experienced collaboration challenges, such as a lack of fully defined roles and responsibilities and procedures for conducting eligibility screening, which hampered their management of the grant-making process.

Standards for internal control in the federal government are essential for proper stewardship of public resources because they help ensure accountability and minimize operational problems. Internal controls over monitoring and oversight that operate as intended could provide assurance that SAMHSA is conducting its activities in accordance with the inter-agency agreement signed by both agencies. In addition, in managing the Drug-Free Communities Support Program, adequate internal controls, such as ensuring proper documentation of eligibility screening activities, are key to providing accountability in the process. According to internal control standards, management should provide ongoing monitoring of performance. Neither the inter-agency agreement between ONDCP and SAMHSA nor other documentation associated with the grant-making process defined how ONDCP would oversee SAMHSA in conducting its monitoring responsibilities. In July 2006, we reported that in using inter-agency agreements, the issuing agency, among other things, should clearly define roles and responsibilities for conducting monitoring and oversight. However, the inter-agency agreement for fiscal year 2005 did not articulate such specific roles and responsibilities for each of the two agencies. The statute states that ONDCP may enter into inter-agency agreements with other national drug control agencies to delegate authority for the execution of grants and for such other activities necessary to carry out the program.
As reflected in the legislative history of the Drug-Free Communities Act of 1997, the ONDCP Administrator of the program would use the terms of the inter-agency agreement to oversee the program and ensure that it is operated and grants are awarded in accordance with the policies and criteria established for the program. However, our review of the inter-agency agreement found no references about how ONDCP would monitor SAMHSA in administering the grant-making process or a description of roles related to these activities. Moreover, we found that no specific policies and procedures for monitoring the program were established before the grant process had begun. Furthermore, ONDCP officials told us that while the inter-agency agreement was intended to serve as the document for the monitoring, oversight, and management of the program, they acknowledged that this was not the case in fiscal year 2005, though officials told us that staff from both agencies met periodically to review the grant program process. ONDCP officials acknowledged, however, that they did not regularly monitor and oversee the grant program because ONDCP expected SAMHSA to carry out its duties as specified in the inter-agency agreement and the funding announcement. Because ONDCP did not conduct ongoing monitoring, the agency increased the risk that it could not provide reasonable assurance that SAMHSA was conducting grant activities, such as eligibility screening, according to ONDCP's expectations.

Standards for Internal Control in the Federal Government call for transactions and other significant events to be clearly documented and all documentation to be properly managed and maintained. However, we found that such documentation was not consistently maintained for reviews of grant applications for statutory eligibility. Specifically, ONDCP and SAMHSA did not provide us documentation on whether fiscal year 2005 renewal grant applicants were screened for statutory eligibility, as we requested. As a result, we asked 10 SAMHSA project officers whether they conducted the screening, of whom 5 reported that they screened renewal grant applicants for eligibility in fiscal year 2005. We also reviewed 66 grant application files, covering fiscal years 2005 and 2006, for funded initial and renewal grantees to determine whether the files contained documentary evidence that screening for statutory eligibility had occurred. Documentary evidence was missing from 47 of the 66 grant files (71 percent) we reviewed. Specifically, for fiscal year 2005, files for 21 funded renewal grant applicants and 1 funded initial applicant contained no documentation. For fiscal year 2006, 25 files for funded initial grant applicants were missing documentary evidence that eligibility screening took place. While our review cannot be generalized to all grant files for fiscal years 2005 and 2006, the lack of documentation in most of the grant files we reviewed indicates increased risk that neither ONDCP nor SAMHSA could provide reasonable assurance that all funded grant applicants were screened for eligibility. In addition, the screening sheets used by agency officials to determine whether an applicant was an eligible coalition did not include all of the statutory eligibility criteria (see fig. 2 for our summary of the statutory eligibility criteria).
Specifically, our review of an example of eligibility screening sheets used for fiscal years 2005 and 2006, and our file review of 19 initial grantee applications for fiscal year 2005 where eligibility screening sheets were present, found that the eligibility criteria delineated in the screening sheets used by ONDCP and SAMHSA officials to determine whether an applicant was an eligible coalition omitted some of the statutory eligibility criteria. For example, the screening sheet did not capture whether an applicant had described and documented the nature and extent of the substance abuse problem in the community; described the substance abuse prevention and treatment programs and activities; developed a strategic plan to reduce substance abuse among youth in a comprehensive and long-term fashion; and worked to develop a consensus regarding the priorities of the community to combat substance abuse among youth as required by statute. Nor did the screening sheet examine whether a coalition had established a system to measure and report outcomes consistent with common indicators and evaluation protocols established and approved by the Administrator. Without having all of the statutory eligibility criteria on the screening sheet, ONDCP increased its risk that all statutory eligibility criteria were not assessed and met and that all funded grant applicants were not statutorily eligible to receive either an initial or renewal grant.

For the Drug-Free Communities Support Program, by statute, a coalition must meet each of the statutory eligibility criteria each fiscal year to be eligible to receive an initial grant or a renewal grant. ONDCP implemented a separate screening process, not described in the funding announcement, for initial and renewal grant applicants 1 month before the fiscal year 2005 grant awards were to be announced, including the introduction of a criterion that grant applicants could not propose to use over a certain percentage of grant funds for direct services. Direct services are used to provide a distinct and ongoing service or activity for an individual or group of individuals, such as prevention programs. ONDCP officials told us that they instituted this screening process because SAMHSA did not produce evidence that all applications had been screened for statutory eligibility in fiscal year 2005 and to ensure that funds were awarded only to eligible coalitions, as required by statute. ONDCP officials told us that because of time constraints, ONDCP did not review all of the approximately 990 initial and renewal grant applicants to determine if they met all of the statutory eligibility criteria. Instead, ONDCP officials said that they established a separate screening process that would enable the agency to more carefully scrutinize initial or renewal grant applicants who met one or more of the three criteria, as described in table 1. The separate screening process that ONDCP officials said they used for the Drug-Free Communities Support Program in fiscal year 2005 differed from the process for federal grant programs in general, as shown in figure 3. As a result of implementing its new screening process 1 month before the fiscal year 2005 grants were to be announced, ONDCP officials reported that they did not screen all initial and renewal grant applicants for statutory eligibility.

[Figure 3 decision points: Does the application meet screening criteria? Does the applicant meet eligibility criteria?]
The criteria and procedures ONDCP officials told us they implemented for the separate screening process differed for initial and renewal grant applicants. One key difference was the direct services threshold applied—20 percent or more of funds could not be used for direct services for initial grant applicants and 40 percent or more of funds could not be used for direct services for renewal grant applicants. Table 1 describes the different screening process and criteria ONDCP officials said they used for initial and renewal grant applicants in fiscal year 2005. In addition, ONDCP officials reported different outcomes for its separate screening process for initial and renewal grant applicants. For the initial grant applicants, ONDCP officials reported that all of the approximately 180 that were funded met the statutory eligibility requirements. However, ONDCP officials acknowledged that most, or about 515 of the approximately 600 (about 86 percent), renewal grant applicants were funded in fiscal year 2005 without ONDCP or SAMHSA ensuring that these grantees satisfied the statutory eligibility criteria. The results of ONDCP's review are shown in figure 4.

[Figure 4 data: 180 applicants were screened for the statutory eligibility criteria, satisfied them, and were awarded grants; 85 applicants were screened, of which about 55 were not eligible for grants and about 30 satisfied the criteria and were awarded grants; 515 applicants were not screened for the statutory eligibility criteria but were awarded grants.]

For the fiscal year 2006 screening process, ONDCP officials said they applied the direct services criterion differently than in fiscal year 2005. The direct services criterion was also described explicitly as an additional eligibility criterion to be applied (that is, an eligibility criterion apart from the statutory eligibility criteria) in the 2006 funding announcement. For fiscal year 2006, for initial grant applicants only, the 20 percent direct services threshold was explicitly described in both the inter-agency agreement and in the fiscal year 2006 funding announcement. The inter-agency agreement between ONDCP and SAMHSA stated that "ONDCP shall review applications for compliance with the 20 percent direct services policy." The funding announcement provided to initial grant applicants stated that "No more than 20 percent of grant funds may be used for direct services." The funding announcement also stated that if initial grant applicants did not meet this eligibility requirement, their applications would not be forwarded for peer review. While ONDCP clarified the funding announcement for grant applicants in fiscal year 2006, ONDCP did not screen renewal grant applicants for the statutory eligibility criteria in fiscal year 2006 because, according to ONDCP officials, these applicants were already considered to be statutorily eligible for grant funds. ONDCP officials told us that they believed that the initial eligibility screening conducted in a previous year was sufficient for the 4 remaining fiscal years. However, the Drug-Free Communities Support Program's statutory framework requires that all coalitions meet each of the statutory eligibility requirements each fiscal year to be eligible to receive an initial grant or a renewal grant.
As a result of ONDCP's policy in fiscal year 2006, all of the renewal grant applicants that received renewal grants did so without ONDCP determining whether these applicants satisfied the statutory eligibility criteria for that fiscal year. ONDCP and SAMHSA experienced collaboration challenges, which contributed to the irregularities we identified for fiscal years 2005 and 2006. In October 2005, we identified leading practices that can be used to help enhance and sustain collaboration in instances where multiple agencies have shared responsibility for a program or function. These practices include, among other things, establishing mutually reinforcing or joint strategies, agreeing on roles and responsibilities, and establishing compatible policies, procedures, and other means to operate across agency boundaries. Lack of fully defined roles and responsibilities and documented procedures to follow for eligibility screening hampered the efforts of ONDCP and SAMHSA to effectively manage the grant-making process. As we reported in our prior work, collaborating agencies need to establish strategies that work in concert with those of their partners, which can help in aligning partner agencies' activities, core processes, and resources to accomplish a common outcome. In fiscal years 2005 and 2006, ONDCP and SAMHSA signed inter-agency agreements, which were intended to provide a strategy for managing the grant-making process. Further, as reflected in the legislative history of the Drug-Free Communities Act of 1997, the Administrator of the program would use the terms of the inter-agency agreement to oversee the program and ensure that it is operated and grants are awarded in accordance with the policies and criteria established for the program. In June 2006, we reported that the use of inter-agency agreements requires, among other things, that the issuing agency define roles and responsibilities for managing the program. However, the inter-agency agreement for fiscal year 2005 did not fully articulate roles and responsibilities for ONDCP's management of SAMHSA, such as the role of SAMHSA in screening renewal grant applicants for eligibility, and for the grant program overall. Therefore, in fiscal year 2005, confusion occurred over the extent to which eligibility screening had taken place, resulting in ONDCP implementing its separate screening process. Furthermore, procedures governing the grant-making process were not fully documented for the two agencies, as called for by internal control standards for the federal government, to operationalize SAMHSA's tasks. The inter-agency agreement for fiscal year 2005 states that SAMHSA is responsible for tasks in several broad areas, including issuing the notice of funding, the application renewal process for renewal grantees, and making decisions jointly with ONDCP regarding the selection of grantees and the awarding of initial and renewal grants. However, the inter-agency agreement did not specify what guidance SAMHSA was to follow with respect to screening renewal grant applicants for eligibility. For example, in the absence of detailed information on how ONDCP and SAMHSA would screen initial and renewal grant applicants, SAMHSA officials said they relied on established HHS grant guidance to determine how and when activities such as eligibility screening were to be performed.
The inter-agency agreement also did not discuss how ONDCP and SAMHSA would agree upon the policies to be followed in the grant-making process and how those policies would be documented. In addition, ONDCP officials reported that they believed that SAMHSA had agreed to conduct the eligibility screening, and when SAMHSA did not produce its screening sheets, ONDCP officials said they could not be certain that SAMHSA had conducted the eligibility screening. Without formal, written policies and procedures, both agencies experienced confusion over what steps would be followed to manage the program. Since fiscal year 2006, ONDCP has taken steps to strengthen its management of the grant-making process by establishing senior-level management groups to address collaboration and monitoring issues, eliminating its use of the direct services eligibility criterion in response to reauthorizing legislation, and clarifying grant program roles and responsibilities. However, effective oversight is still lacking because ONDCP has neither developed nor documented its approach to monitoring and overseeing the program as a whole. Furthermore, roles and responsibilities for managing the program are still not fully defined, including the appropriate role of the program Administrator. The ONDCP Reauthorization Act of 2006 prohibited the Director of ONDCP from imposing any eligibility criteria on initial applicants or renewal grantees not included in the statute. In response, ONDCP changed certain procedures for fiscal year 2007 but has not changed its procedures to ensure that renewal grant applicants are screened for statutory eligibility. In response to our inquiries about what steps have been taken to monitor and oversee the grant program since fiscal year 2005, senior ONDCP officials told us that they have taken action to clarify monitoring roles and responsibilities and improve related documentation. Nevertheless, ONDCP has still neither developed nor documented an overall approach to monitoring and oversight. According to ONDCP officials, to address long-term strategic issues involving, among other things, monitoring activities, various senior-level management groups have been established, at least one of which meets every 4 to 6 weeks to address collaboration and monitoring issues. These officials said that no charter or mission statements were developed for these meetings, and officials were unable to provide us with information on any specific outcomes or policy changes that have occurred to enhance oversight of the program. The fiscal year 2007 inter-agency agreement includes a brief statement that SAMHSA is to provide a monthly report on high-risk grantees to ONDCP for, in their view, "the effective oversight" of the program, but no other program monitoring and oversight information is specifically discussed. Without defined oversight activities for ensuring successful completion of the work across all activities, ONDCP lacks reasonable assurance that required tasks are being performed in accordance with its directives. ONDCP officials acknowledged that they were aware that grant file documentation was missing during fiscal years 2005 and 2006. However, as of fiscal year 2007, ONDCP had not yet put in place mechanisms to ensure that documentation of eligibility screening was included in grant files, as called for by internal control standards.
Nor had ONDCP amended the eligibility screening sheets for fiscal year 2007 to capture all the statutory eligibility criteria. Without ensuring that screening sheets contain all the required statutory eligibility criteria, ONDCP increased the risk of awarding grants to applicants that were not statutorily eligible. ONDCP and SAMHSA have taken steps to clarify their respective roles and responsibilities for handling program administration and coordination issues in their fiscal year 2007 inter-agency agreement. For example, whereas the inter-agency agreement for fiscal year 2005 did not delineate roles and responsibilities for administering the grant-making process, the inter-agency agreement written for fiscal year 2007 provides additional information. Specifically, the fiscal year 2007 agreement includes a new attachment describing ONDCP's role and responsibilities for the program. The fiscal year 2007 inter-agency agreement states, for example, that ONDCP will convene "cooperative partners" meetings with the Center for Substance Abuse Prevention to enhance program coordination and collaboration. Further, the inter-agency agreement states that SAMHSA would provide certain services in connection with the awarding of grants in accordance with policies and procedures agreed upon by ONDCP and SAMHSA. However, not all of the policies and procedures that ONDCP and SAMHSA agreed to are documented in the inter-agency agreement or elsewhere. For example, the agreement states that ONDCP and SAMHSA shall agree on terms of the programmatic and budget review prior to reviewing renewal applications and that ONDCP shall make final funding decisions for these grants. However, the fiscal year 2007 agreement does not specify what the programmatic and budget review processes are, when they should be completed, or by which agency. While efforts to clarify how the grant program is to be administered and the designation of the Director of ONDCP as responsible for making final funding decisions represent steps in the right direction, the Administrator's role is still not specified in the inter-agency agreement. As noted earlier, the Administrator is defined by statute as the entity that generally carries out the grant program. While the fiscal year 2007 agreement states that the Director of ONDCP will make final funding decisions, the role of the Administrator is not specifically identified, described, or defined in the inter-agency agreement. Leading practices for collaborating agencies state that agreement on roles and responsibilities is a necessary element for a collaborative working relationship. Because the Administrator is responsible for generally carrying out the grant program, it is particularly important that the agencies agree upon and document what this leadership role entails. Without doing so, confusion over managing the program could continue to occur. In response to congressional concerns and complaints from renewal grant applicants that were not funded in fiscal year 2005, the ONDCP Reauthorization Act of 2006 prohibited the Director of ONDCP from imposing any eligibility criteria on initial applicants or renewal grantees not included in the statute. ONDCP officials said that, as a result of the prohibition in the reauthorizing act, they eliminated their use of the 20 percent direct services eligibility criterion for fiscal year 2007.
Moreover, the fiscal year 2007 inter-agency agreement no longer contains the 20 percent direct services criterion, consistent with the reauthorization act. Instead, ONDCP officials stated that they take direct services into account as part of the overall evaluation of an application during the peer review process. Nonetheless, ONDCP still does not screen renewal grant applicants for statutory eligibility. As stated earlier in this report, by statute, to be eligible to receive a renewal grant, a coalition shall meet all of the statutory eligibility criteria. However, ONDCP officials stated that they consider the initial screening they conduct to be sufficient for the remaining 4 years for renewal grantees. In fiscal year 2007, renewal grant applicants were not required to submit any supporting documentation (e.g., proof of eligible coalition members) to verify that they met all the statutory eligibility criteria, nor were they screened for statutory eligibility. ONDCP's approach does not take into account that a coalition's eligibility status could change from one fiscal year to another if, for example, representatives from the required sectors who served in fiscal year 2006 left the coalition in fiscal year 2007 and were not replaced. ONDCP's decision to not screen renewal grant applicants for eligibility in fiscal year 2007 raises questions about whether the agency can provide reasonable assurance that participating coalitions remain statutorily eligible. ONDCP officials told us that the eligibility criteria that were missing from the eligibility screening sheets (e.g., whether an applicant had described and documented the nature and extent of the substance abuse problem in the community) are not used during the eligibility process, but rather are considered during the peer review process. However, these criteria are intended to be used to determine statutory eligibility for grant funds. According to ONDCP's practices, peer reviewers do not typically determine eligibility; they assess the strength of applications that have already been deemed eligible. Without ensuring that all statutory eligibility criteria were met, ONDCP cannot be certain that all funded grant applicants were eligible. For fiscal years 2005 and 2006, ONDCP and SAMHSA did not adhere to key internal control standards for the federal government and did not meet all statutory requirements in administering the Drug-Free Communities Support grant program. ONDCP has been unable to show that only eligible coalitions received grants in accordance with the Drug-Free Communities Support Program's statutory framework. In particular, because ONDCP has decided not to conduct eligibility screening each fiscal year for renewal grant applicants, it is unable to ensure that these coalitions remain eligible throughout their participation in the program, consistent with statutory requirements. Without well-functioning internal controls, such as maintaining grant documentation and conducting ongoing monitoring, ONDCP cannot demonstrate consistent implementation of the key steps required in the grant-making process—notably, the screening of grantees to determine their eligibility—to Congress, grant-seeking applicants, and the public at large. Such lack of assurance raises questions about whether public resources are properly safeguarded.
ONDCP took steps following fiscal year 2006 to enhance its administration of the grant program, such as informing applicants of changes in its eligibility screening processes and revising its procedures to address internal control and other deficiencies. However, ongoing weaknesses in the monitoring and oversight of the program mean that ONDCP may not be able to ensure that all appropriate guidance and policies are being followed or communicated to grant applicants. And while ONDCP and SAMHSA have made an effort to define roles and responsibilities in the fiscal year 2007 inter-agency agreement, important functions, such as the leadership role of the Administrator, remain unaddressed. Without fully defined and agreed-upon roles and responsibilities, the agencies may not be able to avoid miscommunication and lack of collaboration over the grant-making process in the future. To strengthen its administration, oversight, and internal controls for the Drug-Free Communities Support Program, we recommend that the Director of the Office of National Drug Control Policy take the following three actions: 1. Develop and document its approach to monitoring and overseeing SAMHSA and the program as a whole. 2. Ensure that the coalitions receiving an initial grant or a renewal grant satisfy all of the statutory eligibility criteria for each fiscal year and that this is fully documented. 3. Fully define the roles and responsibilities of SAMHSA and ONDCP, including those of the Drug-Free Communities Support Program Administrator, in the inter-agency agreement prepared for each fiscal year. We provided a draft of this report to ONDCP, HHS, and DOJ for review and comment. ONDCP and HHS provided written comments, which are summarized below and included in their entirety in appendixes III and IV, respectively. In addition, ONDCP and DOJ provided technical comments, which we incorporated as appropriate. In commenting on the draft report, the Director of ONDCP described the efforts it has underway or planned to address our recommendations. Although these actions are intended to strengthen the management of the grant review process, based on the ONDCP Director's response, additional efforts could help ensure that our recommendations are fully implemented, as discussed below. Regarding our first recommendation on developing and documenting an approach to program monitoring and oversight, the Director noted that ONDCP has added more detail to the inter-agency agreement on the roles and responsibilities of ONDCP and SAMHSA and created a policy manual for the program, which was implemented in 2007. However, our analysis of ONDCP's policy manual showed that it does not include information on whether and how ONDCP will conduct oversight and monitoring. The Director also stated that ONDCP has documentation of minutes and activities that take place at their monthly interagency management meetings. During our review, we repeatedly requested these minutes, but ONDCP officials said that no formal minutes were developed or maintained and did not provide them. However, we have modified the report to delete the statement that minutes were not available. We continue to believe that this recommendation remains valid because, without defined oversight activities for ensuring successful completion of the work across all activities, ONDCP cannot provide reasonable assurance that required tasks are being performed in accordance with its directives.
Concerning our second recommendation that ONDCP ensure that the coalitions receiving an initial grant or a renewal grant satisfy all of the statutory eligibility criteria for each fiscal year and that these decisions are fully documented, the ONDCP Director noted that the agency has taken steps to ensure that all renewal applicants are screened for eligibility, consistent with our recommendation, and that documentation related to screening applicants is maintained in grant files. However, the ONDCP Director took issue with our position that all statutory eligibility requirements should be included on the screening sheets used to document application reviews. The Director said that the initial joint review by ONDCP and SAMHSA staff, combined with the application review and scoring process conducted during peer reviews, effectively ensures that applications recommended for funding meet statutory eligibility requirements. However, we continue to maintain that including all statutory eligibility requirements on the screening sheets could increase ONDCP's assurance that all funded applicants are statutorily eligible and that these decisions are fully documented. Regarding the third recommendation on clearly defining roles and responsibilities, the ONDCP Director noted that ONDCP plans to add additional details to the inter-agency agreement on the role of the Drug-Free Communities Support Program Administrator. The ONDCP Director commented on some of the data we presented in this report. Specifically, the Director expressed concern that we were unable to accurately identify the number of applications for fiscal years 2005 and 2006, noting that ONDCP had provided us with spreadsheets that identified the number of applications received and their disposition. During our review, we worked continuously with ONDCP officials in an effort to obtain accurate data on the number of applications received. Nonetheless, we were unable to do so because, despite our numerous attempts, we could not resolve inconsistencies in the information ONDCP provided over the course of our review, causing us to question its accuracy. However, we believe that the numbers we present on grant applications are sufficient to illustrate orders of magnitude. We revised our scope and methodology discussion in appendix I to explain the efforts we took to obtain the most accurate information available from ONDCP. Moreover, the Director took issue with our presentation of ONDCP's data in figure 4, which showed that ONDCP excluded approximately 190 initial applicants in fiscal year 2005 from the possibility of being funded without being screened for statutory eligibility. He indicated that screening those applicants would have been an unnecessary expenditure of taxpayer dollars because their applications fell below the funding threshold (the range of peer review scores for which applicants are awarded funding). We agree that taxpayer dollars should not be expended unnecessarily. However, we disagree that figure 4 conveys the point the Director is asserting; rather, our analysis of ONDCP's data shows that some of these approximately 190 initial applicants had peer review scores above the funding threshold and were excluded on the basis of one or more of the other criteria in the separate screening process. The Assistant Secretary for Legislation at HHS also commented on a draft of this report.
Specifically, he described actions that SAMHSA has taken, in partnership with ONDCP, to strengthen internal controls and management of the DFC grant program. For example, the Assistant Secretary said that program information has been consolidated into the SAMHSA Drug-Free Communities Support Program Operations Manual, SAMHSA and ONDCP officials are meeting monthly to enhance inter-agency coordination, and SAMHSA and ONDCP are jointly screening applications for statutory eligibility, with SAMHSA retaining the screening checklists. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Director of the Office of National Drug Control Policy, the Secretary of the Department of Health and Human Services, the Attorney General of the Department of Justice, and interested congressional committees. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report or would like to discuss it further, please contact me at (202) 512-6806 or by email at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To examine the extent to which the Office of National Drug Control Policy (ONDCP) and the Substance Abuse and Mental Health Services Administration (SAMHSA) conducted their screening and grant-related activities for the Drug-Free Communities Support Program in accordance with standards for internal control in the federal government, established laws, and leading practices for collaborating agencies, we reviewed available program documents, including, but not limited to, the Request for Applications funding announcements, the inter-agency agreements between ONDCP and SAMHSA, and the documented outcomes of specific review activities and recommendations for grant funding decisions. We compared ONDCP's and SAMHSA's grant-related activities in fiscal years 2005 through 2007 with criteria in GAO's Standards for Internal Control in the Federal Government. These standards, issued pursuant to the requirements of the Federal Managers' Financial Integrity Act of 1982 (FMFIA), provide the overall framework for establishing and maintaining internal control in the federal government. Also pursuant to FMFIA, the Office of Management and Budget (OMB) issued Circular A-123, revised December 21, 2004, to provide the specific requirements for assessing and reporting on internal controls. The internal control requirements and the definition of internal control in OMB Circular A-123 are based on GAO's Standards for Internal Control in the Federal Government. We also considered our prior work on results-oriented government related to leading practices for federal collaboration as well as leading practices for awarding grants. In addition, we compared ONDCP's and SAMHSA's grant-related activities in fiscal years 2005 through 2007 with statutory criteria included in the Drug-Free Communities Act of 1997, the Drug-Free Communities Support Program Reauthorization Act of 2001, and the Office of National Drug Control Policy Reauthorization Act of 2006.
Further, to determine whether ONDCP and SAMHSA had documented eligibility screening, we reviewed grant applications for fiscal years 2005 and 2006. We did not review all available applications because doing so was not practical. Instead, we attempted to select probability samples of applications to review; this would have allowed us to generalize the results to all of the fiscal year 2005 and 2006 applications (about 990 and over 700, respectively, for a total of about 1,690). However, because ONDCP and SAMHSA were unable to provide accurate counts of the total numbers of applications in either year, we were unable to do so. We made numerous attempts to resolve inconsistencies in the information ONDCP provided over the course of our review, which caused us to question its accuracy. Consequently, we reviewed systematic random samples of available funded and unfunded applications for these years. For fiscal year 2005, we reviewed 20 funded initial applications, 21 funded renewal applications, 30 unfunded initial applications, and 20 unfunded renewal applications. For fiscal year 2006, we reviewed 25 funded initial applications and 10 unfunded initial applications. Although the applications we reviewed for both fiscal years were randomly selected, they were not representative of all applications in either year, so we cannot generalize the results to the larger populations of fiscal year 2005 or 2006 applicants. Even so, our review of these 126 grant applications provided us with perspective on how ONDCP and SAMHSA handled these applications. Additionally, because accurate counts of the total numbers of applications in fiscal years 2005 and 2006 were not available, we provide approximate numbers of applications to illustrate orders of magnitude. In addition to our review of documents and grant files described above, we interviewed key staff at ONDCP and SAMHSA about how they conducted and documented grant-related activities since 2006. While our discussions with the agency officials from ONDCP and SAMHSA focused on the agencies' review methods and funding decisions implemented in fiscal years 2005 and 2006, we also obtained information for fiscal year 2007 wherever possible to provide the most current information on the program. We did not review the fiscal year 2008 grant-making process because the process was underway during our review and the results of the process were not yet available. We conducted this performance audit from July 2006 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents a timeline of key events for the Drug-Free Communities Support Program in fiscal year 2005. In addition to the contact named above, key contributors to this report were Glenn G. Davis, Assistant Director, Lisa G. Shibata, David P. Alexander, Duren Banks, Amy Bernstein, Randall J. Cole, Willie Commons III, Daniel S. Kaneshiro, Alison Martin, Linda S. Miller, Jan B. Montgomery, Raymond J. Rodriguez, and Adam Vogt.
Twenty-five percent of American students ages 13-17 reported using illicit drugs in 2007. The Drug-Free Communities Support Program provides grants to community coalitions involved in reducing youth substance abuse. The Office of National Drug Control Policy (ONDCP) administers the program. ONDCP selected the Substance Abuse and Mental Health Services Administration (SAMHSA) to operate the grant program in fiscal year 2005. In 2005, ONDCP did not award grants to some coalitions that had previously received grant funds (renewal grantees). GAO was asked to assess (1) the extent to which ONDCP and SAMHSA administered grant-related activities for fiscal years 2005 and 2006 consistent with federal internal control standards, statutory requirements, and other guidance and (2) the steps ONDCP has taken since 2006 regarding its administration of grant-related activities. GAO analyzed and compared program documents and grant activities to established guidance, such as federal internal control standards and statutory requirements, and interviewed key program management officials. In fiscal years 2005 and 2006, ONDCP and SAMHSA did not always adhere to applicable federal internal control standards, statutory requirements, and other guidance during the grant-making process. Standards for internal control in the federal government call for agencies to conduct ongoing monitoring of a program's performance, but ONDCP did not conduct such monitoring of SAMHSA or the program overall. Thus, ONDCP increased its risk of not providing reasonable assurance that SAMHSA conducted grant activities, such as eligibility screening. Internal control standards also require that agencies maintain documentation that grant applicants met eligibility requirements each fiscal year. While SAMHSA officials said that they screened all renewal grantees for eligibility in 2005 and ONDCP officials said they screened all initial grantees in 2006, documentation indicating that such screening had occurred was missing from 47 of the 66 grantee files GAO reviewed. ONDCP also lacked a process to ensure that all renewal applicants met statutory eligibility requirements. For example, ONDCP used a separate screening process in fiscal year 2005 that included a criterion that grantees limit funding for direct services, such as enrolling individuals in a drug prevention program. Only renewal grant applicants that met this or one of two other criteria underwent further screening for statutory eligibility. As a result, ONDCP funded about 86 percent of renewal grantees in 2005 without ensuring that they met the statutory eligibility criteria. Leading practices for collaborating agencies call for strategies to ensure common outcomes. However, the inter-agency agreement between ONDCP and SAMHSA did not fully define roles and responsibilities and lacked specific guidance to SAMHSA on eligibility screening. As a result, confusion occurred over issues such as the eligibility criteria to apply, hampering the two agencies in their efforts to effectively manage the grant-making process. Since 2006, ONDCP has addressed some of the issues described above by (1) clarifying its role for the program in its 2007 agreement with SAMHSA, (2) establishing management groups to address monitoring issues, and (3) eliminating its use of the direct services eligibility criterion. However, some internal control and other challenges remain.
For example, ONDCP has not yet put a mechanism in place to ensure that documentation confirming eligibility is maintained in the grant files. ONDCP also has not documented its approach to overseeing SAMHSA and the program. Without defined oversight activities for ensuring completion of the work, ONDCP lacks reasonable assurance that required tasks are being performed in accordance with management's directives. Also, roles and responsibilities for key elements of grant administration remain largely undefined in that the agencies have not clarified certain services SAMHSA is to provide related to awarding grants or the role of the program Administrator. Without defining these roles, confusion over the steps to follow in managing the program could continue to occur. Finally, as in 2006, ONDCP officials told GAO that they did not screen renewal grant applicants for eligibility in 2007 because, in their view, the screening that applicants undergo when they first receive a grant is sufficient.
DOE laboratories have primarily used the following types of agreements to transfer technology to U.S. businesses and other organizations: CRADAs: A DOE laboratory and its nonfederal partner(s) agree that their scientists will collaborate on a research project of mutual interest and consistent with the laboratory's mission. Both parties may contribute personnel, services, and property to the CRADA project, and the partner(s) can provide funding for the laboratory's research. However, the DOE laboratory cannot provide funding to the partner(s). Intellectual property rights to technology developed under the CRADA are negotiated in advance. In general, the inventing partner retains ownership rights, while the other partner receives appropriate licensing rights. Technical assistance for small businesses: Both NNSA's and the Office of Science's laboratories used dedicated funds (provided by the Technology Partnership Program and the Laboratory Technology Research Program, respectively) to provide technical assistance to small businesses. Work-for-others agreements: A DOE laboratory agrees to conduct a defined scope of work or list of tasks that is consistent with DOE missions and does not place the laboratory in direct competition with the private sector. The nonfederal entity pays the entire cost of the project. While intellectual property rights are negotiable, the nonfederal entity typically retains title rights to any inventions. Technology licensing agreements: A DOE laboratory grants a business an exclusive or nonexclusive license to use its intellectual property in return for a licensing fee and/or royalties. User facility agreements: A DOE laboratory permits outside organizations to use its unique research equipment and/or facilities to conduct research. For nonproprietary research, almost all of the users are supported by federal grants, typically through the National Science Foundation or DOE. For proprietary research, the private organization pays the full cost of using research equipment or facilities and retains title rights to any intellectual property. Table 1 shows the dedicated funding that the Congress has made available for technology partnerships through the Technology Partnership Program for NNSA's laboratories and weapons production facilities and the Laboratory Technology Research Program for DOE's Office of Science laboratories. The Technology Partnership Program, which provided funding for DOE's nuclear weapons laboratories and production facilities, peaked at $214 million in fiscal year 1996 and was subsequently phased out by fiscal year 2001. The Laboratory Technology Research Program, which provided funding for DOE's Office of Science laboratories, also declined, from a peak of $47 million in fiscal year 1995 to $3 million in fiscal year 2002. DOE requested $3 million for the Laboratory Technology Research Program for fiscal year 2003 and has announced that it will terminate this program once previously approved projects have been funded. In the early 1990s, DOE created the Office of Research and Development Management within the Office of the Under Secretary to promote and oversee technology transfer at DOE's laboratories and production facilities. In March 1996, at the direction of the Congress, DOE disestablished this office and eliminated all of its staff positions. Subsequently, in 1999, DOE established a Technology Transfer Working Group, composed of representatives from 25 DOE organizations, to oversee and coordinate technology transfer policies.
The working group has no permanent staff positions. The 12 DOE laboratories surveyed have substantially reduced their participation in CRADAs and technical assistance to small businesses in recent years, primarily because DOE research program funding has not replaced dedicated funding for technology partnerships. On the other hand, the number of work-for-others agreements, technology licenses, and user facility agreements has increased during the past 10 years. (See tables 5 and 6 in app. I for data on each laboratory's technology transfer activities and nonfederal entities' financial support.) Finally, two laboratories have identified non-DOE sources to support their efforts to provide local small businesses with technical assistance. Table 2 shows that active CRADAs at DOE laboratories—which peaked at 1,111 in fiscal year 1996—dropped by more than 40 percent to 606 in fiscal year 2001. In particular, CRADAs that continued from the prior year dropped from 861 in fiscal year 1996 to 440 in fiscal year 2001. Much of this decline occurred in fiscal year 2000, when 360 CRADA projects ended. (See table 7 in app. I for each laboratory's newly executed and continuing CRADAs.) The initial growth and subsequent decline in CRADAs over the past 10 years mirror the change in DOE's dedicated funding for technology partnerships through NNSA's Technology Partnership Program and the Office of Science's Laboratory Technology Research Program. Since the fiscal year 1996 peak, the drop in CRADAs has been greatest at the laboratories for which dedicated funding constituted a substantial share of partnership funding. For example, from fiscal year 1996 through fiscal year 2001, the number of new CRADAs dropped from 12 to 7 and total active CRADAs dropped from 55 to 30 at the Office of Science's Lawrence Berkeley National Laboratory. The Laboratory Technology Research Program was the DOE source of funding for 68 percent of these CRADAs. The termination of Technology Partnership Program funding resulted in more than a 60-percent drop in active CRADAs at NNSA laboratories. According to technology transfer managers at the DOE laboratories we visited, their laboratories are likely to have fewer CRADAs in the future because of DOE funding constraints. For example, the number of CRADAs at Oak Ridge National Laboratory dropped from 256 in fiscal year 2000 to 79 in fiscal year 2001, primarily because of funding constraints. In addition, as a result of unanticipated cuts in fiscal year 2002 funding for the Laboratory Technology Research Program—from $10 million in fiscal year 2001 to $3 million in fiscal year 2002—the Office of Science funded only 5 of the 12 multi-year CRADA proposals previously approved for funding by its peer review process. The partners for the other seven approved CRADAs were informed that funding for their projects would not be available in fiscal year 2002. The three laboratories that have historically relied on DOE program funds to support CRADAs have each participated in at most 50 CRADAs per year. For example, total CRADAs at the National Renewable Energy Laboratory have grown from 14 in fiscal year 1996 to 21 in fiscal year 2001, primarily because the Energy Efficiency and Renewable Energy Program, whose mission includes working with industry, has provided funding support for most of these CRADAs.
CRADAs at the Idaho National Engineering and Environmental Laboratory peaked at 50 in fiscal year 1996 and subsequently fell to 32 in fiscal year 2001. Figure 1 shows that CRADA funding from all sources peaked at over $500 million in fiscal year 1995. Since then, DOE funding has declined while partners have provided a greater proportion of CRADA support through funding and in-kind contributions. These trends reflect the decline in the total number of active CRADAs and the fact that DOE's research programs generally have not provided the funding support for CRADAs that NNSA's Technology Partnership Program and the Office of Science's Laboratory Technology Research Program had previously provided. Funding from some DOE programs has increased, however. For example, the Energy Efficiency and Renewable Energy Program, which provided $16.6 million for CRADAs in fiscal year 1996, provided $40.1 million of the $81 million in total DOE funds for CRADAs in fiscal year 2001. (See tables 8 and 9 in app. I for the financial support of CRADAs by DOE research programs and partners.) With the decline in DOE funding support for CRADAs, the bulk of support has come from the laboratories' partners. Before fiscal year 1997, CRADA partners primarily provided in-kind contributions that covered the costs incurred by their scientists. Since then, CRADA partners have provided more funding to cover part, or all, of the DOE laboratory's costs for CRADAs. In fiscal year 2001, CRADA partners provided 76 percent of the total financial support for CRADAs through funding and in-kind contributions—specifically, partners paid all of the costs for 23 percent of active CRADAs and jointly funded the DOE laboratory's costs for 15 percent of active CRADAs. (See table 10 in app. I for the type of financial support that partners provided.) While these funds enabled the DOE laboratories to leverage their resources, technology transfer managers at several laboratories noted that many ongoing CRADAs were terminated early and potentially beneficial CRADA projects were stopped during negotiations because a business learned that it would have to pay a substantial part, or all, of the laboratory's research costs in addition to its own costs. In recent years, about 33 percent of the CRADAs were with small businesses, 50 percent were with large or intermediate businesses, and 13 percent were with universities or consortia. (See table 11 in app. I.) Table 3 shows that the DOE laboratories' other technology transfer activities funded by businesses and other nonfederal entities have grown substantially in the past 10 years—work-for-others agreements are more than four times greater, and technology licenses and user facility agreements are eight times greater. Businesses and other nonfederal entities have provided more funding for work-for-others agreements than for all other types of technology transfer activities combined. Funding from nonfederal entities for work-for-others agreements increased from $31 million in fiscal year 1992 to over $188 million in fiscal year 1999. In fiscal year 2001, there were 1,527 work-for-others agreements funded at $147 million. Although the nonfederal entity is required to pay all of the project costs, many businesses use a work-for-others agreement rather than a CRADA.
The work-for-others program allows them to obtain title, in most cases, to any intellectual property developed under the agreement, while the title and licensing rights to any intellectual property developed under a CRADA are subject to negotiation. (See table 12 in app. I for work-for-others agreements by laboratory.) In contrast, the research under a work-for-others agreement typically is less beneficial for the DOE laboratory than research under a CRADA because (1) it is not required to provide direct benefit to the program missions, although it must be consistent with them; (2) the laboratory's scientists typically do not collaborate on research with the nonfederal entity's scientists; and (3) the laboratory does not normally have rights to any resulting intellectual property. During the past 10 years, the laboratories' technology licensing activities increased significantly, from 189 licenses with $4.7 million in license income in fiscal year 1992 to 1,720 licenses with $19.3 million in license income in fiscal year 2001. The growth in technology licensing can be traced to the 1984 amendments to the Patent and Trademark Amendments of 1980, commonly known as the Bayh-Dole Act, which allowed DOE's laboratories operated by universities or nonprofit organizations to retain title to inventions that their scientists made. Subsequently, the National Competitiveness Technology Transfer Act of 1989 added technology transfer as a mission of the DOE laboratories. (See table 13 in app. I for technology licenses by laboratory.) User facility agreements, which provide access to unique DOE research equipment and facilities, increased from 252 in fiscal year 1992 to more than 2,000 in fiscal year 2001. In particular, Brookhaven National Laboratory had 741 agreements in fiscal year 2001 that provided nonfederal entities with access to its specialized facilities, such as the National Synchrotron Light Source. Similarly, Oak Ridge National Laboratory had 604 agreements with nonfederal entities in fiscal year 2001. The 12 DOE laboratories have reduced their technical assistance to small businesses from a high of 746 agreements in fiscal year 1995 to 246 agreements in fiscal year 2001. This decline reflected the phasing out of dedicated funding for technology partnerships, which the NNSA and Office of Science laboratories could use to support technical assistance. More recently, two laboratories have used other, non-DOE sources of funding to provide technical assistance to local small businesses. Sandia National Laboratories has an agreement with the state of New Mexico that entitles Sandia to up to $1.8 million per year in tax relief for assistance provided to small businesses in the state. Similarly, Pacific Northwest National Laboratory has received funding from an economic development agency in Washington to provide technical assistance. These laboratories accounted for more than two-thirds of the DOE laboratories' technical assistance agreements in fiscal year 2001. According to DOE laboratory managers, the most important barrier to effective technology transfer was the lack of dedicated DOE funding for technology partnerships, including funding targeted at small businesses. (See table 4.) According to these managers, other important barriers are closely associated with the lack of dedicated funding for technology partnerships and raise serious concerns about the future of CRADAs at their laboratories.
While the laboratory managers also identified certain administrative issues that have delayed, or even stopped, potential partnerships, several of them told us that the long delays in obtaining DOE approval of CRADAs, common in the mid-1990s, have mostly been addressed. Managers at 8 of the 12 DOE laboratories we surveyed cited the lack of dedicated DOE funding for CRADAs as an important barrier that has constrained technology partnerships at their laboratories. Each of these laboratories had received dedicated funding under either the Technology Partnership Program or the Laboratory Technology Research Program. According to several laboratory and DOE officials, DOE's research managers generally have questioned whether technology partnerships would provide direct benefits to NNSA's missions of stockpile stewardship and nuclear nonproliferation and the Office of Science's mission of basic science. As a result, research managers have been reluctant to substitute limited research funds for the dedicated technology transfer funding that was phased out in recent years. Because DOE funding was not available, several laboratories had to advise many of their CRADA partners that they would either have to pay the project's full costs, including those incurred by the DOE laboratory's scientists, or the laboratory would terminate the CRADA. Sandia National Laboratories managers told us that, in fiscal year 2000, they had terminated 18 CRADAs early because of such funding constraints. Three laboratories stated that the lack of dedicated DOE funding was a "show stopper" for CRADAs. For example, managers at Lawrence Berkeley National Laboratory told us that because many of the laboratory's research program budgets have been squeezed in recent years, research managers have little flexibility to support CRADAs or other types of technology partnerships. Alternatively, CRADA partners—particularly small businesses—are unwilling or unable to fund all of the research costs. The Lawrence Berkeley managers believe that dedicated funding is important for maintaining a critical mass of CRADAs—without the likelihood of funding support, scientists will not invest the effort to develop strong funding proposals for potentially useful collaborations. Moreover, according to managers at several laboratories, previous DOE funding support for CRADAs likely led to an increase in work-for-others agreements and CRADAs funded by nonfederal partners in recent years. These managers believe that dedicated funds have provided the laboratories with an opportunity to "get their foot in the door" with companies. Once the partners are familiar with the capabilities of the national laboratories, they are more likely to want to continue working with the laboratories, according to the managers. Several managers cited the importance of dedicated funding for commercializing many of their laboratories' technological innovations because there often is a gap in the funding needed to translate an innovation into possible commercial applications, a gap that some managers referred to as the "valley of death." The Lawrence Berkeley managers told us that CRADAs have enabled technology licensees to collaborate with the laboratory's scientists to develop commercial applications.
According to Lawrence Berkeley and Argonne managers, based on the number and quality of proposals that their scientists had previously submitted for Laboratory Technology Research funding, each of these laboratories could effectively use $10 million per year in dedicated funding for CRADAs. Managers at 4 of the 12 laboratories stated that the lack of dedicated DOE funding was not an important barrier for CRADAs. In particular, three of these four laboratories had not received dedicated funding. Furthermore, two of these three laboratories—the National Renewable Energy Laboratory and the National Energy Technology Laboratory—primarily conduct research for the Energy Efficiency and Renewable Energy Program and the Fossil Energy Program, respectively, which may have been more willing than some of the other DOE programs to use regular research funds to support CRADAs because their missions include working with industry. Managers at 8 of the 12 DOE laboratories cited the lack of dedicated funding for technology partnerships as an important barrier that has constrained small business participation at their laboratories. In particular, managers at two laboratories told us that the lack of dedicated funding was a "show stopper" for small businesses because a small business generally did not have the funds available to pay all, or part, of the DOE laboratory's costs—in addition to its own costs—for a CRADA research project. Managers at several of the laboratories also cited the importance of dedicated DOE funding as a basis for providing technical assistance to small businesses. Managers cited various examples of a laboratory scientist correcting a manufacturing problem or improving a product after spending a few days with a small business. Managers at 8 of the 12 laboratories told us that uncertainty about DOE's continued financial support for CRADAs was an important barrier. In particular, managers at several Office of Science laboratories told us that Laboratory Technology Research Program funding cutbacks in recent years had created ill will among CRADA partners whose funding support was cut and uncertainty among laboratory scientists and their partners about whether to pursue CRADA proposals for projects that were unlikely to get funded. Some scientists at laboratories we visited discussed their frustration at having funding disappear after they had nurtured working relationships with industry scientists to develop potential technology transfer projects and, a much more time-consuming effort in their view, had persuaded the partner's key financial and management staff of the project's merit. These experiences create "legends" about the difficulties of working with DOE laboratories, according to the deputy director of the Lawrence Berkeley National Laboratory. Managers at 10 of the 12 DOE laboratories cited the lack of a high-level, effective advocate for technology partnerships in DOE headquarters as an important barrier that has constrained their technology transfer activities. Similarly, managers at 9 of the 12 laboratories told us that the lack of DOE institutional commitment to technology partnerships as a way to accomplish program missions was an important barrier. Managers stated that technology partnerships, which cut across DOE programs, need an advocate in DOE headquarters who is not tied to a specific research area and has sufficient visibility within DOE to foster such partnerships effectively.
More specifically, managers at several Office of Science laboratories cited the need for an advocate because they believe that funding technology partnerships is a low priority within the Office of Science. They noted that when the Congress reduced the fiscal year 2002 funding for the Office of Advanced Scientific Computing Research, funding for the Laboratory Technology Research Program was disproportionately cut—from the president's budget request of $6.9 million to $3 million—compared with other research programs in this office. In March 2002, the Office of Science announced that it will terminate the Laboratory Technology Research Program once its previously approved CRADAs have been funded. Both laboratory managers and DOE headquarters officials stated that DOE's lack of commitment to technology partnerships is caused, in part, by the cross-cutting nature of the research carried out through CRADAs and other technology transfer activities. They noted that technology partnerships often provide important results and fulfill DOE's broader responsibility to disseminate knowledge, but the partnerships may not always be directly tied to the specific goals of a single DOE research program. As a result, these partnerships are likely to be a lower priority for research managers responsible for meeting specific goals. Because DOE's research budgets have declined in recent years, it is even less likely that these managers will be willing to fund research activities that, while potentially valuable, extend beyond their immediate programs, according to the laboratory managers. Finally, DOE officials noted that DOE's Technology Transfer Working Group is not an internal advocacy group for technology transfer, but a virtual organization with no full-time permanent staff. The working group was established after DOE eliminated its full-time technology transfer organization in 1996 at the Congress' direction. The working group, which convenes monthly by teleconference, oversees technology transfer policy and practices, identifies issues, and coordinates the DOE headquarters response to these issues. Other than through its organizational representatives, the working group has no direct interface with Secretarial-level officials concerning matters related to resources for technology transfer and is not in a position, by itself, to serve as an advocate among top-level DOE officials for such resources. Managers at 9 of the 12 laboratories told us that DOE's requirement that the partner pay in advance for research conducted at the laboratory was an important barrier to technology partnerships at their laboratories. Generally, DOE requires an advance payment for about 90 days of work if (1) a project is expected to cost more than $25,000 and last more than 90 days or (2) the nonfederal partner will contribute more than $25,000 for its portion of the research that DOE laboratory scientists will conduct. (For shorter or less costly projects, the partner is required to pay its entire share in advance.) Some laboratory managers told us that the advance payment requirement has presented problems in negotiating, for example, work-for-others agreements or jointly funded CRADAs with small or large businesses or with universities. While the requirement rarely stops an agreement from being signed, it has delayed negotiations, particularly when a small business cannot readily provide an upfront payment.
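The advance payment rule just described reduces to a two-branch test. The following is a minimal sketch of that rule; the function name is hypothetical, and prorating the 90-day advance by project duration is our simplifying assumption rather than a documented DOE formula.

```python
# Minimal sketch of the advance payment rule as described in this
# report. The function name is hypothetical; prorating the 90-day
# advance by project duration is a simplifying assumption.

def required_advance(total_cost: float,
                     duration_days: int,
                     partner_share: float) -> float:
    """Estimate the advance a nonfederal partner must pay DOE.

    An advance covering about 90 days of work applies if (1) the
    project exceeds $25,000 and 90 days or (2) the partner's share of
    the laboratory's research exceeds $25,000; shorter or less costly
    projects require the entire share in advance.
    """
    long_costly_project = total_cost > 25_000 and duration_days > 90
    large_partner_share = partner_share > 25_000
    if long_costly_project or large_partner_share:
        # Approximate 90 days of work as a prorated share of the cost.
        return partner_share * min(90 / duration_days, 1.0)
    return partner_share

# A 1-year, $100,000 work-for-others agreement paid entirely by the
# partner would require roughly 90/365, or about a quarter, of the
# partner's share up front.
print(round(required_advance(100_000, 365, 100_000)))  # ~24,658
```

Under this sketch, even a prorated advance ties up a meaningful fraction of a project's cost before work begins, which illustrates the cash-flow strain that laboratory managers described.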
The advance payment requirement typically is more burdensome for small businesses than large businesses because small businesses are less likely to have the funds available to prepay work, according to laboratory managers. DOE's policy permits exceptions to this requirement; for example, the contractor operating the laboratory may negotiate with DOE a smaller advance payment for a small business that is unable to meet the standard requirement. Some laboratory managers told us that the advance payment requirement had created serious problems for small businesses that sought the laboratory's assistance as a subcontractor for a project under either the Small Business Innovation Research (SBIR) program or the Small Business Technology Transfer (STTR) program. While DOE requires an advance payment for conducting research, the SBIR and STTR programs typically provide payments for completed work, leaving the small business with the problem of providing funding to bridge this gap. Managers at one laboratory questioned the need for the advance payment requirement for an SBIR or STTR project when the payment is coming from another federal program. In some cases, the federal agency funding the SBIR or STTR project has agreed to provide some funding upfront to help cover the DOE laboratory's work. Alternatively, managers at two of the DOE laboratories told us that they have assisted partners with a bridge loan by using an account set aside for such purposes by the contractor that operates the laboratory for DOE. Managers at 7 of the 12 DOE laboratories cited the U.S. competitiveness requirements in the DOE model CRADA as an important barrier to technology partnerships at their laboratories. DOE requires that partners either manufacture substantially in the United States or provide a plan for ensuring that the partnership will result in a net economic benefit to the U.S. economy. Specifically, DOE's model CRADA states that because a purpose of the CRADA is to provide substantial benefit to the U.S. economy, partners are required to (1) substantially manufacture in the United States any products embodying the intellectual property developed under the CRADA; (2) incorporate any processes, services, and improvements developed under the CRADA into the partner's U.S. manufacturing facilities either prior to or simultaneously with implementation outside the United States; and (3) not reduce the use of such processes, services, and improvements in the United States because of their introduction elsewhere. DOE officials noted that DOE's requirements are more stringent than those in the Federal Technology Transfer Act of 1986, which requires that laboratory directors "give preference to business units located in the United States which agree that products embodying inventions made under the cooperative research and development agreement or produced through the use of such inventions will be manufactured substantially in the United States." Some laboratory managers said that DOE's requirements have created particular difficulties for large U.S.-based multinational companies, including IBM and Procter & Gamble, that would like to collaborate with a DOE laboratory. Managers noted that multinational companies often are unwilling to sign an agreement containing DOE's competitiveness clause because of its possible implications in subsequent years for the company's strategic manufacturing decisions.
Alternatively, the managers noted that companies could submit a detailed explanation to DOE of how the CRADA research will provide "alternative benefits" to the U.S. economy. They pointed out, however, that documenting alternative benefits can be a long and cumbersome process. In addition, managers at 4 of the 12 laboratories cited as an important barrier the long delays—up to 6 months—associated with consulting the Office of the U.S. Trade Representative for CRADAs involving a company controlled by a foreign company or government. The Federal Technology Transfer Act of 1986 and Executive Order 12591 require that laboratory directors consider whether the foreign company's government permits comparable access to U.S. companies. The executive order also requires that laboratory directors consider whether the foreign company's government has policies to protect U.S. intellectual property. Moreover, the executive order directs laboratory directors to consult with the Office of the U.S. Trade Representative in addressing these issues. Managers at some of the 12 DOE laboratories cited other barriers to technology transfer, but we did not find a general consensus that these problems needed to be addressed. For example, managers at four laboratories cited administrative burdens and time delays in negotiating and signing a technology partnership agreement. Managers at Los Alamos National Laboratory told us that it takes about 3 months, on average, from the time funding for a CRADA is approved until the agreement is signed. Managers at Oak Ridge National Laboratory cited the administrative burden associated with obtaining DOE headquarters approval for technology partnerships as small as a $5,000 technical assistance project and suggested that DOE establish a threshold below which local approval would suffice. Managers at several laboratories, however, told us that DOE has made major improvements in reviewing CRADAs since the mid-1990s, when we reported that, on average, it took four DOE contractor-operated laboratories about 7.5 months to implement a one-collaborator, one-laboratory CRADA. We provided DOE with a draft of this report for its review and comment. We met with DOE officials, including the director of the Office of Science and Technology Policy, who said that DOE found the report to be a reasonable representation of the technology partnering activities at the 12 DOE laboratories surveyed. In commending GAO for gathering pertinent data and analyzing trends and barriers, DOE stated that the report provides a sound basis for assessing the current situation and charting future directions. DOE stated that, for purposes of portraying a broad perspective, it was helpful to include the work-for-others program among the five types of agreements most commonly used to transfer technology to U.S. businesses and other organizations. DOE also noted that a considerable amount of technology transfer takes place in the normal course of executing technical work associated with mission-related contracts and financial assistance, and that this work was not included in the report as technology transfer. While we agree with DOE that the laboratories' technology transfer activities are not limited to the five types of agreements discussed, we note that the laboratories' role in other forms of technology transfer was outside the scope of our review. DOE officials also provided comments to improve the report's technical accuracy, which we incorporated as appropriate.
To obtain trend data on technology development partnerships, we asked managers at each of the 12 DOE laboratories to provide participation and funding data for fiscal years 1992 through 2001. To help ensure consistency across locations, we worked with these managers to establish uniform definitions and resolve any discrepancies. In addition, we (1) interviewed officials at DOE headquarters and (2) visited Argonne National Laboratory, Lawrence Berkeley National Laboratory, and Oak Ridge National Laboratory to obtain the views of administrators and scientists about their laboratories' participation in and funding of technology partnerships. To identify any barriers that may limit DOE laboratories' efforts to transfer technology to potential nonfederal partners, we interviewed officials at DOE headquarters and obtained the views of laboratory administrators at each of the 12 DOE laboratories. We conducted our review from October 2001 through March 2002 in accordance with generally accepted government auditing standards. We did not independently verify the data provided by DOE's laboratories.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, the secretary of energy, the director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Richard Cheston, Kerry Hawranek, and Susan Swearingen.

Appendix I: Technology Transfer Activities of 12 DOE Laboratories. Notes to the appendix data: Funding was made available beginning in fiscal year 1994 through DOE's Defense Programs' Small Business Initiative. Oak Ridge was unable to provide the number of technical assistance agreements for small businesses by fiscal year but estimated that the laboratory entered into 100 of these agreements over the 10-year period. Amounts shown for Ames are the laboratory's portion of the total royalties received by the Iowa State University Research Foundation, per a formula in the laboratory's management and operating contract. For several of the remaining items, data were not readily available.
Since 1980 Congress has passed laws to facilitate the transfer of technology from federal laboratories to U.S. businesses. In particular, the National Competitiveness Technology Transfer Act of 1989 authorized federal laboratories operated by contractors, including the Department of Energy's (DOE) national laboratories, to enter into cooperative research and development agreements (CRADA). Under a CRADA, the partner and DOE laboratory agree to jointly conduct research and typically share the research costs. By fiscal year 1992, DOE's national laboratories were among the leading federal laboratories participating in CRADAs. Recently, however, the 12 DOE laboratories that GAO surveyed have substantially reduced their CRADA partnerships and their technical assistance to small businesses. Instead, the laboratories have increasingly transferred technology through agreements that did not involve collaborative research and were funded by a business or other nonfederal entity. Managers at most of the laboratories say the lack of dedicated funding for technology partnerships, including funding targeted to small businesses, is the most important barrier to their technology transfer activities. Managers at most laboratories said that DOE's lack of a high-level, effective advocate for technology transfer and DOE's lack of commitment to technology partnerships were important barriers. Several managers also said that requirements, such as DOE's advance payment clause, were often financially burdensome for small businesses.
Customs and Border Protection. From November 28, 2009, to March 1, 2010, CBP officers working at the Saipan and Rota airports processed 103,565 arriving travelers, granting 11,760 (11 percent) parole. During this period, more than 80 percent of arriving travelers came from Japan or South Korea. Of arriving travelers from China and Russia, 86 percent (10,398 of 12,131) and 90 percent (1,027 of 1,146), respectively, were paroled into the CNMI only, under DHS authority. In addition, CBP signed right-of-entry agreements with the CNMI government that gave the agency access to the airports to prepare for implementation of federal border control. Immigration and Customs Enforcement. Since November 28, 2009, 10 ICE officials detailed to Saipan have identified aliens in violation of U.S. immigration laws and have processed or detained aliens for removal proceedings. From December 7, 2009, to March 1, 2010, ICE identified approximately 264 aliens subject to possible removal from the CNMI— including approximately 214 referrals from the CNMI Attorney General’s office with pending CNMI deportation orders and 49 referrals from the ICE Office of Investigations and the community—and requested immigration status information about these individuals from the CNMI Department of Labor. As of March 1, 2010, ICE officials had processed 72 of the 264 aliens for removal proceedings. As of March 26, 2010, ICE officials told us they had not deported any of the 72 aliens being processed for removal but that 31 were scheduled for immigration hearings by the end of March 2010 and 9 had agreed to waive their right to a hearing and to be deported after completing their criminal sentences. U.S. Citizenship and Immigration Services. In March 2009, USCIS opened an Application Support Center in Saipan and stationed two full- time employees at the center to provide information services, interview residents currently eligible to apply for lawful permanent resident status or citizenship, and process requests requiring biometric services such as fingerprints or photographs. For calendar year 2009, USCIS processed 515 CNMI applications for permanent residency and 50 CNMI applications for naturalization or citizenship, more than doubling the number of interviews conducted for applications for residency or citizenship from calendar year 2008, according to data provided by USCIS officials. By March 17, 2010, USCIS had also received 1,353 advance parole requests and approved 1,123 of them. USCIS also granted parole-in-place status to 705 individuals for domestic travel and granted 24 group paroles. Department of Homeland Security. To facilitate implementation of CNRA in the CNMI, DHS led meetings with the other departments charged with implementing CNRA; reported to Congress on the budget and personnel needed by the DHS components; and initiated outreach to the CNMI government. However, DHS has not finalized an interdepartmental agreement with other U.S. departments regarding implementation of CNRA and has not specified changes in its resource requirements as directed by Congress. DHS issued an interim rule for the CNMI-only work permit program on October 27, 2009, but a court injunction has prevented implementation of the rule. The interim rule establishes (1) the number of permits to be issued, (2) the way the permits will be distributed, (3) the terms and conditions for the permits, and (4) the fees for the permits. 
In issuing the interim rule, which was scheduled to take effect on November 27, 2009, DHS announced that it would accept comments in the development of the final rule but was not following notice-and-comment rulemaking procedures, asserting that it had good cause not to do so. In its November 2, 2009, amendment to its ongoing lawsuit to overturn portions of CNRA, the CNMI filed a motion for a preliminary injunction to prevent the operation of the DHS interim rule. The CNMI argued in part that DHS had violated procedural requirements of the Administrative Procedure Act, which requires notice and the opportunity for public comment before regulations can go into effect. On November 25, 2009, the federal District Court for the District of Columbia issued an order prohibiting implementation of the interim rule, stating that DHS must consider public comments before issuing a final rule. In response to this preliminary injunction, DHS reopened the comment period from December 9, 2009, until January 8, 2010. As of May 18, 2010, DHS had not yet issued a final rule, and as a result, CNMI-only work permits are not available.

DHS received numerous comments on the interim rule from the CNMI government, a private sector group, and interested businesses and individuals. The CNMI government commented that the rule was incomplete and would damage CNMI workers, employers, and the community. In addition, the Saipan Chamber of Commerce raised concerns regarding the economic impact of the regulations and made a proposal to make it easier for workers with the CNMI-only work permit to return from travel outside the commonwealth. DHS plans to issue a final rule for the CNMI-only work permit program in September 2010.

On January 16, 2009, DHS issued an interim final rule for the Guam-CNMI joint visa waiver program, which went into effect November 28, 2009. The program is intended to allow visitors for business or pleasure to enter the CNMI and Guam without obtaining a nonimmigrant visa for a stay of no longer than 45 days. DHS's rule designates 12 countries or geographic areas, including Japan and South Korea, as eligible for participation in the program. DHS considered designating Russia and China as eligible for participation because visitors from those countries provide significant economic benefits to the CNMI. However, because of political, security, and law enforcement concerns, including high nonimmigrant visa refusal rates, DHS deemed China and Russia not eligible to participate in the program.

In developing the Guam-CNMI visa waiver program, DHS officials consulted with representatives of the CNMI and Guam governments, both of which sought the inclusion of China and Russia in the program. In May 2009, DHS officials informed Congress that the department was reconsidering whether to include China and Russia in the Guam-CNMI visa waiver program. On October 21, 2009, the Secretary of Homeland Security announced to Congress and the Governors of the CNMI and Guam the decision to parole tourists from China and Russia into the CNMI on a case-by-case basis for a maximum of 45 days, in recognition of their significant economic benefit to the commonwealth. Public comments on the regulations from the Guam and CNMI governments and private sectors emphasized the economic significance of including China and Russia in the program. Guam officials argued that tourist arrivals in Guam from traditional markets were declining and that access to the China tourism market presented an important economic benefit.
CNMI officials noted that the CNMI economy would be seriously damaged unless the CNMI retained access to the China and Russia tourism markets. The regulations became effective on November 28, 2009. DHS plans to issue a final rule for the program in November 2010.

In September 2009, DHS proposed a rule to allow a large proportion of CNMI foreign investor permit holders to obtain U.S. CNMI-only nonimmigrant treaty investor status during the transition period. According to the proposed rule, eligibility criteria for this status during the transition period include, among others, having been physically present in the CNMI for at least half the time since obtaining CNMI investor status. Additionally, investors must provide evidence of maintaining financial investments in the CNMI, with long-term business investors showing an investment of at least $150,000. In commenting on the proposed rule, the CNMI government stated that about 85 of 514 long-term business entry permit holders could not qualify if an investment level of $150,000 is required. The CNMI also reported that 251 of the 514 permit holders had been granted permits at a required investment level of $50,000 and were "grandfathered" in 1997, when the minimum investment requirement was increased. The CNMI projected that after the end of the transition period, only 42 of 514 long-term business entry permit holders may be able to meet the minimum investment level to qualify for federal investor status. DHS accepted comments on the proposed rule until October 14, 2009, and intends to issue a final rule in July 2010.

CBP and the CNMI government have not yet signed long-term occupancy agreements that would allow CBP to reconfigure space that the CNMI government has provided in CNMI airports. As a result, the agency is operating in facilities that do not meet its standards for holding cells and secondary inspections. The current configuration of CBP's space at the Saipan airport does not include holding cells that meet federal standards. Consequently, CBP lacks space to temporarily detain individuals who may present a risk to public safety and to its officers. In addition, owing to a lack of adequate space for secondary inspections, CBP officers process parole applications at the airport in primary inspection booths, resulting in increased wait times for arriving visitors who are not applying for parole. U.S. law requires international airports to provide, without charge, adequate space to the U.S. government to perform its duties. However, the CNMI government stated that the port authority is not in a financial position to provide space to CBP without charge. In commenting on a draft of our report, the CNMI stated that the commonwealth is not prepared to enter into negotiations with CBP unless it is assured that the request for space has been cleared at least at the assistant secretary level at DHS and that the department has received the necessary assurance from Congress that the funds necessary to fulfill CBP's space needs will be available. As of April 2010, CBP continued to seek access to approximately 7,200 additional square feet of space at the Saipan airport, and the two parties had not concluded negotiations regarding long-term occupancy agreements for space at the Saipan and Rota airports. Key differences related to cost have not yet been resolved.

ICE has been unable to conclude negotiations with the CNMI government for access to detention space in the CNMI correctional facility.
In March 2010, ICE estimated that it required 50 detention beds for its CNMI operations. Under a 2007 agreement between the U.S. Marshals Service and the CNMI Department of Corrections, the CNMI adult correctional facility in Saipan provided the U.S. government 25 detention beds at $77 per bed per day. As of September 2008, less than 30 percent of the facility's beds (134 of 513) were filled. To obtain needed detention space, ICE proposed to either amend the 2007 U.S. Marshals Service agreement before it expired on April 1, 2010, or establish a new agreement with the CNMI government. As of March 2010, after a year of negotiation, ICE had not finalized an agreement with the CNMI government owing to unresolved cost documentation issues, according to a senior ICE official.

Given the current lack of needed detention space, ICE has identified three alternatives regarding detainees it seeks to remove from the CNMI while removal proceedings are under way: (1) release detainees into the CNMI community under orders of supervision; (2) transport detainees to other U.S. locations; or (3) pay the CNMI's daily rate for each detainee, if the CNMI provides appropriate documentation justifying its proposed rate. According to ICE officials, because of flight risk and danger to the community, ICE prefers to detain aliens with prior criminal records while they await their immigration removal hearings. However, since November 2009, ICE has released 43 detainees into the CNMI community under orders of supervision, including 27 with prior criminal records. According to ICE officials, orders of supervision are appropriate for detainees who do not present a danger to the community or a possible flight risk. In addition, as of March 2010, ICE had paid a total of approximately $5,000 to transport two detainees to Guam and one to Honolulu. Since January 2010, negotiations between ICE and the CNMI government regarding access to detention space have been at an impasse.

As of March 1, 2010, DHS components lacked direct access to CNMI immigration and border control data contained in two CNMI databases, the Labor Information Data System (LIDS) and the Border Management System (BMS). The CNMI government assigned a single point of contact in the CNMI Department of Labor to respond to CBP, ICE, and USCIS queries from the database, most commonly for verification of an individual's immigration status. DHS component officials have expressed concerns about the reliance on a single CNMI point of contact. ICE officials expressed the following concerns, among others: Relying on one CNMI point of contact to verify immigration status for individuals subject to ICE investigations could compromise security for ongoing operations. Because the CNMI point of contact is an indirect source, basing ICE detention and removal decisions on data provided by the point of contact could lead to those decisions' eventual reversal in court. USCIS officials' concerns included the following: Direct access to LIDS would allow USCIS to verify information provided by applicants for immigration benefits such as advance parole. Direct access to the data would facilitate the processing of applications for CNMI-only work permits and for CNMI-only nonimmigrant treaty investor status.

In February 2010, CNMI officials reported that the point of contact assigned to work with the U.S. government had promptly supplied information on individual cases to U.S.
officials from immigration and border control databases. A senior CNMI official also stated that if the point of contact is unable to respond to future DHS inquiries in a timely manner, CNMI officials would be willing to engage in additional discussions regarding more direct access to LIDS and BMS. However, according to ICE officials, the CNMI responses to ICE inquiries have not been timely and have not always provided sufficient information. We examined ICE records of 68 inquiries and found that CNMI response times ranged from 16 minutes to around 23 hours, averaging roughly 4-and-a-half hours. ICE officials reported that the responses contained first and last names and LIDS numbers but rarely included biographical or other identifying information.

DHS has communicated, at the department and component levels, with the CNMI government regarding access to CNMI immigration data. During a September 2009 meeting between the Governor of the CNMI and the Secretary of Homeland Security, the Governor proposed providing restricted access to information contained in LIDS and BMS, for a fee and in exchange for airline flight entry data. On February 18, 2010, the Governor sent a letter to CBP reiterating the CNMI's request that DHS share advance passenger information provided by the airlines. On March 31, 2010, CBP responded to the CNMI letter, stating that the CNMI's intended use of the advance passenger information did not justify the data's release to CNMI authorities. As of March 2010, DHS and the CNMI government were at an impasse regarding any exchange of passenger information for CNMI immigration and border control data.

DHS components have taken a number of steps since November 28, 2009, to ensure effective border control procedures in the CNMI. Additionally, DHS and other agencies have taken steps to implement CNRA provisions for workers, visitors, and investors, although the programs for workers and investors are not yet available to eligible individuals in the CNMI. Despite the DHS components' progress, however, their inability to conclude negotiations with the CNMI government regarding access to airport space, detention facilities, and CNMI databases has resulted in continuing operational challenges. Although the DHS components have made continued efforts to overcome these challenges without department-level intervention, in each case, their efforts have encountered obstacles. Negotiations with the CNMI government for long-term access to the CNMI airports have not been concluded, and key differences remain unresolved; meanwhile, negotiations for access to CNMI detention facilities and databases have reached impasse. Without department-level leadership as well as strategic approaches and timeframes for concluding its components' negotiations with the CNMI, DHS's prospects for resolving these issues are uncertain.

To enable DHS to carry out its statutory obligation to implement federal border control and immigration in the CNMI, we recommended that the Secretary of Homeland Security work with the heads of CBP, ICE, and USCIS to establish strategic approaches and timeframes for concluding negotiations with the CNMI government to resolve the operational challenges related to access to CNMI airport space, detention facilities, and information about the status of aliens. DHS agreed with our recommendation.

Madam Chairwoman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.
Table notes (table not shown): Begins with the transition period start date and ends December 31, 2014, under P.L. 110-229, enacted May 8, 2008; may be extended indefinitely, for up to 5 years at a time, by the U.S. Secretary of Labor. Begins with the transition period start date and continues permanently.

In addition to the person named above, Emil Friberg, Assistant Director; Michael P. Dino, Assistant Director; Julia A. Roberts, Analyst-in-Charge; Gifford Howland, Senior Analyst; Ashley Alley, Senior Attorney; and Reid Lowe, Senior Communications Analyst, made key contributions to this report. Technical assistance was provided by Martin De Alteriis, Ben Bolitzer, Etana Finkler, Marissa Jones, and Eddie Uyekawa.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on the status of efforts to establish federal border control in the Commonwealth of the Northern Mariana Islands (CNMI) and implement the Consolidated Natural Resources Act of 2008 (CNRA) with regard to foreign workers, visitors, and investors in the CNMI. In May 2008, the United States enacted CNRA, amending the U.S.-CNMI Covenant to establish federal control of CNMI immigration. CNRA contains several CNMI-specific provisions affecting foreign workers and investors during a transition period that began in November 2009 and ends in 2014. In addition, CNRA amends existing U.S. immigration law to establish a joint visa waiver program for the CNMI and Guam by replacing an existing visa waiver program for Guam visitors. During the transition period, the U.S. Secretary of Homeland Security, in consultation with the Secretaries of the Interior, Labor, and State and the U.S. Attorney General, has the responsibility to establish, administer, and enforce a transition program to regulate immigration in the CNMI. CNRA requires that we report on the implementation of federal immigration law in the CNMI. This testimony summarizes findings from our recent report regarding (1) steps that the Department of Homeland Security (DHS) has taken to establish federal border control in the CNMI; (2) actions that DHS has taken to implement programs for workers, visitors, and investors; and (3) unresolved operational challenges that DHS has encountered. DHS and its components have taken a number of steps to secure the border in the CNMI and to implement CNRA-required programs for foreign workers, visitors, and foreign investors. However, the components face certain operational challenges that they have been unable to resolve with the CNMI government. Steps taken to establish border control: DHS and its components have taken the following steps, among others, to establish federal border control in the CNMI. (1) Customs and Border Protection (CBP). Since November 2009, CBP has inspected arriving travelers in Saipan and Rota. (2) Immigration and Customs Enforcement (ICE). Also since November 2009, ICE has identified individuals who may be in violation of U.S. immigration laws and has begun processing some aliens for removal. (3) U.S. Citizenship and Immigration Services (USCIS). In March 2009, USCIS opened an application support center. For calendar year 2009, USCIS processed 515 CNMI applications for permanent residency and 50 CNMI applications for naturalization or citizenship. (4) DHS. DHS has taken several department-level actions to facilitate implementation of CNRA but has not finalized an interdepartmental agreement regarding implementation of CNRA and has not yet specified its resource requirements for this effort as directed by Congress. Actions taken to implement worker, visitor, and investor programs: DHS has begun to implement CNRA-required programs for foreign workers, visitors, and foreign investors but has not yet finalized key regulations. As a result, certain transition programs remain unavailable. (1) Foreign workers. On October 27, 2009, DHS issued an interim rule to implement a CNMI-only work permit program required by CNRA for foreign workers not otherwise admissible under federal law. However, a November 2009 U.S. District Court ruling, responding to an amended lawsuit by the CNMI government, prohibited implementation of the interim rule, stating that DHS must consider public comments before issuing a final rule. 
As a result, CNMI-only work permits are not currently available. (2) Visitors. DHS has established the Guam-CNMI visa waiver program. However, the program does not include China and Russia, two countries that provide significant economic benefit to the CNMI. (3) Foreign investors. DHS has proposed a rule to allow a large proportion of investors holding CNMI foreign investor permits to obtain U.S. CNMI-only nonimmigrant treaty investor status during the transition period. DHS plans to issue a final rule in July 2010; until then, the program is not available. Unresolved operational challenges: DHS components and the CNMI government have not yet negotiated solutions to operational challenges regarding access to CNMI airport space, detention facilities, and databases. (1) Airport space. Lacking long-term occupancy agreements and adequate space at CNMI airports, CBP is operating in facilities that do not meet its standards for holding cells and secondary inspections. (2) Detention facilities. Lacking an agreement with the CNMI government regarding detention space, ICE has released a number of aliens with criminal records into the community under orders of supervision and has paid to transport several detainees to Guam and Hawaii. (3) Databases. Lacking direct access to the CNMI's immigration and border control databases, ICE officials have instead directed data requests to a single CNMI point of contact, limiting their ability to quickly verify the status of aliens and potentially compromising the security of ongoing operations.
Nationwide implementation of E911 by local wireline telephone companies began in the 1970s. With wireline E911 service, emergency calls are automatically routed to the appropriate 911 call center, and the call taker receives the telephone number and street address of the caller. In 1996, FCC adopted rules for wireless E911. Wireless E911 technology provides emergency responders with the location and callback number of a person calling 911 from a mobile phone. Implementing wireless E911 involves deploying technologies that are able to calculate the geographic coordinates of the caller’s location at the time of the call and display these coordinates as a location the call taker can understand. When a wireless caller dials 911, the call must be routed along the networks of both a wireless telephone company and a wireline telephone company before terminating at a call center, known as a Public Safety Answering Point (PSAP). There are more than 6,000 PSAPs nationwide, often at a county or city level. PSAPs vary in size and technical sophistication. Some large urban PSAPs have dozens of call takers and split the functions of call taking and dispatching the proper emergency responder. Smaller PSAPs are sometimes staffed by only two or three call takers who also handle dispatch. In some rural areas, the PSAP may be the sheriff’s office. As shown in figure 1, the wireless carriers, local exchange carriers, and PSAPs must have appropriate equipment and interconnections for wireless E911 calls to be sent to and received by PSAPs with the caller’s location information. For example, wireless carriers must finance the implementation of a caller location solution and test their equipment to verify its accuracy. Local exchange carriers are generally responsible for ensuring that all the necessary connections between wireless carriers, PSAPs, and databases have been installed and are operating correctly. The original E911 system was designed to carry only the caller’s telephone number with the call, and the associated fixed address was obtained from an established database. Wireless E911, however, requires more data items, and the mobile caller’s location must be obtained during the call and delivered to the PSAP separately using additional data delivery capabilities. To translate the latitude and longitude location information into a street address, PSAPs usually must acquire and install mapping software. PSAPs may also need to acquire new computers to receive and display this information. Getting PSAPs the technology needed to receive wireless E911 location information is primarily a state and local responsibility because PSAPs serve an emergency response function that has traditionally fallen under state or local jurisdiction. As a result, states and local jurisdictions establish timetables for implementation by their PSAPs and fund the equipment upgrades needed by their PSAPs for E911 service. The only federally mandated time frames for implementing wireless E911 technologies are those placed on wireless carriers by FCC. 
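To make the mapping step described above more concrete, the sketch below is a minimal, hypothetical illustration of how software might translate a caller's latitude and longitude into a street address a call taker can read. The address table and coordinates are invented for illustration; it is not a description of any particular PSAP's mapping software, which relies on full geographic information system databases and far more sophisticated matching.

```python
# Minimal sketch of the coordinate-to-address step PSAP mapping software
# performs: match a caller's reported latitude/longitude to the closest
# entry in a local address table. All data here are hypothetical.
import math

ADDRESS_TABLE = [
    (38.8899, -77.0091, "100 Main St"),
    (38.8905, -77.0120, "250 Oak Ave"),
    (38.8871, -77.0065, "17 Elm Rd"),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_address(lat, lon):
    """Return the closest known street address and its distance in meters."""
    return min(
        ((addr, haversine_m(lat, lon, alat, alon)) for alat, alon, addr in ADDRESS_TABLE),
        key=lambda pair: pair[1],
    )

addr, dist = nearest_address(38.8900, -77.0094)
print(f"Closest address: {addr} ({dist:.0f} m from reported position)")
```

Even this toy version shows why Phase II accuracy matters: a position error of a few hundred meters can make the nearest table entry the wrong address.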
In 1996, FCC responded to the rising number of mobile telephone subscribers and the resulting increase in wireless 911 calls by adopting rules for wireless E911 that established a two-phase implementation approach for the wireless carriers and set deadlines for wireless carriers regarding their part in E911 deployment. FCC required that (1) by April 1998, or within 6 months of a request from a PSAP, wireless carriers be prepared to provide the PSAP with the wireless phone number of the caller and the location of the cell site receiving the 911 call (Phase I information); and (2) by October 2001, or within 6 months of receiving a request from a PSAP, wireless carriers be prepared to provide the PSAP with the geographic coordinates of the caller's location with greater precision, generally within 50 to 300 meters (Phase II information).

As we reported in 2006, most states and the District of Columbia collect fees to cover the costs of implementing wireless E911. States collect fees on a variety of telecommunications services including wireline, wireless, "prepaid wireless," and VoIP. DOT has recognized the relationship between wireless E911 services and highway safety and, in 2001, contracted with NENA to develop a state/county database that tracks E911 implementation. As part of the contract, NENA created a database of counties, including information about implementation of wireless E911, which is updated with data gathered directly from state and county representatives. Now completely funded by NENA, the database is accessible through http://www.nena.org.

The New and Emerging Technologies 911 Improvement Act of 2008 (NET 911 Act) requires FCC to submit an annual report to Congress detailing the status in each state of the collection and distribution of fees or charges for the support or implementation of 911 or E911 services to ensure transparency and accountability. The annual reports are to include findings on the amount of revenues obligated or expended by each state or political subdivision thereof for any purpose other than the purpose specified in the state or local law adopting the fee or charge. FCC has submitted four reports to Congress covering the state activities of calendar years 2008 to 2011. In addition, the National 911 Program—housed within NHTSA's Office of Emergency Medical Services—has helped to provide federal leadership and coordination in supporting and promoting optimal 911 services.

Because of changes in the public's use of communications technology and the aging infrastructure of the legacy 911 network, 911 services are transitioning to an NG911 system that uses Internet Protocol (IP)-based technology to deliver and process 911 traffic. Such a system will provide increased capabilities as shown in table 1. With NG911, PSAPs are expected to be able to process all types of emergency communications including voice, data, and video. According to NENA, Emergency Services IP Networks are among the basic building blocks required for NG911. They are managed, multipurpose networks that support public safety communications services and use broadband technology capable of carrying voice plus large amounts of data using Internet protocols and standards.
As part of the NG911 Initiative, DOT has created an NG911 system design and tested it to show that the design will be capable of accommodating communications from a wider range of devices including cellular calls, instant messaging, wireline calls, "telematics" (automatic crash notification data directly from the vehicle), VoIP calls, and live video feeds. In September 2009, NHTSA and NTIA announced more than $40 million in grants to help PSAPs implement E911 and NG911 technologies. To be eligible for the program, the applicant had to certify that the state and other taxing jurisdictions within the state had not used designated E911 funds for any purpose other than that for which they were designated within 180 days preceding the application date. The grant period concluded at the end of 2012. In all, NHTSA and NTIA awarded grants ranging from $200,000 to $5.4 million to 30 states and territories to help implement NG911 services. NHTSA officials told us that they are currently conducting an evaluation of the grant program and that they will release a final report on http://www.911.gov.

Although states faced challenges and delays in the past, they have made significant progress implementing wireless E911. According to NENA data as of March 2013, 98 percent of PSAPs are capable of receiving Phase I location information and 97 percent have implemented Phase II for at least one wireless carrier. This represents a significant improvement in implementation since our previous reports in 2003 and 2006, as shown in table 2.

According to NENA data, 142 U.S. counties (representing roughly 3 percent of the U.S. population) do not have wireless E911 service. According to federal and association officials, these areas are primarily rural or tribal counties that face special challenges implementing wireless E911 service. According to the National 911 Program, rural agencies may lack the funding resources needed for technology upgrades, equipment, and training. Rural and tribal areas typically are large geographically but less densely populated than urban areas. In addition, because it may take first responders longer to reach the scene of an emergency, call-takers in PSAPs serving rural areas may be required to stay on the phone longer with callers or provide more extensive emergency instruction to callers until help arrives. Furthermore, federal and local officials told us about the following specific challenges facing rural and tribal areas:

Tribal lands face special challenges related to 911 services because of several barriers to improving telecommunications on tribal lands. We have previously reported that the barriers to improving telecommunications on tribal lands most often cited by tribal officials, service providers, and others we spoke with were the rural, rugged terrain of tribal lands and tribes' limited financial resources. These barriers increase the costs of deploying infrastructure and limit the ability of service providers to recover their costs, which can reduce providers' interest in providing or improving telecommunications services. Other barriers include the shortage of technically trained tribal members and providers' difficulty in obtaining rights of way to deploy their infrastructure on tribal lands.

The limited emergency response resources typical of rural areas can be relatively quickly overwhelmed in disasters or large-scale incidents, according to the National 911 Program.
For example, officials from rural counties in one state told us that their PSAPs were overwhelmed with multiple calls following a recent train derailment. These calls paralyzed their 911 systems and prevented other 911 calls from reaching the PSAPs during the incident.

According to FCC officials, network-based "triangulation"—a solution used by some wireless carriers to determine a caller's location—depends on the ability of three cell towers to access the caller's mobile device. Network-based triangulation can be particularly challenging in rural areas that have fewer cell towers than more densely populated areas. (A simplified illustration of this technique appears at the end of this section.)

According to rural officials in one state we contacted, some homes in rural areas do not have addresses and some streets do not have names. Before E911 can be implemented in these areas, addresses will have to be created and mapping of those addresses will have to be completed so that automated location services can be provided.

Providing E911 services is primarily a state and local government responsibility, but USDA has programs that are available to help rural and tribal areas gain access to wireless E911 services. On September 12, 2011, USDA adopted a final rule that described program eligibility requirements for a 911 Access Loan Program to make loans and loan guarantees to finance the construction of interoperable, integrated public safety communications networks in rural areas. These networks offer several advantages, including the ability to precisely locate rural wireless 911 calls. Funds for this program are available through the Rural Utilities Service's traditional Telecommunications Infrastructure Loan Program. In addition, USDA's Community Facilities Program supports essential infrastructure and services for public use in rural areas of 20,000 in population or less. Financing for community facilities projects covers a broad range of interests, including health care, education, public safety, and public services. A USDA official said that this program could be used in a variety of ways to help rural areas gain access to wireless E911, including constructing PSAPs or providing the necessary equipment, software, computer networks, and power supplies.

Even though some rural and tribal counties do not have wireless E911 service, almost 97 percent of the overall population has some Phase I wireless coverage and approximately 98 percent has some Phase II wireless coverage, according to NENA data. Furthermore, as shown in figure 2, 25 states and the District of Columbia have fully implemented wireless E911 Phase I and Phase II in all counties.

As we reported in 2006, all 50 states and the District of Columbia collect—or have authorized local entities to collect—funds for 911. State methods for collecting funds vary in structure, fee amounts, and services covered, among other things. For example, some states collect fees or charges for 911 and administer a statewide 911 program. Other states authorize local entities to collect fees or charges for 911 and to administer 911 programs at the local level. Still other states use a combination of these approaches. However, some local jurisdictions have not begun collecting 911 funds even though they are authorized by their state to do so. Representatives from a rural county with a population under 5,000 told us that their county had not begun collecting 911 funds—even though they have state authorization to do so—because they would have had to collect $10 per line per month to obtain enough funding to implement E911.
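Before turning to how the collected funds were used, here is the simplified illustration of network-based location referenced earlier. One common way to compute a position from three towers is trilateration on estimated tower-to-handset distances. The sketch below is purely illustrative: it uses flat-plane coordinates, exact distances, and made-up tower locations, whereas real deployments work with noisy measurements in geodetic coordinates.

```python
# Illustrative 2-D trilateration: estimate a caller's position from the
# distances to three cell towers. Tower coordinates and ranges are
# hypothetical; real carrier systems are far more involved.

def trilaterate(towers):
    """towers: three (x, y, distance) tuples. Returns estimated (x, y)."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = towers
    # Subtracting the circle equations pairwise yields two linear equations
    # in x and y, solved here by Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        raise ValueError("towers are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Caller actually at (3, 4); each range is the distance from a tower.
print(trilaterate([(0, 0, 5.0), (10, 0, 8.0623), (0, 10, 6.7082)]))  # ~(3.0, 4.0)
```

With only two towers, the two range circles generally intersect at two points, so the position is ambiguous; this is why the availability of a third tower matters in sparsely covered rural areas.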
Overall, in response to FCC's request that states report the total amount of 911 funds collected in calendar year 2011, 43 states reported collecting—or authorizing local entities to collect—a total of about $2.3 billion, although because of how this information was collected, the actual amount collected may be higher. States also reported a range of fees collected. For example, states reported wireline and wireless fees ranging from $0.08 to $5.00 per customer per month. According to FCC's report, most states reported using 911 funds for purposes consistent with their funding statutes in 2011. In addition to spending 911 funds on implementing wireless E911 service, states and localities use 911 funds for operations, maintenance, personnel, and NG911 preliminary activities, among other things. However, six states—Arizona, Georgia, Illinois, Maine, New York, and Rhode Island—reported using almost $77 million of funds collected for E911 implementation for other purposes in 2011, as detailed below. State laws permit using funds for these purposes.

Arizona. The state reported transferring 13 percent (or about $2.2 million) of funds collected for 911 purposes to its general fund to help address the state's budget crisis. Arizona also transferred 911 funds to its general fund in 2009 and 2010. According to state officials, these transfers occurred as part of a state budget bill that authorized the transfers. Once funds were transferred to the general fund, Arizona 911 officials could not be certain how they were spent. An Arizona official said that, because of the transfers to the general fund, Arizona had to return a $1.25 million grant to NHTSA and NTIA that would have been used to help Arizona with its deployment of Phase II of wireless E911.

Georgia. The state reported collecting $13.7 million in 911 fees for prepaid wireless phones and did not allocate any of these funds for 911 use. Georgia also collected fees on prepaid wireless phones in 2009 and 2010 but did not allocate these funds for 911 use. According to a written response from a Georgia official, Georgia law does not require that these funds be appropriated for 911 purposes. The funds were collected and deposited into the state's general fund in accordance with state law.

Illinois. The state reported legislatively transferring $2.9 million out of the state's 911 fund in state fiscal year 2012; that fund is financed by a statewide fee on wireless subscribers, and from it the state makes monthly distributions to local 911 authorities. According to state officials, these funds were transferred to another fund to maintain that fund's liquidity. Moreover, in calendar years 2010 and 2011, the state borrowed $1.4 million and $5.2 million from the state's fund used to reimburse wireless carriers for E911-related expenses, which is also funded by the statewide fee on wireless subscribers. These borrowed funds were repaid within 18 months, as required by Illinois law.

Maine. As part of personnel service reduction initiatives, the state reported imposing across-the-board furloughs and benefit reductions on state employees, including personnel in the state 911 office, and a little less than $25,000 was transferred from the state's 911 fund to the state's general fund in 2010 and 2011.
Because the salaries and benefits for employees in the state 911 office are paid for exclusively through 911 funds, the funds that went to the state's general fund for the furlough days and benefit reductions constituted a use of 911 funds for purposes other than 911, according to the state's submission to FCC, although the transfers were made in accordance with state law. As a result, Maine was ineligible for 911 grant funds from NHTSA and NTIA.

New York. According to state officials, New York transferred $45 million from the State Wireless Telephone Emergency Account to the state's general fund, and made similar transfers to the general fund in 2009 and 2010. According to state officials, the transfer of these funds, authorized by state statute, did not affect the ability of the state to reimburse municipalities for approved 911 expenditures or to otherwise support its 911 programs.

Rhode Island. Per the state's method of funding 911, as provided for in state statute according to state officials, revenues from the state's 911 fees are deposited into the state's general fund, and the 911 program receives its budget from the general fund. In 2011, approximately $17.3 million was collected, but only approximately $4.8 million was appropriated for the 911 program, leaving about $13 million in the general fund. Fee revenues were similarly distributed in 2010 and 2011.

The District of Columbia and Louisiana did not report to FCC on their use of 911 fees and charges for calendar year 2011. We made several attempts to obtain this information, but officials did not respond to us. However, we can provide information from their reports to FCC in previous years. District of Columbia. The District of Columbia reported to FCC in 2011 and 2010 on its collection and use of 911 taxes and fees. FCC did not report that funds were used for purposes other than 911. Louisiana. Louisiana did not submit a report to FCC on its taxes and fees in 2010, but did in 2011. In that report, Louisiana did not directly state whether funds were used for anything other than 911 purposes, and FCC did not report that the state had used funds for other purposes.

We have previously reported that misalignment between fees and the services for which they are charged reduces both equity and economic efficiency. Moreover, stakeholders in other industries have reported that misalignment between the amount of fee collections and expenditures undermines the credibility of the fee. As states collect funds for 911 purposes and then use those revenues for other purposes, there is a risk of confusing stakeholders and members of the public who pay these fees and of undermining the credibility of 911 fees. However, states occasionally pass laws allowing the use of 911/E911 fees for non-E911 purposes. FCC officials have stated that they do not have the authority to override state law in this regard.

We have identified three features of FCC's approach to collecting and reporting information from states that are contrary to best practices set forth in our previous reports on data collection and analysis and that have limited the usefulness of FCC's reports. Specifically, in its approach, FCC (1) uses only open-ended questions to solicit information from states, (2) lacks written guidelines for interpreting states' responses and ensuring that results can be reproduced, and (3) does not describe the methodology used to analyze the information in states' reports. FCC has used only open-ended questions to solicit information on state fees and charges for 911 services.
FCC officials stated that they regard this approach as the most effective way to elicit responsive information from the states because it requires the states to explain their definitions and procedures in plain language rather than responding to "yes/no" questions or submitting purely quantitative data. We have previously reported that while open-ended questions may be unavoidable when engaged in exploratory work and can be useful to obtain responses that might further clarify the meaning of answers to closed-ended questions, open-ended questions have several limitations. When answering open-ended questions, respondents may provide wide-ranging responses that vary and may result in inconsistent information, making it very difficult to consistently and completely tabulate or aggregate responses. Closed-ended questions, on the other hand, can yield data that may be easier to meaningfully track and compare.

FCC asks states to report, among other things, the amount of 911 fees or charges imposed and the total amount collected. States' responses to this question varied widely, and respondents often omitted relevant information. For example, some states clearly identified the services—wireline, wireless, prepaid, and VoIP—to which fees applied, while other states did not specify the services to which fees applied. Because states were not specifically asked whether they collected fees for specific services, it is unclear whether these counts are inclusive or exclusive of these specific services. If FCC had asked closed-ended questions that required respondents to address such distinctions, FCC would have been better able to consistently track fees for various services over time, which could address matters such as whether 911 funding is evolving with changing technology. However, because this information has not been solicited in a way that allows it to be tracked, trend analysis is not possible. Moreover, when reporting the total amount of funds collected, states vary widely in their manner of reporting. Some states provide a total amount without any distinguishing features. Some states break out the amount collected by state and local authorities; others break out the amount collected by type of service. Some states provide an actual number whereas others provide an estimate. Because of the open-ended question format, it is nearly impossible to aggregate these results in a useful manner.

FCC does provide all state responses in an appendix to its annual reports, and FCC officials stated that doing this facilitates public review and discussion. The inclusion of these state submissions can support public review, particularly in examining the relationship among responses in a particular state. However, the provision of the state reports may not readily lend itself to obtaining specific or discrete types of information from the responses to the open-ended questions. For example, one item asks whether the state has written criteria regarding the allowable uses of the collected funds. This item is embedded in a request for multiple pieces of information. If any interested parties wanted to know how many states and which ones reported having written criteria, they would have to read through all the responses to that item for all submitting entities to obtain the information sought. A closed-ended item could readily capture which states do and do not have written criteria.
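To illustrate why closed-ended items aggregate so readily, consider the minimal sketch below. The state names and answers are hypothetical, not actual submissions; the point is that a fixed answer set permits a one-line tally and a direct listing of which respondents reported having written criteria.

```python
# Sketch of aggregating closed-ended responses: with a fixed set of
# allowed answers, tallying and filtering are trivial and reproducible.
# The responses below are hypothetical, not actual state submissions.
from collections import Counter

responses = {
    "State A": "yes",
    "State B": "no",
    "State C": "yes",
    "State D": "no information",
}

# Count how many respondents gave each answer.
tally = Counter(responses.values())
print(tally)  # Counter({'yes': 2, 'no': 1, 'no information': 1})

# List exactly which respondents reported having written criteria.
with_criteria = sorted(s for s, ans in responses.items() if ans == "yes")
print("Reported written criteria:", ", ".join(with_criteria))
```

An open-ended narrative answer, by contrast, must first be read and coded by an analyst before any such tabulation is possible, which is where the content-analysis process discussed next comes in.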
We have previously reported that with open-ended questions, the responses are often textual and not easily tabulated, and a process called "content analysis" must be used to classify or code the responses. As part of the process of conducting content analysis, a coding manual should be prepared for use by those classifying the responses. A good coding manual is viewed as indispensable in ensuring coding of the highest quality, and an important measure for judging the quality of a content analysis is the extent to which the results can be reproduced. However, FCC officials told us that, while they describe the methodology used to collect the data from states, they do not have an internal written coding manual or similar document that describes how the content of a state's responses is interpreted or coded. FCC officials noted that this had not been problematic to date because the same FCC staff members had conducted the analysis each year but indicated that development of such a manual would be helpful to ensure future continuity. Because there is no written documentation of how this analysis was conducted or of the decision rules that FCC followed in developing its summary classification of responses, there is no basis for independently reproducing the results of FCC's analysis.

We also found that FCC has not been consistent in how it makes certain characterizations in its reports. For example, we identified three states—Georgia, Maine, and New York—that provided similar responses each year but whose responses FCC characterized differently in different years. Maine, for instance, reported that it had transferred funds from the 911 fund to the general fund in calendar years 2010 and 2011 as part of a statewide personnel reduction initiative, as described above. FCC characterized Maine as using 911 funds for other purposes in its 2012 report but not in its 2011 report. In another example, Georgia reported that 911 fees on prepaid wireless devices remained in the state's general fund rather than being allocated for 911 use in its 2010, 2011, and 2012 reports to FCC. FCC identified Georgia as having used funds for other purposes in its 2010 and 2012 reports but not in its 2011 report. FCC officials acknowledged that these three states should have been identified as using funds for other purposes in its 2011 report but said that officials corrected this in the 2012 report. However, FCC did not indicate in the 2012 report that a mistake was made in the 2011 report. A reader who noted that the states were not listed as having used funds for other purposes in the 2011 report might believe that these states changed their practices from one year to the next, when, in fact, the states reported essentially the same information each year.

We also identified inconsistent characterizations in FCC's summary table, which indicates whether states used 911 fees or charges for other purposes. In 2012, FCC used four ways of coding whether states used funds for other purposes—"Yes," "No," "No information," and "DNP" (defined in FCC's report as "did not provide"). However, "DNP" was used in three very different circumstances: when the state did not submit a report to FCC, such as Louisiana and the District of Columbia; when the state did not provide an answer to the question of whether the state used funds for other purposes; or when the state indicated that all or a portion of the funds are controlled by local entities and that the state could not be certain how the funds were used.
As an example of this last case, several states indicated that local entities control some expenditure decisions; these responses in some cases received the "DNP" designation but in other cases received the designation "No" or "No information." If FCC had written guidelines for interpreting state responses, it could have ensured more consistent characterization of state responses.

According to FCC's Information Quality Guidelines—which are meant to ensure that all data FCC disseminates reflect a level of quality commensurate with the nature of the information—quality is demonstrated through the incorporation of a methodological section or appendix that describes, at a minimum, the design and methods used during the creation, collection, and processing of the data, as well as the compilation or analysis of the data in products, including reports prepared for Congress. In its annual reports, FCC included a detailed description of its methodology for collecting responses from states. For example, FCC describes how, in addition to the public notices, FCC sent letters to the Office of the Governor of each state and territory and the Regional Directors of the Bureau of Indian Affairs requesting the information sought in the public notices. FCC sent second notice letters and placed calls to those states and territories that had not responded. However, FCC has not published its methodology for how the report's analysis was conducted. In particular, FCC has not included in its annual report a description of the decision rules used in determining whether a state used 911 funds for other purposes. As stated in the previous section, FCC does not have an internal written procedures manual or similar document that describes how the content of a state's responses is interpreted or coded. If FCC had one, it could use information from that coding manual to explain its analysis and decision rules in its annual report.

The lack of a description of the methodology for FCC's analysis is particularly problematic because FCC officials told us that FCC changed its method of making analysis decisions in its most recent report. FCC officials stated that, based on their experience with the first three information collections and associated reports, FCC revised the questions included in its 2012 information request. Specifically, one question was modified to elicit specific information on the programs and activities for which 911 funds were used along with how those programs and activities support 911. According to FCC officials, this modification enabled FCC to classify states' responses with greater accuracy. While FCC's 2012 report clearly states that modifications were made to the questions, and each annual report includes the questions included in that year's information request, the effects of these changes are not clear to the reader. In some cases, this methodological change resulted in differing characterizations from the report issued in 2011 to the one issued in 2012, and it is not clear to the reader whether states no longer characterized as having used funds for other purposes had changed their practices or whether the different characterization was a result of FCC's change in methodology. For example, in FCC's 2011 report, FCC identified both Virginia and West Virginia as states that had used 911 funds for other purposes.
However, based on the additional information provided by these states in 2012, FCC determined that Virginia and West Virginia spent 911 funds in accordance with their respective state statutes governing 911 funding and therefore did not identify them as using funds for other purposes. According to FCC officials, in gathering information for 2012, FCC asked additional questions to identify the specific uses of 911/E911 funds that were authorized under state law. They also characterized a state as using E911 funds for purposes other than E911 only if the state reported that it used 911/E911 funds for purposes not designated by the state's funding statute. Because FCC has not published its methodology and decision rules for determining whether a state used 911 funds for other purposes, and never explicitly stated that a different method was used in 2012, this lack of disclosure could lead report users to misinterpret the results shown in the report. In particular, although it would appear that, over time, fewer states were using funds for other purposes, at least some of this difference is attributable to FCC's change of methodology.

We have previously reported that results-oriented organizations make sure that the information they collect is sufficiently complete and accurate to support decision making. FCC officials stated that seeking narrative responses from each state and publishing those responses demonstrate transparency. However, several pages of individual and varied responses may have limited usefulness to decision makers, who may need high-level descriptions and aggregated information. Furthermore, FCC is missing an opportunity to analyze funding trends because its method of asking questions does not result in answers that can be readily tracked from year to year. FCC is also missing an opportunity to provide more detailed aggregated information in its reports—such as amounts of fees, services covered, and total amount of funds collected—that would be helpful to decision makers who are trying to understand current methods of financing 911. FCC officials told us that they are seeking comment from stakeholders on FCC's required annual report to Congress, as well as on information provided by states and other reporting entities, and that they will use this information to improve reporting.

To implement NG911 nationwide, states must address technology, regulatory, and funding challenges, according to multiple government officials. For example, technological changes need to be made at PSAPs because existing call centers are incapable of some critical functions, such as linking with one another during emergencies. Calls currently cannot be transferred between PSAPs, so PSAPs have limited means to act as backups for one another when operations in one part of the country become overloaded or shut down because of circumstances such as hurricane evacuations or wildfires, according to DOT's Research and Innovative Technology Administration. With respect to regulatory challenges, current laws and regulations in most states do not effectively enable the implementation of new technologies or allow the level of coordination and partnerships among government and public safety stakeholders, service and equipment providers, PSAPs, and 911 authorities that is necessary to implement IP-enabled 911 systems, according to NHTSA.
Moreover, in the National Broadband Plan, FCC noted that many of the existing state and federal regulations governing 911 were written before the technological capabilities of NG911 existed and have therefore hampered the implementation of NG911. For example, state, association, and industry officials have expressed concern about uncertainty regarding liability protection related to NG911. Stakeholders also expressed concerns about funding mechanisms for NG911. State revenues from long-established funding methods tied to wireline services are decreasing as more consumers disconnect their traditional home phones in favor of wireless devices or other services such as mobile VoIP.

Despite NG911 implementation challenges, many states have started funding preliminary NG911 activities, and some areas have developed regional NG911 projects. For example, in responding to FCC's data collection effort, 33 states reported that expenditure of 911/E911 funds for NG911 activities is permissible under current state law. Of these, 16 states reported that funds had been expended in 2011 for some NG911 activities, including planning, network development, and equipment acquisition. As an example of a regional NG911 project, the Counties of Southern Illinois Next Generation 911 project has been identified by NENA as an early adopter of a regional approach to NG911. The project includes connecting 21 PSAPs through an Emergency Service IP Network, creating identical data centers in 2 counties, and obtaining NG911 equipment and information for a 15-county region in southern Illinois. Similarly, the state of Texas is conducting an NG911 project that is partially funded with federal grants from NHTSA and NTIA. The project involves constructing a detailed geospatial database of over 200 Texas counties that will be needed for a statewide NG911 system. The database should allow the new system to pinpoint the PSAP that needs to respond to a caller based on location.

Even though 911 services remain primarily a state and local government responsibility and NG911 overall is in the early planning stages, FCC is working with federal, state, and private sector partners to help states address NG911 implementation challenges. For example, one of FCC's federal advisory committees—the Communications Security, Reliability, and Interoperability Council (CSRIC)—makes recommendations to FCC to promote reliable 911 service and issued a report in March 2011 framing the issues involved in transitioning to NG911. CSRIC members are selected from public safety agencies, consumer or community organizations or other nonprofit entities, and the private sector. FCC also released a 5-point plan, based on recommendations made in the National Broadband Plan, to encourage NG911 implementation and to help states address some of the technology, regulatory, and funding challenges to implementation. Key elements of FCC's 5-point plan include the following:

Develop location accuracy mechanisms for NG911. Existing location technologies do not perform effectively in all environments. For example, global positioning technologies may not work deep inside a steel-and-concrete building, or even in a suburban residential basement, but may work in wood frame construction or near office windows. FCC officials said CSRIC plans to release a report in 2013 on indoor location accuracy.

Enable consumers to send text, photos, and videos to PSAPs. In December 2012, FCC issued a notice of proposed rulemaking examining rule changes meant to enable people to send text messages to 911.
The proposal was based on the voluntary commitment by the four largest U.S. wireless carriers to make text-to-911 available to their customers by May 15, 2014. The proposed rulemaking would also require all wireless carriers and interconnected text-messaging providers to send automatic "bounce back" error messages by June 30, 2013, to consumers attempting to text 911 when the service is not available, in order to inform consumers and prevent confusion. Additionally, NHTSA and NTIA have made more focused efforts to address NG911 technology challenges. As required in the New and Emerging Technologies 911 Improvement Act of 2008, NHTSA's and NTIA's National E911 Implementation Coordination Office developed a national plan in September 2009 for migrating to IP-Enabled 911 Systems, which lays a foundation for addressing technological challenges associated with enabling consumers to send text, photos, and videos to PSAPs.

Facilitate the completion and implementation of NG911 technical standards. CSRIC has identified technical standards, related technical gaps, and the overall readiness of NG911 applications. In addition, CSRIC has classified the importance and urgency of resolving the identified technical gaps.

Develop a governance framework for NG911. As required by the Next Generation 911 Advancement Act of 2012, FCC released a report in March 2013 with detailed recommendations to Congress to create a new legal and regulatory framework for transitioning from legacy 911 to NG911 networks. The report includes detailed information on the major NG911 challenges and 24 specific recommendations to Congress and others, such as state and local public safety authorities, to address the challenges. For example, FCC recommended that Congress promote a consistent nationwide approach to key elements of NG911 deployment, including standards that support seamless communication among PSAPs and between PSAPs and emergency responders; appropriate liability protection to encourage technological innovation and rapid deployment of NG911; and provisions to make NG911 fully accessible to people with disabilities. In addition, NHTSA has developed guidelines for state NG911 legislative language to help address state regulatory challenges. In doing so, NHTSA obtained input from local, regional, state, and federal public-sector stakeholders, as well as private-sector industry representatives and advocacy associations. NHTSA has also worked with the National Conference of State Legislatures to create a database of 911 bills that have been introduced in the 50 states and the District of Columbia. The information is updated biweekly and includes information on multiple topics, including funding and appropriations.

Develop a funding model for NG911. Based on a CSRIC recommendation, NHTSA is currently working with a contractor with expertise in economics and a Blue Ribbon Panel to help states develop new options for funding 911. According to NHTSA officials, a report on this effort is expected to be released in 2014. In addition, in FCC's 2013 report to Congress on the legal and regulatory framework for NG911 services, FCC made three recommendations to Congress for updating NG911 funding mechanisms.
Specifically, FCC recommended that Congress (1) develop incentives for states to broaden the base of contributors to NG911 funding to more accurately reflect the benefits derived from NG911 service, (2) encourage states to provide funding for NG911 as well as legacy 911 purposes as part of any existing or future funding mechanism, and (3) condition grants and other appropriate federal benefits on a requirement that funds collected for 911/NG911 funding be used only for 911 or NG911 purposes and provide for appropriate enforcement of such requirements.

Most of the country has now implemented wireless E911 services, but this took over a decade to accomplish. New technology and eroding funding mechanisms have highlighted the need for 911 to evolve to a new system that can accommodate next generation technologies and that is based on an adequate source of funding to maintain the system. For NG911 to avoid the slow start that wireless E911 experienced, networks will need to be formed, which will require regulatory changes at multiple levels of government. Although NG911 is still in its nascent form, FCC, DOT, and others in the federal government are working together to conduct the research and planning needed to provide the foundation for states to address the technology, regulatory, and funding challenges to implement NG911 more efficiently than they implemented E911. Notably, FCC's March 2013 report identified potential steps for Congress to take to create a legal and regulatory environment that will assist states, PSAPs, service providers, and other stakeholders in accelerating the nationwide transition from legacy 911 to NG911. The report provided 24 specific recommendations to Congress and others, such as state and local public safety authorities, to address the challenges of implementing NG911.

FCC has been collecting and reporting information on states' use of 911 and E911 funds on an annual basis for 4 years and, as mandated by law, will continue to do so. Collecting and reporting this information requires resources from both FCC and the states, so it is in the best interest of all parties for the information to be presented in the most useful way possible. Given that FCC's future annual reports will likely include information on the transition to NG911 services, it is important that FCC collect information in a way that can be tracked over time. For example, as the federal government assists states in transitioning to a potential new funding system, it would be helpful to have information that tracks current trends and patterns in state funding. However, because FCC's method of asking questions does not result in answers that can be tracked from year to year, there is no federal tool that can be used at this time to understand how or if states are adjusting their funding for the transition to NG911. Furthermore, FCC is missing an opportunity to provide more detailed aggregated information in its reports, such as amounts of fees, services covered, and total amount of funds collected, that would be helpful to decision makers. For example, having more readily accessible, detailed information about the current status of 911 funding would provide decision makers with a better understanding of how to address the challenges that arise in funding NG911 services.
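The kind of written guidance discussed above can be made concrete in a few lines of code. The following is a minimal sketch that applies a hypothetical set of written decision rules to simplified, structured state responses and tabulates the results by year; the category names, response records, and decision rules are our own illustrative assumptions, not FCC's actual data or methodology.

```python
# Minimal sketch (not FCC's actual methodology): explicit, written decision
# rules for coding state responses, applied identically every year so that
# results are reproducible and comparable across reports.
from collections import Counter

def code_response(response):
    # Hypothetical coding manual: ordered rules mapping a state's response
    # to exactly one category. All categories and rules are illustrative.
    if response.get("report_submitted") is False:
        return "No report submitted"       # distinct from "did not answer"
    answer = response.get("used_funds_for_other_purposes")
    if answer is None:
        return "No answer to question"
    if response.get("local_control") and answer == "unknown":
        return "Locally controlled - use unknown"
    return "Yes" if answer else "No"

# Hypothetical structured (closed-ended) responses for two reporting years.
responses = {
    2011: [
        {"state": "A", "report_submitted": True,
         "used_funds_for_other_purposes": True},
        {"state": "B", "report_submitted": False},
        {"state": "C", "report_submitted": True, "local_control": True,
         "used_funds_for_other_purposes": "unknown"},
    ],
    2012: [
        {"state": "A", "report_submitted": True,
         "used_funds_for_other_purposes": False},
        {"state": "B", "report_submitted": True,
         "used_funds_for_other_purposes": None},
        {"state": "C", "report_submitted": True, "local_control": True,
         "used_funds_for_other_purposes": "unknown"},
    ],
}

# Because the rules are written down and deterministic, anyone can reproduce
# the coding, and the same categories can be tallied year over year.
for year, year_responses in sorted(responses.items()):
    tally = Counter(code_response(r) for r in year_responses)
    print(year, dict(tally))
```

Under rules like these, a single "DNP" code no longer conflates a missing report, an unanswered question, and locally controlled funds, and year-to-year changes in the tallies reflect changes in state practices rather than changes in coding.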
Following best practices for data collection and analysis—such as using closed-ended questions when possible and clearly communicating how open-ended information is coded and analyzed—would help ensure that the information FCC collects is measurable and can be tracked, resulting in more useful information for Congress and others who are researching funding mechanisms for the future of 911 services.

We recommend that the Chairman of FCC follow best practices for data collection and analysis to improve FCC's current method of collecting and reporting information on states' use of 911 funds by, for example, using closed-ended questions when possible, developing written internal guidance for analyzing data, and fully describing the methodology for its report.

We provided a draft of this report to FCC and DOT for their review and comment. In response, FCC concurred with our recommendation to improve its current method of collecting and reporting information on states' use of 911 funds. FCC stated that it is examining ways to augment its current collection of information to yield more precise information and to provide more quantitative data in future reports. Specifically, FCC noted that it will (1) consider using closed-ended questions as part of future data collections to facilitate tracking and analyzing data, (2) provide greater clarity in its guidelines for analyzing data, and (3) include a more detailed description of its methodology in future reports. FCC further stated that it has taken a variety of steps to enhance the transparency and usefulness of the information it gathers and has sought comment on the accuracy and completeness of state responses to FCC's information collection. FCC officials believe these steps will also improve the accuracy and efficacy of its reporting. FCC's written comments are reprinted in appendix II. DOT provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Chairman of FCC, the Secretary of Transportation, and interested congressional committees. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The Next Generation 911 Advancement Act of 2012 mandated that we review states' collection and use of 911 funds. This report presents information on (1) the progress that has been made in implementing wireless Enhanced 911 (E911) in the last decade, (2) the extent to which states are collecting and using 911 funds for 911 purposes and the usefulness of FCC's reporting about this issue, and (3) challenges to implementing NG911 services and federal efforts to facilitate its deployment. To address these issues, we interviewed federal, state, regional, and association representatives. We interviewed officials from the Federal Communications Commission (FCC) regarding states' collection and use of E911 funds and the progress made in deploying wireless E911 and NG911 throughout the country. We also interviewed officials from offices within the Departments of Transportation (DOT) and Agriculture about E911 and NG911 deployment.
We interviewed representatives from associations including the National Emergency Number Association (NENA), the National Association of State 911 Administrators, the Association of Public-Safety Communications Officials, CTIA-The Wireless Association, and the Competitive Carriers Association about states' collection and use of E911 funds and about E911 and NG911 deployment. We visited Illinois, where we interviewed officials from the State 911 Office, officials associated with a regional NG911 project, and representatives from rural counties in southern Illinois that have not yet begun E911 implementation. We obtained and examined relevant reports and materials from these officials and representatives. We selected Illinois because we were informed about the regional NG911 project in southern Illinois by stakeholders and because Illinois reported using 911 funds for other purposes to FCC in its 2010, 2011, and 2012 reports. We also interviewed Texas officials with responsibility for the state's NG911 pilot project because Texas received the largest E911/NG911 grant from NTIA and NHTSA and because stakeholders mentioned that the state was making progress on implementing NG911. Information obtained from Illinois and Texas is not generalizable to any other states. In addition, we gathered further information from state officials in the five other states that reported using E911 funds for other purposes in their 2012 reports to FCC: Arizona, Georgia, Maine, New York, and Rhode Island. We also attempted to contact jurisdictions that did not respond to FCC's request for information—Louisiana, the District of Columbia, American Samoa, the Northern Mariana Islands, and the U.S. Virgin Islands—but none responded to our request.

To understand the progress that has been made in deploying wireless E911 services throughout the country, we reviewed our previous reports on wireless E911 implementation in 2003 and 2006, and we obtained and analyzed county- and state-level E911 deployment data collected by NENA as of December 2012. To determine the reliability of these data, we reviewed relevant documentation and interviewed cognizant officials about their processes for reviewing the data and ensuring their accuracy. We determined that the NENA data were sufficiently reliable for the purposes of our report.

To determine the extent to which states are collecting and using E911 revenues for E911 purposes and the usefulness of FCC's reporting about this issue, we obtained FCC's 2010 through 2012 annual reports on state collection and distribution of 911 and E911 fees and charges, as well as states' responses to FCC's information-collecting effort upon which FCC's annual reports are based. We analyzed the states' reports to FCC, comparing the information that the states provided to the information FCC reported. We also performed year-to-year comparisons, identifying differences in how FCC characterized states' responses in different years. To determine the reliability of these data, we reviewed relevant documentation and interviewed cognizant officials about their processes for reviewing the data and ensuring their accuracy. Except where we have noted some inconsistencies and concerns with FCC's analysis of state-reported information, we consider the data sufficiently reliable for the purposes of this report.
In assessing the usefulness of FCC's reporting, we reviewed best practices set forth in our previous reports and other professional literature on methods for collecting, analyzing, and reporting information and data. To identify federal efforts to facilitate NG911 services, we reviewed FCC's report to Congress entitled Legal and Regulatory Framework for Next Generation 911 Services as well as associated stakeholders' responses to FCC's public notice on NG911. In addition, we reviewed relevant laws and regulations pertaining to E911 and NG911, including the Wireless Communications and Public Safety Act of 1999, the ENHANCE 911 Act of 2004, the New and Emerging Technologies 911 Improvement Act of 2008, and various state laws governing the collection and use of 911/E911 fees. We also reviewed relevant reports from FCC, DOT, the Congressional Research Service, industry, and other stakeholders, including FCC's National Broadband Plan.

In addition to the contact named above, Sally Moino, Assistant Director; Thomas Beall; Amy Higgins; David Hooper; SaraAnn Moessbauer; Joshua Ormond; Amy Rosewarne; and Rebecca Rygg made key contributions to this report.
Wireless E911 service refers to the capability of 911 call takers to automatically receive location information from 911 callers using mobile phones. The current E911 system is not designed to accommodate emergency communications from the range of new technologies in common use today that support text, data, and video. Although deploying wireless E911 and NG911 is the responsibility of state and local governments, FCC is required by law to report annually on the funds states collect to provide 911 services such as E911. The Next Generation 911 Advancement Act of 2012 required GAO to review states' collection and use of 911 funds. In this report, GAO presents information on (1) progress implementing wireless E911 in the last decade, (2) states' collection and use of 911 funds and the usefulness of FCC's reporting on this issue, and (3) challenges to implementing NG911 services and federal efforts to facilitate its deployment. GAO reviewed FCC's annual reports, states' responses to FCC's information-collecting efforts, and documents from FCC and DOT regarding E911 and NG911. GAO reviewed best practices for collecting and analyzing data and interviewed federal and state officials and other stakeholders.

Although states faced challenges and delays in the past, they have made significant progress implementing wireless Enhanced 911 (E911) since 2003. Wireless E911 deployment usually proceeds through two phases: Phase I provides general caller-location information by identifying the cell tower or cell site that is receiving the wireless call; Phase II provides more precise caller-location information, usually within 50 to 300 meters. Currently, according to the National Emergency Number Association (NENA), nearly 98 percent of 911 call centers, known as Public Safety Answering Points (PSAPs), are capable of receiving Phase I location information, and 97 percent have implemented Phase II for at least one wireless carrier. This represents a significant improvement since 2003, when implementation of Phase I was 65 percent and Phase II was 18 percent. According to NENA's current data, 142 U.S. counties (representing roughly 3 percent of the U.S. population) lack some level of wireless E911 service. The areas that lack wireless E911 are primarily rural and tribal areas that face special implementation challenges, according to federal and association officials.

According to data collected by the Federal Communications Commission (FCC), all 50 states and the District of Columbia reported collecting—or authorizing local entities to collect—funds for wireless E911 implementation, and most states reported using these funds for their intended purpose. Six states—Arizona, Georgia, Illinois, Maine, New York, and Rhode Island—reported using a total of almost $77 million of funds collected for 911 implementation for other purposes (e.g., transferring 911 funds to the general fund) in 2011. Using funds in this way is permissible under state law in these states, but it risks undermining the credibility of 911 fees in those states. The manner in which FCC collects and reports information on state 911 funds limits the usefulness of its annual report. In particular, contrary to best practices for collecting and analyzing data, FCC uses only open-ended questions to solicit information from states, lacks written guidelines for interpreting states' responses and ensuring that results can be reproduced, and does not describe the methodology used to analyze the data it collects.
As a result, FCC is missing an opportunity to analyze trends and to provide more detailed aggregated information that would be useful to decision makers. Next Generation 911 (NG911) will enable the public to reach PSAPs through voice and data, such as text messages, but stakeholders have identified a variety of technical, regulatory, and funding challenges to implementing it. For example, many of the existing state and federal regulations governing 911 were written before the technological capabilities of NG911 existed. The federal government is taking steps to help states address these challenges. In particular, the Department of Transportation (DOT) has focused on research through the NG911 Initiative, and FCC released a 5-point plan to encourage NG911 implementation. FCC's plan includes (1) developing location accuracy mechanisms for NG911; (2) enabling consumers to send text, photos, and videos to PSAPs; (3) facilitating the completion and implementation of NG911 technical standards; (4) developing a governance framework for NG911; and (5) developing a funding model for NG911. FCC also released a report in March 2013 that detailed specific recommendations to Congress for a legal and regulatory framework for NG911. FCC should follow best practices for data collection and analysis to improve its current method of collecting and reporting information on state 911 funds. In response, FCC concurred with GAO's recommendation and agreed to take action to address it.
To conduct this work, we first identified the top four federal construction agencies based on the amount of funds obligated during fiscal year 2013, using data from USASpending.gov. We selected three of the top four agencies for further review. These agencies were the Department of Defense—specifically the Departments of the Army (Army) and the Navy (Navy)—the Department of Veterans Affairs (VA), and the General Services Administration (GSA). The Army, Navy, VA, and GSA accounted for almost 75 percent of the total $28 billion obligated for construction contracts in fiscal year 2013.

To identify what is known about the prevalence of bid shopping on federal construction projects, we interviewed agency contracting officials, prime contractor and subcontractor trade associations, prime contractors, and subcontractors, and we reviewed GAO reports, articles, academic literature, and congressional testimonies addressing bid shopping. To identify possible cases of bid shopping, we used the Contractor Performance Assessment Reports System (CPARS) to identify construction contracts where the prime contractors received low performance ratings in subcontractor management. From these contracts, we judgmentally selected two contracts from each of the selected agencies for in-depth review, based on contract award amount, type of contract, and project location. At the Department of Defense, we selected two contracts awarded by the U.S. Army Corps of Engineers (USACE) and two awarded by the Naval Facilities Engineering Command (NAVFAC). The contracts were awarded between fiscal years 2009 and 2011 and in total were valued at about $760 million. We also performed a high-level review of two additional GSA contracts during our design phase to get a sense of what contract documentation (e.g., the subcontracting plan) to request that might identify subcontractors on a construction project.

To address how the federal government monitors subcontractor performance under federal construction contracts and, if necessary, takes action to address unsatisfactory performance, we reviewed the FAR, which identifies tools available to the agencies to monitor and take actions to address or correct deficiencies. We also obtained and reviewed pertinent agency guidance and supplements to the FAR, Small Business Administration (SBA) regulations, and the Office of Management and Budget-Office of Federal Procurement Policy guide to best practices for contract administration. To determine whether the federal agencies use these tools on construction contracts where there were subcontracting issues, we obtained information from the same eight construction contracts identified above to determine how agencies monitor prime contractor and subcontractor contract performance and address unsatisfactory performance. We reviewed documentation in the contract files such as the solicitation, proposals (including the technical evaluation), inspection reports, contractor performance reports, and other key documents for identification, monitoring, and compliance purposes. We reviewed selected change orders for the contracts in our review to identify some of the reasons for cost increases and schedule delays. We also interviewed agency contracting officials, prime contractors, and subcontractors about their experience in the contracting process. We contacted all eight prime contractors associated with the contracts in our sample to obtain information on their process to select subcontractors; three responded to our request.
We also contacted 67 subcontractors for those eight contracts and received responses from eight. Our results from the analysis of these construction contracts are not generalizable agency- or government-wide. For the agencies and contracts we reviewed, our approach provided greater depth and insight into subcontractor selection. However, we did not determine whether agencies' construction contract oversight was effective. Further, to address this objective, through interviews and literature searches, we identified 12 states that took actions through regulation that could mitigate bid shopping. We contacted these states, and from the five that responded, we obtained information from officials responsible for construction contracting about bid shopping and the actions taken to mitigate its use. In addition, we spoke with officials from SBA about its requirement that, for contracts over certain dollar thresholds, a large prime construction contractor notify the contracting officer in writing if it does not subcontract to a small business subcontractor that was used in preparing its proposal. We also spoke with officials at the Smithsonian Institution about its practice of requiring the listing of subcontractors to ensure that those selected can adequately perform the work.

We conducted this performance audit from February 2014 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

After a prime contractor is awarded a construction contract, it negotiates with subcontractors on the scope of work and price before awarding subcontracts. This period is known as the buyout process—if a prime contractor were to practice bid shopping, it would shop the original subcontractor's bid to other subcontractors at this point. See figure 1 for a general description of the process to obtain federal construction services. For the purposes of this report, we refer to the prime contractor's offer to the government as a proposal; we refer to the subcontractor's offer to the prime contractor as a bid.

We reported on the topic of bid shopping in 1971 and again in 1981. In those reviews, although bid shopping was generally acknowledged by government contracting personnel, state officials, and individuals in the construction industry to be a prevalent practice and recognized as a longstanding, recurrent complaint by subcontractors, we were not furnished evidence of any specific cases of bid shopping having occurred.

Contract administration is the management of all actions after award through the closing of the contract to ensure that contractors comply with contract terms. It includes all dealings between parties to a contract from the time a contract is awarded until the work has been completed and accepted or the contract terminated, payment has been made, and disputes have been resolved. After a construction contract is awarded, the federal government is represented in the contracting process by a contracting officer, who has authority to modify or terminate contracts on behalf of the government.
The contracting officer receives support from an onsite representative—the contracting officer's representative (COR)—who is the liaison between the government and the prime contractor. The responsibilities of the COR may include performing contract administration tasks, directing daily operations within the scope of the contract, and monitoring contractor and subcontractor performance to help ensure that the work meets the terms of the contract.

We were not able to determine whether bid shopping occurs on federal construction projects and thus could not determine its prevalence. Officials at the selected agencies we reviewed were not aware of instances of bid shopping on their construction projects, and we could not find evidence of bid shopping in the contract files we reviewed. However, many contractors in the construction industry that we spoke with told us that bid shopping does, in fact, occur. Our discussions with prime contractors as well as subcontractors indicated that some subcontractors may have a perception of bid shopping during the buyout process between prime contractors and their subcontractors.

We found no conclusive evidence of bid shopping in our reviews of contract files, and government officials furnished no evidence of bid shopping. Government officials at the agencies we reviewed stated that they were not aware of bid shopping occurring on their construction projects. These officials also stated that the government in most cases does not have insight into how the prime contractors select their subcontractors, and if bid shopping were to occur, a contracting officer would not be aware of it unless a subcontractor filed a complaint. The selected contracts that we reviewed all showed evidence of prime contractors' performance issues with the management of subcontractors—such as untimely replacement of defective work—yet none presented evidence or complaints of bid shopping.

Although none of the prime contractors, subcontractors, and construction industry associations we spoke with were able to provide current evidence of bid shopping on federal projects, they all told us that, based on their knowledge of the industry, it does in fact occur. Most of the subcontractors stated that they had not experienced it themselves. The industry associations we spoke with were not in agreement on the prevalence of bid shopping—one thought it was very prevalent, while the others thought it happened only in selected circumstances or were not sure how often it occurred. Almost all of the prime contractors and subcontractors we spoke with told us that a prime contractor that practiced bid shopping would alienate subcontractors, and a few added that such prime contractors would eventually be unable to find subcontractors willing to work with them.

In our discussions with prime contractors, we found that the process used to develop a proposal and the subsequent selection of subcontractors may lead to a perception of bid shopping by subcontractors. Prime contractors told us that in preparing their proposals, they try to obtain multiple subcontractor bids from each trade (e.g., electrical, plumbing, mechanical). They then use the multiple subcontractors' bids to prepare their own estimates to include in their proposal to the government. The prime contractors we spoke with stated that they generally do not use a specific price bid by one subcontractor, but use the bids to benchmark their own estimates.
One prime contractor representative we spoke with stated that the company tries to obtain three bids for each trade when developing a proposal to get a better sense of a reasonable estimate to include in its proposal. We also found that the process the prime contractor goes through to submit a proposal can be chaotic. According to both prime contractors and subcontractors we interviewed, subcontractors, to remain competitive, often wait to submit their bids to the prime contractor until just minutes before the prime contractor is required to submit its proposal to the agency, which allows minimal time for the prime contractor to ensure that the bids are reasonable and cover the required scope of work. Four of the subcontractors we spoke with told us that this is one method to help prevent their bids from being shopped prior to contract award. For a large project, the subcontractors' bids can number in the hundreds. In fact, one prime contractor estimated that for one large project it may review approximately 500 bids to prepare its proposal. Further, according to prime contractors, it can be uncertain at the time their proposals are submitted to the government whether subcontractors' bids include the full scope of work, so they must do their best to quickly assess the accuracy and completeness of the various bids they review for one trade. In addition, one prime contractor told us that it may submit a low proposal price depending on the competition and then hope to negotiate further with the subcontractors during the buyout process.

The prime contractors told us that they do not negotiate specific tasks and prices with subcontractors until after the government awards the contract, during the buyout process. The prime contractor will verify that each subcontractor has a complete scope of work for the project and then select and award a contract to a subcontractor to do the work. The subcontractors stated that it is common practice for prime contractors to negotiate with them on the scope and price of their proposed subcontracts after contract award. Several of the subcontractors we interviewed also stated that a subcontractor may erroneously believe that its bid is being shopped, when in fact negotiation is part of the normal buyout process. Further, most of the subcontractors we interviewed told us that if they have not done business with and are unfamiliar with a specific prime contractor's negotiation procedures, or if a prime contractor is known to shop bids, they may propose an inflated price under the assumption that the prime contractor will negotiate that price down during the buyout process. In contrast, if they are bidding to a prime contractor they know, they are more likely to provide the best price in the first bid. Accordingly, it is difficult to sort out whether the prime contractor's selection of subcontractors results from bid shopping or from the chaotic and challenging process of bidding on and buying out a federal construction project.

Subcontractors have suggested that bid shopping leads to poor quality construction; however, we found that the selected agencies have existing tools to hold the prime contractor accountable for a project's work quality and progress and, when performance is unsatisfactory, have methods to address or correct deficiencies. The government can be protected from poor quality construction if it appropriately uses the various tools at its disposal to manage and address deficiencies.
Examples of oversight tools include onsite agency representatives, daily construction progress reports, and periodic inspection reports. These tools can help agencies catch instances of poor quality construction for immediate remedy. If problems persist, agencies have methods for addressing or correcting unsatisfactory performance, including withholding payments to the prime contractor and potential government-wide reporting of poor contractor performance. Further, one tool that is used by some states to prevent the poor quality construction allegedly caused by bid shopping is bid listing—whereby the prime contractor must name the subcontractors in its proposal to the state government. Bid listing may provide insight into subcontractor substitution after award. However, as past analyses of the use of bid listing in the federal government have found, the benefit of requiring it to prevent bid shopping is questionable, in part because of the administrative burden.

When contracting for construction services, the federal government's direct contractual relationship is with the prime contractor and not with the subcontractor, a doctrine of contract law known as privity of contract. In general, this means that the government cannot direct subcontractors to perform tasks under the contract, and the prime contractor retains legal and management responsibility for overall contract performance. Agency officials also said that due to privity of contract, they hold the prime contractor fully accountable for the subcontractors' work quality. In our review of selected contracts, we found that agencies use a variety of tools to monitor and assess work quality and progress on a project. Specifically, the tools agencies use include onsite representatives, inspection reports, and a host of reporting requirements. Further, the FAR generally provides that each contract shall include quality control requirements that the prime contractor must meet in the performance of the construction contract. Agencies must use these quality control requirements to assess the prime contractor and its subcontractors to ensure that they meet the standards for materials, workmanship, and timeliness. Some of the tools agencies in our review used include the following:

Onsite representatives. We found that agency construction projects had an onsite representative (e.g., a resident engineer for construction or a COR). The agency onsite representative is the agency's "eyes and ears" during construction and observes the work of the prime contractor and its subcontractors on a daily basis to ensure that their work and materials conform to contract requirements. It is this person's responsibility to assess the project's work quality, timeliness, and performance of equipment and systems to help ensure that the federal government receives the services it contracts for.

Daily construction reports. The contracts in our review generally require the prime contractor to furnish a report each day summarizing the daily activities onsite. At the VA, for example, the reports must show the number of trade workers, foremen/forewomen, and pieces of heavy equipment used by the prime contractor and its subcontractors on the prior day. The reports must also give a breakdown of employees by craft, location where employed, and the work performed for the day, and a list of materials delivered to the construction site on the date covered by the report.
Examples of reports we reviewed for a NAVFAC project included the name of each firm and its trade (e.g., carpenters and electricians) and the type of work performed, such as plumbing and installing switches and doors.

Periodic onsite progress meetings. Some of the contracts in our review identify what types of meetings should be held, who is to attend, and how often the meetings should be held, but depending on the type and complexity of the project, government contracting personnel can require more frequent meetings with prime contractors to monitor the progress of the construction work. For example, USACE provided us examples of minutes from biweekly coordination meetings with the prime contractor and from biweekly meetings with the prime contractor and certain subcontractors for the purpose of testing the performance of certain systems, such as heating and ventilation systems. For one VA contract, the government's representatives required weekly meetings with the prime contractor and subcontractors to discuss project progress and to identify problems and solutions to those problems.

Inspection reports. Inspections are performed at various stages of the project to ensure that the execution of the contract by the prime contractor (and its subcontractors) meets contract specifications. Our review of examples of inspection reports from a couple of the contract files showed inspections of work quality and, in some cases, testing of systems, such as mechanical, fire, and electrical systems, throughout the project. The contracts may also require the prime contractor to notify the onsite representative when inspections and tests are to be conducted so that he or she may choose to be present to observe.

Deficiency reports. If the agency identifies prime contractor or subcontractor nonconformance with contract material or workmanship requirements, we found that a report is prepared to notify the prime contractor of the deficiencies. The report notes who is responsible for taking corrective action and tracks the status of corrective actions taken. In the examples of deficiency reports we reviewed, if a listed deficiency in material or workmanship was not corrected, the deficiency stayed on the reports until it was corrected to the contracting officer's satisfaction. In the final stages of a project, the agency develops a punch list: a list of tasks that need to be completed or corrected before it will accept the building for occupancy. We found examples of punch lists in the contract files we reviewed identifying tasks to be resolved prior to final acceptance.

Monthly progress payment reports. The FAR provides that agencies may make monthly progress payments to prime contractors as the work progresses. To achieve this, agency contracting personnel and the prime contractor agree to a schedule of tasks that need to be performed and their value. At the conclusion of each month, the prime contractor submits a payment request to the contracting officer that identifies the materials delivered and the percentage of work performed for each task. Since a single payment can be for millions of dollars, it is important that the project's contracting personnel adequately review the contractor's payment requests to ensure that work is billed accurately, reflects the materials used, and has in fact been performed. If any discrepancies are noticed, they must be resolved before payment is made.
The progress payment reports we reviewed from one GSA project summarized the status of cost and schedule information to inform how much of the contractor's monthly payment request should be approved.

Submission of weekly certified payrolls. The prime contractors also submitted weekly certified payrolls, which include information on subcontractors, so that the government can check the accuracy of wages, including overtime, and the categorization for a particular trade (e.g., tile setter and electrician). Examples from contract files we reviewed at GSA and NAVFAC showed that the payrolls included information such as the name of the subcontractor, the name and trade title of each employee, and the days and hours worked during the week.

Liquidated damages. To protect itself from construction delays, an agency can include the liquidated damages clause in a prime contract, meaning that if the prime contractor fails to complete the work within the time specified in the contract, the contractor pays the government a fixed amount for each day of delay until the work is completed or accepted. Liquidated damages can result from a delay caused by the prime contractor or one of its subcontractors. They are not to serve as a penalty but to represent an assessment of the probable damage costs that would be incurred by the agency if delay causes the work to extend beyond the contractual completion date. The liquidated damages daily rate for failure to complete the work on time is included in the prime contract and can vary depending on the project. For example, for one VA contract the daily rate is $2,800 per calendar day of delay, and for one USACE contract the daily rate is $16,500 per calendar day (the sketch following this discussion illustrates the arithmetic).

While oversight can help agencies identify instances of poor quality construction, we found that the selected agencies use a number of methods to prompt the prime contractor to address or correct deficiencies identified during oversight activities if problems persist. In all cases, the government held the prime contractor solely responsible for correcting all deficiencies, whether the deficient work was performed by a subcontractor or the prime contractor. Retaining or withholding a certain percentage of each monthly progress payment owed by the government to prime contractors is a powerful motivator to encourage prime contractor and subcontractor performance. The FAR allows agencies to withhold up to 10 percent of each monthly progress payment to prime contractors, in accordance with the contract, until completion of all contract requirements. According to GSA officials, prime contractors can in turn withhold a similar percentage from each subcontractor's monthly progress payments until satisfactory completion of the work. On the contracts we reviewed, progress payments could be withheld to account for materials and workmanship deficiencies or lack of progress on the project by prime contractors or subcontractors. For one USACE contract, the agency retained $850,000 in payment for flooring and carpet damage and metal panel problems, among other things.
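To make these payment-related tools concrete, the following is a minimal sketch of the arithmetic behind retainage and liquidated damages. Only the 10 percent FAR withholding ceiling and the $2,800 and $16,500 daily rates come from the examples above; the monthly payment amount and the delay lengths are hypothetical.

```python
# Minimal sketch of the arithmetic behind two payment-based tools.
# The FAR's 10 percent ceiling and the daily rates come from the report;
# the monthly payment amount and the delay lengths are hypothetical.

FAR_MAX_RETAINAGE = 0.10  # agencies may withhold up to 10% of each payment

def retainage(progress_payment, rate=FAR_MAX_RETAINAGE):
    """Amount withheld from one monthly progress payment."""
    if not 0 <= rate <= FAR_MAX_RETAINAGE:
        raise ValueError("retainage rate exceeds the FAR ceiling")
    return progress_payment * rate

def liquidated_damages(daily_rate, days_late):
    """Fixed daily rate applied to each calendar day of delay."""
    return daily_rate * days_late

# Hypothetical $2.0 million monthly progress payment, withheld at the ceiling.
print(retainage(2_000_000))            # 200000.0 withheld that month

# Daily rates cited in the report, applied to a hypothetical 30-day delay.
print(liquidated_damages(2_800, 30))   # 84000, using the VA contract's rate
print(liquidated_damages(16_500, 30))  # 495000, using the USACE contract's rate
```

Even a modest delay thus creates a substantial, predictable liability, which is consistent with the report's description of withholding and liquidated damages as motivators rather than penalties.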
Some agency officials we interviewed stated that prime contractors take their performance evaluations on a current contract very seriously because a negative performance report can work against them in trying to win future federal construction contracts. The FAR generally requires agencies to evaluate and document contractor performance on contracts that exceed certain dollar thresholds at least annually and at the time the work is completed, and to make that information available to other agencies through PPIRS, a shared government-wide database. In completing past performance evaluations, the assessing officials rate the contractor on various elements such as quality of the product or service, schedule, and cost control. In addition, for each element, a narrative is provided to support the rating assigned. When assessing a contractor's past performance for a potential contract award, contracting officials may consider the evaluations in PPIRS. For all of the eight contracts we reviewed, the contracting officer provided or approved performance evaluations for the prime contractors on the projects. For example, one rating for a GSA contract was marked marginal because the concrete subcontractor did not place the concrete correctly, and the prime contractor was given a poor performance rating based on this subcontractor's performance.

A contracting officer can issue a cure notice informing the prime contractor of a failure to perform that endangers meeting contractual requirements. A cure notice provides at least 10 days for the prime contractor to correct the issues identified in the notice or otherwise fulfill the requirements. A show cause notice goes a step further, advising the prime contractor that a termination for default is being considered and calling the contractor's attention to its contractual liabilities if the contract is terminated for default. At this point the prime contractor must show that its failure to perform arose from causes beyond its control and without fault or negligence on its part. In one example from our contract file review, a VA contracting officer issued a cure notice to the prime contractor citing, among other factors, failure to maintain an adequate quality control program to correct work deficiencies. According to the show cause notice, the prime contractor's response did not address the deficiencies highlighted in the cure notice, and the contracting officer subsequently issued the show cause notice. Even though the show cause notice was issued, the VA contracting officer did not terminate the contract, in part because the prime contractor started to take corrective action. With the host of oversight mechanisms in place, the government can be protected from poor quality construction if it appropriately uses the various tools and methods at its disposal to manage and correct deficiencies.

Bid listing is a practice whereby potential prime contractors are required to identify certain subcontractors in their proposals that they will use if awarded the contract, which moves subcontractor selection and initial negotiations earlier in the contracting process than would otherwise occur. Bid listing may provide contracting officers with some insight into when a subcontractor is substituted after contract award, because if a listed subcontractor is not used, the prime contractor must notify the contracting officer, justify the substitution, and obtain the contracting officer's approval. Congress has on multiple occasions proposed—but never passed—mandatory bid listing requirements to prevent the poor quality construction allegedly caused by bid shopping.
Even though the FAR does not currently include a bid listing provision, it does provide the contracting officer authority to ask for identification of prospective subcontractors for the purpose of determining responsibility. We found instances in our review where prime contractors had to list subcontractors within their proposals for the purpose of ensuring that a critical subcontractor is responsible and can meet specific requirements. For example, a Smithsonian solicitation we reviewed required the listing of subcontractors. Officials at the Smithsonian told us that this is a regular practice they use to ensure that the selected subcontractors can adequately perform the work. This requirement provides insights into subcontractor substitution similar to those provided by bid listing, as the prime contractor must notify the contracting officer of a substitution and show that the new subcontractor is also responsible. More recently, the Small Business Administration, in response to the Small Business Jobs Act of 2010, implemented regulations effective in August 2013 that require a prime contractor to notify the contracting officer in writing whenever, during contract performance, it does not subcontract to a small business that was used in preparing its proposal. This explanation must be submitted to the contracting officer prior to the submission of the invoice for final payment and contract close-out. However, during the public comment period prior to the approval of the regulations, some commenters expressed concerns that the notification requirement would discourage prime contractors from involving small businesses in the development of their proposals, potentially limiting small businesses' ability to gain valuable insight into how prime contractors approach proposal development.

Past federal research and efforts to mitigate bid shopping through subcontractor bid listing have shown that the benefits of doing so are questionable, in part because of the added administrative burden. In its 1972 report to improve federal procurement practices, the Commission on Government Procurement researched the issue of bid shopping and determined that it would not be materially improved by the adoption of mandatory bid listing requirements and that the cost of implementing such requirements would likely outweigh the benefits. (Congress created the Commission on Government Procurement in the late 1960s to devise fundamental improvements to federal procurement practices; the Commission developed 149 recommendations to both Congress and the executive branch. See Commission on Government Procurement, Report of the Commission on Government Procurement (Washington, D.C.: Dec. 31, 1972).) As a result, the Commission took a position against a mandatory government-wide requirement for subcontract listing in federal construction. The Department of the Interior and GSA previously required subcontractor bid listing but stopped the practice in 1975 and 1983, respectively. GSA testified in 2000 that bid listing would create more harm than benefit and strongly opposed bid-listing requirements for a number of reasons, such as adverse effects on the timeliness and cost of contract performance and increases in the government's administrative expenses.

In contrast, some of the states we contacted have procedures that require the identification of subcontractors at the time of contract award.
For example, one state reviews bids from subcontractors and then tells the prime contractor which subcontractors to include in its proposal to the state, rather than the prime contractor selecting its own subcontractors. We found that, for some of these states, bid shopping was the impetus for establishing procedures to identify subcontractors prior to contract award. Officials from some of these states told us that the requirement to list subcontractors in the prime contractor's proposal has been in place for many years. Officials from one state told us that they are reconsidering the requirement for bid listing because of the administrative burden it is causing for state contracting officials, specifically an increase in bid protests. According to these officials, unsuccessful contractors are using this requirement to protest contract awards because of administrative mistakes contractors make in listing their subcontractors. This is similar to the issue that GSA raised in the early 1980s when it stopped the requirement to list subcontractors. However, officials from the other states we contacted that require bid listing said they have had no complaints from prime contractors about complying with this requirement. Moreover, in the instances from the contracts we reviewed where cost growth or schedule delays occurred as the result of government-driven changes after award or unforeseen conditions, bid listing, if required, would not have prevented these types of changes.

The FAR and agencies' acquisition regulations provide for changes to fixed-price construction contracts, known as change orders. When the change is driven by the government, the government generally bears the additional cost. We found that most of the construction projects we reviewed experienced increased costs and schedule delays as a result of government-driven changes or unforeseen conditions. For example, on one NAVFAC contract, the government decided to incorporate furniture, furnishings, and audiovisual equipment into the construction project, resulting in a cost increase of $1.9 million on an $11.6 million contract. On a GSA contract, after construction began, the government found a different site condition than expected, added soil remediation costs of approximately $400,000, and provided an extension of 52 days.

Bid shopping is widely considered an unethical business practice, but the prevalence of the practice is unknown. It is difficult to determine, for a particular contract, whether the prime contractor's selection of subcontractors was truly a result of bid shopping or merely appears so due to the chaotic nature of bidding for and buying out a federal construction project. Specifically attributing poor performance to bid shopping is therefore also challenging. Though considered an administrative burden, bid listing is an optional practice available to contracting officers who determine it is necessary to ensure that the prime contractor's proposal identifies qualified, responsible subcontractors. Nonetheless, in evaluating a prime contractor's proposal, the federal government must determine that the price is fair and reasonable. After the prime contractor is awarded a fixed-price contract, it must manage the subcontractors to complete the job within the established contract price and schedule. The government has additional tools available to provide oversight and to manage or correct identified deficiencies during a project's duration.
Because the prevalence of bid shopping on federal construction contracts is unknown, and because federal agencies have discontinued bid listing requirements intended to prevent it in part due to administrative burden, we are making no recommendations. We provided a draft of this report to the Departments of Defense and Veterans Affairs and the General Services Administration for their review and comment. None provided comments on this report. We are sending copies of this report to the Secretaries of Defense and Veterans Affairs and the Administrator of the General Services Administration, as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. GAO staff who made key contributions to this report are listed in appendix I. Marie A. Mak, (202) 512-4841 or [email protected]. In addition to the contact named above, Tatiana Winger, Assistant Director; Marie Ahearn; Pete Anderson; Virginia Chanley; George Depaoli; Joe Hunter; Julia Kennon; Kenneth Patton; Russell Reiter; and Ozzy Trevino made key contributions to this report.
In fiscal year 2014, the federal government obligated almost $32 billion for construction projects using primarily competitive, fixed-price contracts. In these contracts, the government holds the prime contractor fully responsible for project delivery at the agreed-to price and schedule. Once a construction contract is awarded, the prime contractor must manage subcontractors that typically perform 60 to 90 percent of the work on a construction project. Bid shopping—whereby a prime contractor uses one subcontractor's price in its proposal but negotiates a lower price with a subcontractor after the contract award for the purpose of retaining the difference for its benefit—is considered an unethical business practice by the construction industry. Subcontractors have alleged that bid shopping leads to poor quality construction.

GAO was asked to review the government's insight into subcontractor selection and oversight of subcontractor performance on federal construction contracts. This report covers (1) what is known about the prevalence of bid shopping on federal construction projects and (2) what tools the federal government has to monitor and address contractor performance. GAO judgmentally selected and reviewed construction contracts from three of the four federal agencies that obligated the most funds for construction contracts in fiscal year 2013. GAO also interviewed agency and state contracting officials and construction industry representatives. GAO is not making recommendations in this report. The agencies in this review did not provide comments on this report.

GAO was not able to determine whether bid shopping occurs when prime contractors select subcontractors on federal construction projects, but found that the selection process could lead to subcontractors' perceptions of bid shopping. GAO's review of selected contract files did not reveal evidence of bid shopping. Further, officials at the agencies GAO reviewed stated they were not aware of bid shopping occurring on their contracts. Many of the construction contractors that GAO spoke with said that bid shopping occurs, but could not furnish evidence of specific instances. Negotiation procedures between prime contractors and subcontractors may create the impression of bid shopping among subcontractors that submit bids. Specifically, prime contractors explained that they receive multiple subcontractor bids for each trade (e.g., electrical, plumbing) up to minutes before their proposal is submitted to the government, and they typically do not use a specific subcontractor's price in their proposal, but rather a price informed by the subcontractors' bids. After award, the prime contractor negotiates and selects a subcontractor for each trade during the "buyout process," as shown below.

To hold the prime contractor accountable for a project's work quality and progress, selected agencies use oversight tools such as agency representatives deployed on site and daily progress reports. When performance is unsatisfactory, agencies use a number of methods to address or correct deficiencies. For example, agencies can withhold progress payments to the prime contractor or report poor contractor performance in government databases. Further, the government can be protected from poor quality construction if it appropriately uses the oversight tools at its disposal. To address bid shopping, some states are using bid listing, which requires the prime contractor to name certain subcontractors in its proposal to the state government.
But the benefit of requiring bid listing in the proposal solely for the prevention of bid shopping is not certain, as past analyses of its use in the federal government have found it adversely affects the timeliness and cost of contract performance, and increases the government's administrative expenses.
The mission of VA is to serve America's veterans and their families with dignity and compassion and to be their principal advocate in ensuring that they receive medical care, benefits, social support, and lasting memorials. VA is a cabinet-level agency with a budget of over $127 billion and is one of the world's largest health care, medical research, and insurance benefits organizations. In addition to a central office, VA consists of three administrations that generally operate as distinct entities: the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA). VHA's facilities are organized into 21 regional networks, known as Veterans Integrated Service Networks (VISNs), that are structured to manage and allocate resources to VA health care facilities across the United States. Each VISN is also responsible for coordination and oversight of all administrative and clinical activities within its specified region of the country. We reviewed the status of capital projects in 6 of VA's 21 VISNs, as shown in figure 1.

To provide services to veterans, VA maintains a real property portfolio that currently consists of U.S.-owned buildings under VA jurisdiction and control. VA also generally has authority to enter into enhanced-use leases, 3-year outleases, and sharing agreements relating to its real property or space. The assets include, for example, hospitals, clinics, cemeteries, and office buildings where veterans access their many benefits and VA administers its programs. VHA is the largest administration and, in terms of the number of acres owned and square footage, includes the greatest portion of VA's real property portfolio, as shown in figure 2.

In response to our 1999 recommendations for improving agency capital asset planning and budgeting, VA initiated the Capital Asset Realignment for Enhanced Services (CARES) process. CARES was the first comprehensive, long-range assessment of VA's health care capital asset priorities since 1981 and was designed to assess buildings and land ownership under VA's jurisdiction and control in light of expected demand for VA inpatient and outpatient health care services across a planning horizon through fiscal year 2022. For example, VA recognized that the shift in veterans' demand for services could be met at community-based outpatient clinics that are more geographically accessible to veterans than its hospitals. The CARES process validated gaps in VA's infrastructure and health care services provided to veterans. The process also included a set of tools for annual capital and strategic planning to enable VA to plan for the real property needed to provide quality health care to veterans.

Also in response to our 1999 recommendations, VA developed formal 5-year capital plans that are submitted with its annual budget requests to Congress. The 5-year capital plan included in VA's fiscal year 2011 budget submission includes the following: capital planning linked to the agency mission, strategic goals, and objectives; baseline assessments and identification of performance gaps—such as underutilized or vacant property and the backlog of repairs needed at its facilities; an alternatives evaluation and resulting risk management plan for these performance gaps; a description of the agency's planning and approvals process; and a long-term capital plan.

Effective planning for capital investments is important for several reasons. First, over time, large amounts of federal funds are spent on capital assets.
Second, the performance of capital assets affects how agencies are able to achieve their missions, goals, and objectives to provide service to the public. Finally, capital planning drives budgeting, procurement, and management of an agency's capital assets.

As VA increased its emphasis on outpatient care rather than inpatient care, it was left with an increasingly obsolete infrastructure, including many hospitals built or acquired more than 50 years ago in locations that are sometimes far from where veterans live. This challenge of misaligned infrastructure is not unique to VA. In January 2003, we identified federal real property management as a high-risk area, and VA was cited among those federal agencies that hold a majority of federally owned and leased space. We also reported on VA's long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, overreliance on costly leasing, and building security challenges. We did this to highlight the need for broad-based transformation in this area, which, if well implemented, will better position federal agencies to achieve mission effectiveness and reduce operating costs.

As its newest capital planning effort, VA has initiated the Strategic Capital Investment Planning (SCIP) process, an agencywide review of VA's real property priorities that will inform its fiscal year 2012 annual budget submission. According to VA, SCIP will include six key components. Table 1 shows these components and VA's planned actions to implement them. SCIP, which VA said builds on its existing capital planning processes, also addresses leading practices. It further strengthens VA's efforts in some areas and is still evolving and being refined. The SCIP components are linked to VA's previous capital planning efforts, including CARES and the development of its 5-year capital plan. Figure 3 illustrates VA's capital planning steps from 1999 to 2010.

As a part of its shift from hospital-based inpatient care to outpatient care, VA has made changes to its real property portfolio on the basis of its May 2004 CARES Decision document and subsequent capital planning. As for specific CARES Decision projects, VA reported in its April 2010 Implementation Monitoring Report on Capital Asset Realignment for Enhanced Services that it has completed 13 of 59 planned major and minor construction projects, opened 82 of 156 planned community-based outpatient clinics (CBOCs), and has another 19 ongoing major construction projects identified in the CARES Decision. As for net changes to VA's real property portfolio since the CARES Decision, our analysis of the data in VA's 5-year capital plans from 2004 to 2009 found that leases and leased space had increased, due in part to VA's efforts to realign its portfolio toward more outpatient facilities, such as CBOCs and vet centers. These centers provide readjustment counseling and outreach services to all veterans and family members dealing with military-related issues. Although disposals of assets decreased the number of U.S.-owned buildings and the amount of vacant space under VA's jurisdiction and control, new construction projects produced a net increase in owned buildings. Similarly, the net increase in owned acreage can be attributed to property acquired by VA's National Cemetery Administration for new cemeteries. These results of VA's agencywide capital planning efforts since its May 2004 CARES Decision are shown in table 2.
Our analysis also showed that, with the exception of hospitals, VA has expanded the number and types of buildings by which it delivers services. Table 3 shows VA's changes to its real property portfolio in terms of facility types. VA officials and stakeholders generally agreed that changes to the VA real property portfolio have benefited veterans. For example, both groups reported that the new facilities, such as more accessible clinics, had improved veteran access to services by limiting the distance that veterans travel to VA health care facilities. Officials from veterans service organizations with whom we spoke stated that upgrades to VA's real property portfolio had improved care for veterans. For example, these officials commented that real property changes in VA facilities in Denver, Colorado, and Syracuse, New York, have resulted in improved services for veterans with spinal cord injuries or diseases. Additionally, officials from VA's central office and the VISNs that we contacted cited recent initiatives, such as telehealth and telemental health services at CBOCs, as being beneficial to veterans. To gain further insight into the steps that VA took to realign its real property portfolio, we observed ongoing and completed projects at 5 VISNs that demonstrated VA's changes in the areas that CARES identified as priorities: improved access, modernization, special disability programs, underutilized or vacant property, CBOCs, VA and Department of Defense (DOD) collaboration, long-term care, and mental health. As such, we visited several facilities in those VISNs, as described in figures 4 through 9.

VA has been appropriated about $16.7 billion from fiscal years 2004 through 2010 for major construction, minor construction, and nonrecurring maintenance. In addition, VA has identified several other high-cost projects that have not yet been funded. For example, VA reported in its 5-year capital plan for fiscal years 2010-2015 that, agencywide, it has a backlog of $9.4 billion of facility condition assessment deficiencies (repairs). Furthermore, due to incremental funding of projects, 24 of the 69 ongoing major construction projects listed in the plan needed an additional $4.4 billion to complete. For example, the plan describes funding needed for the new medical facility in Denver. As of fiscal year 2010, VA had received only $307 million of the estimated $800 million total project cost. The President's budget for fiscal year 2011 included a request for $451 million for this project. Even if this amount is funded, VA's 5-year capital plan reports that this project would still need an additional $42 million to complete construction. VA officials commented that this phased approach enables the agency to request funding in stages, allowing independent, stand-alone portions of projects to be built while available resources are used on other high-priority projects.

Like other agencies across the government, VA has faced underlying obstacles that have exacerbated its real property management challenges and caused them to persist over time. Specifically, we have previously reported on such challenges, including competing stakeholder interests, legal and budgetary limitations, and the need for improved capital planning. These challenges can affect the agency's ability to fully realign its real property portfolio.
Regarding competing stakeholder interests, we have reported that VA has faced challenges in coordinating with historic preservation and community organizations, as well as in managing established relationships with other health care providers, such as college and university partnerships. While joint ventures for facilities present unique opportunities for VA to explore new ways to provide health care to veterans, they also raise issues for VA. These issues include the benefits and costs of investing in a joint facility compared with those of other alternatives, such as maintaining the existing facility or considering options with other health care providers in the area; legal matters associated with the new facility, such as leasing or transferring property, contracting, and employment; and potential concerns of stakeholders.

We have also identified legal and budgetary issues that can hamper agencies' efforts to address their excess and underutilized real property problems. For example, federal agencies must assess and may be required by law to pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. Regarding VA, we have reported that some VA managers have retained excess property because the administrative complexity and costs of complying with these requirements were disincentives to disposal. For example, we previously reported that VA stated that, except for enhanced-use leases, restrictions on retaining proceeds relating to VA-controlled properties are a disincentive for VA to dispose of property. VA officials estimated that the average time it takes to implement an enhanced-use lease can range from 9 months to 2 years. VA can also dispose of underutilized and vacant property by transferring it to other federal agencies or making it available for programs for the homeless under the McKinney-Vento Act. However, VA officials stated that the process can average 2 years and that the agency may not receive compensation from agreements entered into under this act.

Over the years, we have reported that (1) prudent capital planning can help agencies make the most of limited resources and (2) timely and effective capital acquisition processes can result in economical acquisitions that are on budget, on schedule, and in line with mission needs and goals. Both OMB and GAO guidance emphasize the importance of developing a long-term capital investment plan to guide the implementation of organizational goals and objectives and to help decision makers establish priorities over time. Capital planning is an especially important area for VA, given the agency's efforts to effect a large-scale transformation of its real property portfolio and the substantial capital investment these efforts will require. Congress, OMB, and GAO have identified the need for effective capital planning. In addition, budgetary constraints and demands to improve performance in all areas have put pressure on agencies to make sound capital acquisition proposals. In the overall capital programming process, planning is the first phase—and, arguably, the most important—since it drives the remaining phases of budgeting, procurement, and management. OMB has issued various guidance and requirements for agencies to follow and use in developing disciplined capital programming processes, including the 1997 Capital Programming Guide, which provides agencies with a basic reference for establishing an effective process for making investment decisions.
In 1998, GAO issued its Executive Guide on the basis of a study of leading state and local government and private-sector capital investment practices. Our guide (1) summarizes fundamental practices that have been successfully implemented by organizations recognized for their outstanding capital decision-making practices and (2) provides examples of leading practices from which the federal government may draw lessons and ideas. Although our guide focuses on fundamental practices rather than detailed guidance, the practices represent actions and steps to be taken. In addition, the examples presented in our guide illustrate and complement many of the phases and specific steps contained in OMB's guide. There is a great deal of overlap in the OMB and GAO guides, since both suggest similar fundamental practices that are essential to making effective capital investment decisions. Because of the importance of planning, we focused on VA's implementation of the concepts that underlie the planning phase of OMB's guide and the planning practices in our guide (see fig. 10).

OMB and our guidance stress the importance of linking capital asset investments to an organization's overall mission and long-term strategic goals. The guidance also emphasizes evaluating a full range of alternatives to bridge any identified performance gap, informed by agency asset inventories that contain condition information. Furthermore, the guidance calls for a comprehensive decision-making framework to review, rank, and select from among competing project proposals. Such a framework should include the appropriate levels of management review, and selections should be based on the use of established criteria. The ultimate product of the planning phase is a comprehensive capital plan, which defines the long-term capital decisions that resulted from the agency's capital planning process. Both OMB and our guidance highlight the importance of this plan. The planning phase is the crux of the capital decision-making process, and the products that result from this phase are used throughout the remaining phases of the process.

We found that VA's 5-year capital plan and SCIP reflect several of the leading capital planning practices that we have previously discussed. For example, VA's 5-year capital plan is updated annually as part of its annual budget submission to Congress and contains lists of projects, by administration, for the next 5 years. SCIP is an update to VA's capital planning process that builds on existing processes, including the principles and tools of CARES, and was used to inform VA's annual budget submission to Congress for fiscal year 2012. Figure 11 presents examples of how VA's planning efforts reflect leading practices. We compared VA's 5-year capital plan with the leading practices. In the area of strategic linkage, we found that VA's efforts reflect leading practices by identifying projects that received the highest priority ranking using criteria that reflect the goals and mission contained in VA's Strategic Plan. For example, one of the criteria by which potential capital projects were prioritized was "Departmental Alignment," which includes the Secretary's goals for improving management and performance and VA's strategic goals. In regard to assessing needs and identifying gaps, in 2004 we reported that VA had neither an agencywide inventory of existing capital assets nor agencywide information about the condition of those assets, but VA has since developed a capital asset database.
VA officials said they recently completed facility condition assessments for all of VA's owned buildings and are considering whether to assess the condition of its leased buildings, many of which VA is not responsible for maintaining. VA uses facility condition assessments as one factor in guiding capital investment decisions to improve the condition of its most deteriorated buildings. VA's 5-year capital plan also includes steps to evaluate various alternatives for addressing real property priorities by requiring that four alternative approaches be considered to bridge any capital need—leasing; status quo; new construction; and rehabilitation, repair, or expansion of existing facilities.

In the area of establishing a review and approval framework for VA's capital investment decisions, VA has a department-wide panel of senior VA management to assess capital investment proposals; evaluate, score, and prioritize proposals by VA administration; and make recommendations through the VA governance process to the Secretary of VA. VA's 5-year plan uses established criteria by which potential capital projects are evaluated, such as criteria that reflect VA's goal of increasing veterans' access to health care and supporting services for veterans suffering from spinal cord injury, traumatic brain injury, and post-traumatic stress disorder.

Finally, in 2004 we reported that VA did not have a long-term capital plan that identified agencywide real property priorities. However, VA has since developed a 5-year capital plan, updated annually, which is used to inform the agency's annual budget submission. It describes VA's capital planning process and gives brief descriptions of capital investment projects included in its budget submission. VA also modified its capital planning efforts in 2010 by developing a new process, called SCIP, which was used to inform its fiscal year 2012 budget submission to Congress. VA officials told us that SCIP builds on its existing capital planning processes, addresses leading practices, and further strengthens VA's efforts in some areas. Under SCIP, VA will continue to link its investments with its strategic goals, assess the agency's real property priorities, evaluate various alternatives, and use a similar review and approval framework when making capital investment decisions.

In addition, SCIP strengthens VA's capital planning in some areas. Specifically, SCIP extends the horizon of its 5-year capital plan to 10 years, providing VA with a longer range picture of the agency's future real property priorities. As a result of SCIP, VA officials told us that the agency developed cost estimates for all of its major and minor construction projects, leases, and nonrecurring maintenance projects for the next 10 years. SCIP is also centralizing VA's process for ranking and selecting capital investments on the basis of established criteria. For example, in the past, VA would develop a list of prioritized projects for each of its administrations, such as VBA, NCA, and VHA, for projects less than $10 million. However, VA is now prioritizing projects from an agencywide perspective across all of its administrations and developing one list to guide its capital planning decisions. VA has also drafted a set of weighted criteria by which it plans to evaluate projects.
The criteria listed below assess whether capital investments:
- improve the safety and security of VA facilities by mitigating potential damage to buildings facing the risk of a seismic event, improving compliance with safety and security laws and regulations, and ensuring that VA can provide service in the wake of a catastrophic event;
- address selected key major initiatives and supporting initiatives identified in VA's strategic plan;
- address existing deficiencies in its facilities that negatively impact the delivery of services and benefits to veterans;
- reduce the time and distance a veteran has to travel to receive services and benefits, increase the number of veterans utilizing VA's services, and improve the services provided;
- right-size VA's inventory by building new space, converting underutilized space, or reducing excess space; and
- ensure cost-effectiveness and the reduction of operating costs for new capital investments.

VA officials said that SCIP is still evolving and being refined. For example, VA officials said that the agency completed a series of "lessons learned" sessions to determine how the process can be improved and to make changes, if needed, for the 2013 budget cycle.

Despite the positive aspects of VA's capital planning efforts, the resulting 5-year capital plan that VA provides yearly to Congress lacks transparency about the cost of future priorities beyond the current budget year. For projects VA proposes to initiate in the current budget year, VA's 5-year capital plan includes current year estimates for construction, equipment, and operating costs for major and minor construction projects, such as new and replacement medical facilities. It also provides estimates to complete these and other ongoing projects in future years. However, for other potential projects not beginning in the current budget year, the plan lists only the project name and contains no information on what these projects might cost or their priority, as VA has not assigned one to them. For example, VA's most recent capital plan, submitted with its 2011 budget request, lists potential projects—including 100 major construction and 1,062 minor construction projects—for which pricing estimates are not provided.

We have previously reported that capital planning should result in a long-term capital plan with prioritized projects and justification of capital requests, such as project resource estimates and costs. The cost estimates of prioritized projects can then be incorporated into an agency's annual budget request to Congress. The yearly request reflects the agency's policy decisions regarding what it has determined, in consultation with OMB, should be funded. VA officials told us that it has been VA's policy not to include multiyear pricing information for projects in its current 5-year capital plan and budget submission to Congress. VA's SCIP, according to VA officials and VA documents we reviewed, will identify costs for future projects and information about their relative priority within the organization. VA commented that the future priority of unfunded projects cannot be provided, as these projects are reprioritized each year using updated weights and decision criteria. Further, during our review, VA officials told us they are considering the release of future year capital cost estimates to Congress. A decision on the release of this information is expected to be reflected in the fiscal year 2012 budget and SCIP plan to be released in February 2011.
VA officials added that pricing information is viewed as an internal tool for prioritizing projects and preparing budget requests and that project cost estimates become more reliable as the projects move closer to the year of construction. While we agree that cost estimates beyond the current year are less reliable, this could be made clear to decision makers, and as the projects move closer to the year of implementation, the estimates can be refined. VA officials told us that the agency already maintains future year estimates internally. While VA may view this information as suitable only for internal use, decision makers in Congress would benefit from having it for several reasons. Specifically, transparency about future priorities allows decision makers to weigh current year budget decisions in context with the magnitude of future costs. In the case of VA, which has identified a significant number of future projects in the tens of billions of dollars, full transparency regarding these future priorities may spur discussion and debate about actions Congress can take to address them. This could include not only appropriations, but also programmatic changes and real property management tools that could help VA leverage its real property to more efficiently and effectively meet the future needs of veterans. Additionally, transparency regarding future capital costs puts VA's priorities in context with the overall fiscal condition of the U.S. government. There is widespread agreement that the federal government faces formidable near- and long-term fiscal challenges. GAO has long stated that increased information and better incentives for budget decisions involving both existing and proposed programs that require significant future resources could facilitate consideration of competing demands and help put our finances on a more sustainable footing. And lastly, one of VA's key stakeholders, the Senate Appropriations Committee, recently asked VA for more information on its future capital project costs. The committee is aware of VA's SCIP process and requested that the department submit, with its fiscal year 2012 budget request, all findings associated with this review. At the time of our review, VA had not determined how it would respond to this request.

Providing cost estimates for future capital projects to Congress is not without precedent in the federal government. For example, in 1987, Congress directed the Department of Defense to submit a 5-year defense program (referred to as the future years defense program, or FYDP) used by the Secretary of Defense in formulating the estimated expenditures and proposed appropriations included in the President's annual budget to support DOD programs, projects, and activities. The FYDP provides DOD and Congress with a tool for looking at future funding needs beyond immediate budget priorities and can be considered a long-term capital plan. As another example, the judiciary recognized that it was facing space shortages, security shortfalls, and operational inefficiencies at courthouse facilities around the country. In March 1996, the judiciary issued a 5-year plan for courthouse construction, which was intended to communicate the judiciary's urgent housing needs to Congress and the General Services Administration, and identified 45 projects for funding on the basis of information from Congress and GSA that $500 million could be used as a planning target in estimating funds that would be available for courthouse construction each year.
The judiciary also developed a methodology, including criteria and weights, for assigning urgency scores to projects. As another example, we reported earlier this year that House and Senate appropriators have voiced interest in having the Army Corps of Engineers include additional information in the agency's budget presentation. We found that an information gap is created when an administration highlights its priority projects but does not provide sufficient information on other future resource needs. Congressional users of the Corps' budget presentation told us that not having information on future resource needs limits the ability of Congress to make fully informed appropriations decisions. Further, such information would increase the usefulness and transparency of the budget presentation.

VA has an important mission in serving veterans, and its real property portfolio is critical to ensuring that veterans have access to benefits and services. Billions of dollars have already been appropriated to VA to realign and modernize its portfolio. Further, VA has identified ongoing and future projects that could potentially require several billion additional dollars over the next few years to complete. Given the fiscal environment, VA and Congress would benefit from a more transparent view of potential projects and their estimated costs than VA currently provides. Such a view would enable VA and Congress to better evaluate the full range of real property priorities over the next few years and, should fiscal constraints so dictate, identify which might take precedence over the others. In short, more transparency would allow for more informed decision making among competing priorities and would likely enhance the potential for improved service to veterans over the long term.

To enhance transparency and allow for more informed decision making related to VA's real property priorities, we recommend that the Secretary of Veterans Affairs provide the full results of VA's SCIP process and any subsequent capital planning efforts, including details on the estimated cost of all future projects, to Congress on a yearly basis.

We provided a draft of this report to VA for review and comment. VA generally agreed with our conclusions and concurred with our recommendation. VA also provided technical corrections and clarifications, which we incorporated as appropriate. See appendix II for VA's comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

We addressed the following objectives: (1) To what extent have the Department of Veterans Affairs' (VA) capital planning efforts resulted in changes to its real property portfolio and what priorities remain?
and (2) To what extent do VA's capital planning efforts follow leading practices and provide the information needed for informed decision making?

To determine the extent to which VA's capital planning efforts, including the Capital Asset Realignment for Enhanced Services (CARES), have resulted in changes to its real property portfolio and to identify the agency's remaining priorities, we interviewed VA officials located in the central office in Washington, D.C., and 6 Veterans Integrated Service Networks (VISN) and observed VA facilities in 5 of these 6 VISNs. (See table 4 for a list of VA departments, VISNs interviewed, and projects observed.) Based on the number of veterans served annually, we selected 2 large, 2 medium, and 2 small VISNs out of a total of 21. To further guide our selections, we also considered a number of other factors, including the number of completed and ongoing projects, new medical facilities, and geographic dispersion. Within each VISN, we selected projects in various stages, CARES projects being monitored by VA according to seven centrally tracked implementation measures, and sites throughout the geographic footprint of each selected VISN. We also interviewed senior officials at 5 veterans service organizations chartered by Congress or recognized by VA for claim representation. (See table 5 for a list of the veterans service organizations that we interviewed.) We also reviewed agency data in VA's CARES Decision, its Implementation Monitoring Report on Capital Asset Realignment for Enhanced Services, and its 5-year capital plan about changes to its real property portfolio and the number and cost of projects needing additional funding. In addition, we reviewed the funding that VA has received for major and minor construction projects and nonrecurring maintenance since fiscal year 2004 in VA budget submission documentation, its 5-year capital plans, and appropriation laws. We assessed the funding and facilities data from VA and determined that they were sufficiently reliable for our purposes.

To determine the extent to which VA's capital planning efforts follow leading practices and provide the information needed for informed decision making, we interviewed VA officials involved in its capital planning efforts. We also collected information on leading capital planning practices from the Office of Management and Budget's Capital Programming Guide and GAO's Executive Guide and compared it with VA's efforts as described in the agency's 5-year capital plan. In addition, we collected information and interviewed officials on VA's new capital planning process, called SCIP, and compared it to leading capital planning practices. To compare VA's efforts with the efforts of other federal agencies that have provided estimates to Congress regarding the magnitude of future real property priorities, we reviewed our previous reports on capital planning across the federal government, including the Department of Defense's future years defense program and efforts by the judiciary in March 1996 to communicate its urgent housing needs to Congress. Finally, we collected VA data on the agency's future real property priorities and reviewed a recent request by Congress that VA develop and submit a comprehensive capital plan, along with other information related to VA's capital planning efforts.

In addition to the individual named above, David Sausville, Assistant Director; Daniel Cain; George Depaoli; Colin Fallon; Wati Kadzai; and Erica Miles made key contributions to this report.
The Department of Veterans Affairs (VA) has undertaken various planning efforts to realign its real property portfolio, including the Capital Asset Realignment for Enhanced Services (CARES), creation of a 5-year capital plan, and its newest effort, the Strategic Capital Investment Planning process (SCIP). Through these efforts, VA has identified numerous real property priorities it believes should be completed if the agency's facilities are to meet veterans' needs for services now and in the future. This congressionally requested report addresses the extent to which VA's capital planning efforts (1) have resulted in changes to its real property portfolio and (2) follow leading practices and provide information for informed decision making. To perform this work, GAO reviewed leading capital planning practices and data on VA's real property portfolio and future priorities. GAO also interviewed VA officials and veterans service organizations, and visited sites in 5 of VA's 21 veterans integrated service networks.

Through its capital planning efforts, VA has taken steps to realign its real property portfolio from hospital-based inpatient care to outpatient care, but a substantial number of costly projects and other long-standing challenges remain. Several of VA's most recent capital projects--such as community-based outpatient clinics, rehabilitation centers for blind veterans, and spinal cord injury centers--were based on its CARES efforts and subsequent capital planning. VA officials and veterans service organizations GAO contacted agreed that these facilities have had a positive effect on veterans' access to services. However, VA has identified several high-cost priorities, such as facility repairs and projects that have not yet been funded. For example, VA reported in its 5-year capital plan for fiscal years 2010-2015 that it had a backlog of $9.4 billion of facility repairs. The 5-year plan further identified an additional $4.4 billion in funding needed to complete 24 of the 69 ongoing major construction projects. Besides substantial funding priorities, VA, like other agencies, has faced underlying obstacles that have exacerbated its real property management challenges and can also impact its ability to fully realign its real property portfolio. GAO has previously reported that such challenges include competing stakeholder interests, legal and budgetary limitations, and capital planning processes that did not always adequately address such issues as excess and underutilized property.

VA's capital planning efforts generally reflect leading practices but lack transparency about the cost of future priorities that could better inform decision making. For example, GAO found that VA's 5-year capital plan links its investments with its strategic goals, assesses the agency's capital priorities, and evaluates various alternatives. Also, SCIP strengthens VA's capital planning efforts by extending the horizon of its 5-year plan to 10 years, providing VA with a longer range picture of the agency's future real property priorities. While these are positive steps, VA's planning efforts lack transparency regarding the magnitude of costs of the agency's future real property priorities, which may limit the ability of VA and Congress to make informed funding decisions among competing priorities.
For instance, for potential future projects, VA's 5-year capital plan lists only the project name and contains no information on what these projects are estimated to cost or the priority VA has assigned to them beyond the current budget year. VA officials said during the review that they are considering the release of future year capital cost estimates to Congress. Transparency about future requirements would benefit congressional decision makers by putting individual project decisions in a long-term, strategic context and placing VA's fiscal situation within the context of the overall fiscal condition of the U.S. government. Providing future cost estimates to Congress for urgent, major capital programs is not without precedent in the federal government. Other federal agencies, such as the Department of Defense, have provided more transparent estimates to Congress regarding the magnitude of their future capital priorities beyond immediate budget priorities. GAO recommends that VA annually provide to Congress the full results of its SCIP process and any subsequent capital planning efforts, including details on the estimated cost of future projects. VA concurred with this recommendation.
In carrying out its Afghan assistance efforts, USAID has experienced a number of systemic challenges that have hindered its ability to manage and oversee contracts and assistance instruments, such as grants and cooperative agreements. These challenges include gaps in planning for the use of contractors and assistance recipients and limited visibility into their numbers. While this statement focuses on the challenges confronting USAID in Afghanistan, our work involving the Departments of Defense and State has found similar issues not only in Afghanistan but also in other countries, such as Iraq. The need for visibility into contracts and assistance instruments to inform decisions and perform oversight is critical, regardless of the agency or the country, as each agency relies extensively on contractors and assistance recipients to support and carry out its respective mission. While USAID has faced challenges, it has also taken actions to help mitigate some of the risks associated with awarding contracts and assistance instruments in Afghanistan. Most notably, through its vendor vetting program, USAID seeks to counter potential risks of U.S. funds being diverted to support criminal or insurgent activity.

Our work has identified gaps in USAID's planning efforts related to the role and extent of reliance on contractors and grantees. For example, we reported in April 2010 that USAID's workforce planning efforts, including its human capital and workforce plans, do not address the extent to which certain types of contractors working outside the United States should be used. We further reported in June 2010 that USAID's workforce plan for fiscal years 2009 through 2013 had a number of deficiencies, such as lacking supporting analyses that covered the agency's entire workforce, including contractors, and not containing a full assessment of the agency's workforce needs, including identifying existing workforce gaps and the staffing levels required to meet program needs and goals. Such findings are not new. We noted, for example, in our 2004 and 2005 reviews of Afghanistan reconstruction efforts that, when USAID developed its interim development assistance strategy, it did not incorporate information on the contractor and grantee resources required to implement the strategy. We determined that this hindered USAID's ability to make informed decisions on resource allocations for the strategy. Further, as mentioned earlier, such findings have not been unique to USAID. For example, in our April 2010 report, we noted that the Department of State's workforce plan generally does not address the extent to which contractors should be used to perform specific functions, such as contract and grant administration.

In the absence of strategic planning for its use of contractors, we found that it was often individual offices within USAID that made case-by-case decisions on the use of contractors to support contract or grant administration functions. In our April 2010 report, we noted that USAID used contractors to help administer its contracts and grants in Afghanistan, in part to address frequent rotations of government personnel, as well as security and logistical concerns. Functions performed by these contractors included on-site monitoring of other contractors' activities and awarding and administering grants. The Departments of Defense and State have also relied on contractors to perform similar functions in both Afghanistan and Iraq.
While relying on contractors to perform such functions can provide benefits, we found that USAID did not always fully address the related risks. For example, USAID did not always include a contract clause required by agency policy to address potential conflicts of interest, and USAID contracting officials generally did not ensure enhanced oversight, in accordance with federal regulations, for situations in which contractors provided services that closely supported inherently governmental functions.

Over the last four years, we have reported on limitations in USAID's visibility into the number and value of contracts and assistance instruments with performance in Afghanistan, as well as the number of personnel working under those contracts and assistance instruments. Having reliable, meaningful data on contractors and assistance recipients is a starting point for informing agency decisions and ensuring proper management and oversight. In 2008, in response to congressional direction, USAID, along with the Departments of Defense and State, designated the Synchronized Predeployment and Operational Tracker (SPOT) database as their system of record to track statutorily required information on contracts and contractor personnel working in either Iraq or Afghanistan, a designation the agencies reaffirmed when the requirement was expanded to include assistance instruments and associated personnel. However, we found that as of September 2011, SPOT still did not reliably track this information. As a result, USAID relied on other data sources, which had their own limitations, to prepare a 2011 report to Congress. Specifically, we found USAID's reporting to be incomplete, particularly in the case of personnel numbers that were based on unreliable data. For example, for the number of contractor and assistance personnel in Afghanistan, USAID developed estimates that, according to a USAID official, were based in part on reports submitted by only about 70 percent of its contractors and assistance recipients. Further, USAID acknowledged that it had limited ability to verify the accuracy or completeness of the data that were reported. Similarly, we found that the Department of Defense underreported the value of its contracts in Iraq and Afghanistan by at least $3.9 billion, while the Department of State did not report statutorily required information on assistance instruments and the number of personnel working on them in either country.

Given the repeated limitations we have found in SPOT and in the agencies' ability to provide statutorily required information, we recommended in 2009, and subsequently reiterated, that the three agencies develop a joint plan with associated time frames to address limitations and ensure SPOT's implementation to fulfill statutory requirements. USAID did not address our 2009 recommendation, while the Departments of Defense and State cited ongoing interagency coordination efforts as sufficient. However, we concluded, based on our findings, that coordination alone is not sufficient and have continued to call for the agencies to develop a plan. We have recently begun reviewing the three agencies' April 2012 report to Congress on their contracts, assistance instruments, and associated personnel in Iraq and Afghanistan and the actions they are taking to improve their database. The Commission on Wartime Contracting in Iraq and Afghanistan also made recommendations in this area in its final report (Transforming Wartime Contracting: Controlling Costs, Reducing Risks (Arlington, Va.: Aug. 2011)).
The Commission's recommendations included, among other things, planning for the use of contractors in contingency settings, ensuring the government can provide sufficient acquisition management and contractor oversight, and taking actions to mitigate the threat of additional waste due to a lack of sustainment by host governments. We are currently reviewing what actions USAID and the Departments of Defense and State are taking to address the Commission's recommendations.

In response to continued congressional attention and their own concerns about actual and perceived corruption and its impact on U.S. and international activities in Afghanistan, U.S. government agencies have established efforts to identify malign actors, encourage transparency, and prevent corruption. Under the auspices of its Accountable Assistance for Afghanistan initiative, USAID is seeking to address some of the challenges associated with providing assistance in Afghanistan. One element of the initiative is the vendor vetting program. In January 2011, in order to counter potential risks of U.S. funds being diverted to support criminal or insurgent activity, USAID created a process for vetting prospective non-U.S. contractors and assistance recipients (i.e., implementing partners) in Afghanistan. This process is similar to the one USAID has used in the West Bank and Gaza since 2006. USAID's process in Afghanistan was formalized in a May 2011 mission order, which established a vetting threshold of $150,000 and identified other risk factors, such as project location and the type of contract or service being performed by the non-U.S. vendor or recipient. The mission order also established an Afghanistan Counter-Terrorism Team that can review and adjust the risk factors as needed.

At the time our June 2011 report on vetting efforts was issued, USAID officials said that the agency's vendor vetting process was still in the early stages and that it would be an iterative implementation process, some aspects of which could change—such as the vetting threshold and the expansion of vetting to other non-U.S. partners. We recommended that USAID consider formalizing a risk-based approach that would enable it to identify and vet the highest-risk vendors and partners, including those with contracts below the $150,000 threshold. We also made a recommendation to promote interagency collaboration to better ensure that non-U.S. vendors potentially posing a risk are vetted. Specifically, we recommended that USAID, the Department of Defense (which had a vendor vetting program), and the Department of State (which did not have a vendor vetting program comparable to USAID's or Defense's) consider developing formalized procedures, such as an interagency agreement, to ensure the continuity of communication of vetting results and supporting intelligence information, so that other contracting activities may be informed by those results. USAID concurred with our recommendations and noted that the agency had already begun to implement corrective measures to ensure conformity with our recommendations and adherence to various statutes, regulations, and executive orders pertaining to terrorism. Specifically, under the May 2011 mission order, the Afghanistan Counter-Terrorism Team is to work to establish an interagency decision-making body in Afghanistan to adjudicate vetting results, establish reporting metrics for USAID's vetting process, and work with the vetting unit to modify, as needed, the criteria used to establish risk-based indicators for vetting.
We have previously reported on systematic weaknesses in USAID's oversight and monitoring of the performance of projects and programs carried out by its implementing partners in Afghanistan. In 2010, we reported that USAID did not consistently follow its established performance management and evaluation procedures with regard to its agriculture and water sector projects in Afghanistan. There were various areas in which the USAID Mission to Afghanistan needed to improve. We found that the Mission had been operating without an approved Performance Management Plan to guide its oversight efforts after 2008. In addition, while implementing partners had routinely reported on the progress of USAID's programs, we found that USAID did not always approve the performance indicators these partners were using and did not ensure, as its procedures require, that implementing partners establish targets for each performance indicator. For example, only two of seven USAID-funded agricultural programs that were active during fiscal year 2009 and included in our review had targets for all of their indicators. Within the water sector, we found that USAID collected quarterly progress reports from five of the six water project implementers for the projects we reviewed, but it did not analyze and interpret this information as required. We also found that USAID could improve its assessment and use of performance data submitted by implementing partners and of program evaluations to, among other things, help identify strengths or weaknesses of ongoing or completed programs. In addition, USAID officials face a high-risk security environment, and the USAID Mission to Afghanistan has experienced high staff turnover, both of which hinder program oversight. For example, in July 2010, we reported that the lack of a secure environment has challenged the ability of USAID officials to monitor construction and development efforts. Also, USAID personnel serve 1-year assignments with an option to extend for an additional year, which USAID acknowledged has hampered program design and implementation. The Department of State's Office of the Inspector General noted in its 2010 inspection of the entire embassy and its staff, including USAID, that 1-year assignments coupled with multiple rest-and-recuperation breaks limited the development of expertise and contributed to a lack of continuity. We also found that a lack of documentation of key programmatic decisions and an insufficient method of transferring knowledge to successors had contributed to the loss of institutional knowledge, a challenge we reported USAID should address. Absent consistent application of its existing performance management and evaluation procedures and mechanisms for knowledge transfer, USAID programs are more vulnerable to corruption, waste, fraud, and abuse. In 2010, we recommended, among other things, that the Administrator of USAID take steps to (1) address preservation of institutional knowledge, (2) ensure programs have performance indicators and targets, and (3) consistently assess and use program data and evaluations to shape current programs and inform future programs. 
USAID concurred with these recommendations and identified several actions the agency is taking in Afghanistan to address them, including the following: In 2011, USAID established mandatory technical guidance for program monitoring officials on how to establish and where to maintain files, along with key responsibilities of the office director for ensuring that files are maintained before officials leave their positions. In 2010, USAID approved a new performance management plan for its agriculture programs and worked with its implementing partners to align their existing indicators with those in the new plan. In 2011, USAID delegated more authority to field program officers to serve as activity managers of agriculture programs, making them responsible for conducting regular project monitoring and reporting on program performance, verifying data reported by implementing partners, and assuring the quality of data being reported through regular site visits. In addition, USAID has taken steps to increase the use of third-party monitoring to ensure data integrity and quality. Risk assessments and internal controls to mitigate identified risks are key elements of an internal control framework that provides reasonable assurance that agency assets are safeguarded against fraud, waste, abuse, and mismanagement. Although USAID conducted preaward risk assessments for most of its bilateral direct assistance to the Afghan government, we found that USAID's policies did not require preaward risk assessments in all cases. For example, we reported in 2011 that USAID did not complete preaward risk assessments, such as determining the awardees' capability to independently manage and account for funds, in two of the eight cases of bilateral direct assistance. USAID made those two awards after the USAID Administrator had committed to Congress in July 2010 that USAID would not proceed with direct assistance to an Afghan ministry before it had assessed the institution's capabilities. We recommended that USAID update its risk assessment policies to reflect the USAID Administrator's commitment to Congress. USAID has since updated its policies to require preaward risk assessments for all bilateral direct assistance awards, periodic reassessment, and risk mitigation measures, as appropriate. Since October 2011, USAID has awarded $35 million in direct assistance funds to two Afghan ministries and, in compliance with its updated policies, completed risk assessments prior to awarding the funds in both cases. We also found that USAID established general financial and other controls in its bilateral direct assistance agreements with Afghan ministries, including requirements that the ministries establish separate, noncommingled bank accounts; grant USAID access rights to the bank accounts; have a monitoring and evaluation plan; comply with periodic reporting requirements; and maintain books and records subject to audit. In addition to these general financial controls, USAID is required to establish additional monitoring and approval controls in direct bilateral assistance agreements that provide USAID funds to Afghan ministries to contract for goods and services. USAID had agreements with two Afghan ministries that allowed them to contract out. However, we previously found that USAID did not always document its approval of these ministries' procurements prior to contract execution. We recommended that USAID ensure compliance with the monitoring and approval requirements. 
We are now following up with USAID to ensure it is implementing our recommendation. With respect to direct assistance provided multilaterally through public international organizations such as the World Bank, USAID's policy is generally to rely on the organization's financial management, procurement, and audit policies and procedures. We found, however, that USAID has not consistently complied with its multilateral trust fund risk assessment policies in awarding funds to the World Bank's ARTF. For example, in 2011, we reported that USAID did not conduct a risk assessment before awarding an additional $1.3 billion to the World Bank for the ARTF. We also found that USAID did not conduct preaward determinations for 16 of 21 modifications to the original World Bank grant agreement. In response to our findings and a prior GAO report, USAID revised and expanded its guidance on preaward risk assessments for the World Bank and other public international organizations. Under the revised guidance, USAID is required to determine the World Bank's level of responsibility through consideration of several factors, including the quality of the World Bank's past performance and its most recent audited financial statements. The World Bank has established financial controls over donor contributions to the ARTF. For example, the World Bank hired a monitoring agent responsible for monitoring the eligibility of salaries and other recurrent expenditures that the Afghan government submits for reimbursement against ARTF criteria. The World Bank also reports that it assesses projects semiannually as part of regular World Bank supervision, in accordance with its policies, procedures, and guidelines and based in part on project visits. However, we found that the financial controls established by the World Bank over the ARTF face several challenges: The World Bank and international donors have expressed concern over the level of ineligible expenditures submitted by the Afghan government for reimbursement. While ineligible expenditures are not reimbursed, the bank considers the level of ineligible expenditures to be an indicator of weaknesses in the Afghan government's ability to meet agreed-upon procurement and financial management standards. Afghanistan's Control and Audit Office conducts audits of Afghan government programs, including those funded by the ARTF, but lacked qualified auditors and faced other capacity constraints, according to the Special Inspector General for Afghanistan Reconstruction and USAID. As a result, the office used international advisers and contracted auditors, funded by the World Bank, to help ensure that its audits of the ARTF complied with international auditing standards. Security conditions prevented Afghanistan's Control and Audit Office auditors from visiting most of the provinces where ARTF funds were being spent. The office was able to conduct audit tests in only 10 of Afghanistan's 34 provinces from March 2009 to March 2010 and issued a qualified opinion on the financial statements of the ARTF's salary and other recurrent expenditures. Mr. Chairman, Ranking Member Carnahan, and Members of the Subcommittee, this concludes our statement. We would be happy to answer any questions you may have at this time. For further information on this statement, please contact John P. Hutton at (202) 512-4841 or [email protected] or Charles Michael Johnson, Jr. at (202) 512-7331 or [email protected]. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Johana R. Ayers, Assistant Director; Tetsuo Miyabara, Assistant Director; Pierre Toureille, Assistant Director; Thomas Costa; David Dayton; Emily Gupta; Farahnaaz Khakoo-Mausel; Bruce Kutnick; Angie Nichols-Friedman; Mona Sehgal; and Esther Toledo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2002, the United States has appropriated nearly $90 billion to help stabilize Afghanistan and build the Afghan government's capacity to provide security, enhance governance, and develop a sustainable economy. To assist Congress in its oversight, GAO has issued over 100 reports and testimonies related to U.S. efforts in Afghanistan, including those managed by USAID and the Departments of Defense and State. USAID provides assistance to Afghanistan through contracts and assistance instruments, such as grants and cooperative agreements, and in the form of direct assistance—funding provided through the Afghan national budget for use by its ministries. Direct assistance is provided (1) bilaterally to individual Afghan ministries or (2) multilaterally through trust funds administered by the World Bank and the United Nations Development Program. This testimony discusses findings from GAO reports issued primarily in 2010 and 2011 that cover USAID's (1) management of contracts and assistance instruments, (2) oversight of development-related program performance and results, and (3) accountability for direct assistance. The U.S. Agency for International Development (USAID) has experienced systemic challenges that have hindered its ability to manage and oversee contracts and assistance instruments in Afghanistan. Key challenges include gaps in planning for the use of contractors and assistance recipients and having visibility into their numbers. For example, GAO reported in April 2010 that, absent strategic planning for its use of contractors, individual offices within USAID often made case-by-case decisions on using contractors to support contract or grant administration, and risks, such as possible conflicts of interest, were not always addressed. While having reliable data on contractors and assistance recipients is a starting point for informing agency decisions and ensuring proper management, GAO has also reported on limitations in USAID's visibility into the number and value of contracts and assistance instruments in Afghanistan, as well as the number of personnel working under them. USAID, along with other agencies, has not implemented GAO's recommendation to address such limitations. USAID, however, has taken other actions to mitigate risks associated with awarding contracts and assistance instruments in Afghanistan. In June 2011, GAO reported on USAID's vendor vetting program, then in its early stages, which was designed to counter potential risks of U.S. funds being diverted to support criminal or insurgent activity. GAO recommended that USAID take a more risk-based approach to vet non-U.S. vendors and develop formal mechanisms to share vetting results with other agencies, both of which USAID agreed to do. GAO has found systematic weaknesses in USAID's oversight and monitoring of project and program performance in Afghanistan. In 2010, GAO reported that USAID did not consistently follow its established performance management and evaluation procedures for Afghanistan agriculture and water sector projects. For example, only two of seven USAID-funded agricultural programs included in GAO's review had targets for all their performance indicators. Moreover, the USAID Mission was operating without a required performance management plan. In addition, GAO reported on a lack of documentation of key programmatic decisions and an insufficient method to transfer knowledge to successors. 
USAID has taken several actions in response to these findings, such as updating its performance management plan and establishing mandatory guidelines on file maintenance to help ensure knowledge transfer. USAID has established and generally complied with various financial and other controls in its direct assistance agreements, such as requiring separate bank accounts and maintenance of records subject to audit. However, GAO found in 2011 that USAID had not always assessed the financial risks in providing direct assistance to Afghan government entities before awarding funds. For example, USAID did not complete preaward risk assessments in two of eight cases of bilateral assistance GAO identified. With regard to direct assistance provided multilaterally through the World Bank's Afghanistan Reconstruction Trust Fund (ARTF), GAO found in 2011 that USAID had not consistently complied with its own risk assessment policies; for example, USAID had not conducted a risk assessment before awarding $1.3 billion to the ARTF in March 2010. In response to GAO reports, USAID revised and expanded its guidance on preaward risk assessments for the World Bank and other public international organizations. GAO is not making new recommendations in this statement but has previously made numerous recommendations aimed at improving USAID's management and oversight of assistance funds in Afghanistan. USAID has concurred with most of these recommendations and has taken or planned steps to address them.
NTIS’s basic statutory function is to collect research reports, maintain a bibliographic record and permanent repository of these reports, and disseminate them to the public. In addition, NTIS has developed a variety of information-related services. NTIS charges user fees for the sale of its products to the public and services to federal agencies. Under statutory authority enacted in 1950, NTIS collects reports containing scientific, technical, and engineering information from both domestic and foreign sources in a repository and makes the information available to (1) business and industry, (2) state and local governments, (3) other federal agencies, and (4) the general public to increase U.S. competitiveness in the global economy. The statute does not define scientific, technical, and engineering information. However, the Secretary of Commerce has interpreted this to include “all types of information which have a more or less direct bearing on business and industry generally.” including “economic information, market information, and related information so long as it is reasonably specific and bears some direct relationship to the organization and operation of industrial or business enterprise.” NTIS’s enabling legislation authorized it to charge fees for its The Secretary of Commerce described such information as products and established a policy to recover all costs, as feasible, through the fees. NTIS’s authority was revised in the National Technical Information Act of 1988. cooperative agreements, joint ventures, and other transactions as necessary in the conduct of the business of NTIS, and declared the NTIS repository to be a permanent federal function that could not be transferred to the private sector without congressional approval. The act was subsequently amended by the American Technology Preeminence Act of 1991, This act gave the agency authority to also enter into contracts, required all costs associated with acquisition, processing, storage, bibliographic control, and archiving to be recovered primarily by fees; required agencies to transfer unclassified scientific, technical, and engineering information which results from federally funded research and development to NTIS; and provided that NTIS’s use of new methods or media for information dissemination should include producing and disseminating information in electronic format. Further, the Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act of 1993 established a revolving fund for the payment of all expenses incurred by NTIS and gave it the authority to use that fund without further appropriations action by Congress. Pub. L. No. 100-519 (Oct. 24, 1988); 15 U.S.C. 3704b. million. Accordingly, in August 1999, the Secretary of Commerce proposed closing NTIS by September 30, 2000, because he believed that declining sales revenues would not continue to be sufficient to recover all of the agency’s operating costs. The Secretary attributed this decline partly to other agencies’ practice of making their research results available to the public for free through the Web. He also proposed transferring NTIS’s research report archives to the Library of Congress and requiring federal agencies to give the public free online access to new research reports. GAO-01-490. 
In a 2001 report on these issues (GAO-01-490), we suggested for congressional consideration that Congress look at how this information was defined; whether there was a need for a central repository of this information; and, if a central repository was maintained, whether all information should be retained permanently and what business model should be used to manage it. The Secretary of Commerce agreed with our assessment and raised as a primary question whether there was a need for a central repository in view of the increasing availability of newer publications from sources other than NTIS. The Secretary also noted that the need for a central repository depended on whether the information would be permanently maintained by agencies and whether the information would be easy to locate without the kind of bibliographic control that NTIS provides. Subsequent to the issuance of our reports, in December 2003, Congress passed the 21st Century Nanotechnology Research and Development Act, which provided a coordinated federal approach to stimulating nanotechnology research and development. The act directed the Secretary of Commerce to establish a clearinghouse for information related to the commercialization of nanotechnology research using the resources of NTIS to the extent possible. As of September 2012, NTIS noted that it held over 700 publications in its nanotechnology collection. NTIS currently operates as one of 12 independent bureaus within Commerce, with the mission of helping to promote the nation's economic growth by providing access to information that stimulates innovation and discovery. Headquartered in Alexandria, Virginia (with warehouses in Alexandria, Virginia, and Brandywine, Maryland), the agency is organized into five primary offices. To carry out its statutory functions of collecting and maintaining a permanent repository and bibliographic record of research reports, and disseminating them, the agency offers a variety of products, such as fee-based access to the reports in its repository. In addition, NTIS offers information-related services to federal agencies that are less directly related to its basic statutory function, such as distribution and order fulfillment, web hosting, and e-training. While NTIS's service offerings have resulted in increased revenues, allowing the agency to remain financially self-sustaining, it has experienced a net cost relative to its products, calling into question whether the agency's function of acting as a self-financing repository of technical information is still viable. NTIS is led by a director, who is aided by two executives—a chief information officer and a chief financial officer. In addition, three operational offices have a variety of responsibilities for providing products and services that include collecting and disseminating technical reports, offering access to other information sources, and providing information-related services to federal agencies. Figure 1 displays the NTIS organization. NTIS operates as a unit within Commerce and receives oversight from the Deputy Secretary of Commerce, the Director of NIST, and an advisory board. In this regard, the NTIS Director communicates progress toward agency goals to the Deputy Secretary of Commerce. For example, the Director participates in biweekly executive management meetings that are held with the Deputy Secretary. At these meetings, executive leads from each of Commerce's components report on the status of performance and strategic goals within their offices, among other things. 
In addition, under a Commerce departmental order, the NTIS Director also reports to the Director of NIST. As outlined in the Commerce organization chart and stated in the Commerce departmental organization order, the NTIS Director is to report and be responsible to the Director of NIST. In turn, the Director of NIST prepares the NTIS Director's annual performance appraisal. Beyond this, the Director of NIST stated that NIST does not provide any other operational or financial oversight functions for NTIS. For example, NIST does not approve NTIS's budget (although it does coordinate NTIS's budget for final inclusion in the department's overall budget, which is approved by Commerce). Further, NTIS receives guidance on its operations from the NTIS Advisory Board, which was established by law in 1988 to review the general policies and operations of NTIS, including policies related to fees and charges for its products and services (15 U.S.C. 3704b(c)). The board, composed of a chairperson and four members appointed by the Secretary of Commerce, is required to meet at least every 6 months to discuss NTIS's activities. The board's last meeting took place in late October 2012, and, according to the notice of open meetings in the Federal Register, the intended focus was on the agency's strategic business plan. The board submits an annual report to the Secretary of Commerce, which includes strategic and tactical recommendations regarding NTIS's future operations. As of late October 2012, NTIS was supported by 181 staff, all but 6 of whom held full-time positions. These included 103 NTIS employees and 78 contractors. Table 1 shows the number of staff dedicated to the Office of the Director and each of the agency's other primary offices as of October 2012. Among these offices, executive functions, such as directing NTIS activities and developing NTIS budgets and policies for the use of information technology, are carried out by the Office of the Director, along with the offices of the Chief Financial Officer and the Chief Information Officer. The three other operational offices have a variety of responsibilities for providing NTIS's products and services. Table 2 summarizes the specific responsibilities and functions of these offices. As part of its basic statutory function to collect and disseminate technical reports, NTIS offers a variety of fee-based products. According to the agency, its customer base for these products includes scientists, engineers, the business community, librarians, information specialists in government and academia, and the general public. The agency has organized its products into three lines of business—"Technical Reports," "Clearinghouse," and "Publishing"—as described below. To carry out its statutory function of collecting and disseminating such information, NTIS maintains a searchable repository that contains bibliographic records for the over 2.5 million scientific, technical, engineering, and business research reports it has acquired from federal government agencies, state and local governments, international organizations, and private-sector organizations. NTIS's bibliographic database provides for the classification and cataloging of the records in this database. For example, according to the NTIS Database Search Guide, NTIS classifies its records into 39 categories. These categories can be used for searching the contents of NTIS's database. 
The database covers a host of scientific and technical subjects, such as biology, chemistry, physics, transportation, health care, and the environment. Of the 2.5 million reports, NTIS noted that approximately 700,000 reports have been digitized, with the remainder in physical form, such as paper or microfilm. Access to the reports is provided both through the direct sale of individual reports and by subscription. Individual reports can be purchased via postal mail; phone; e-mail; and NTIS’s online ordering system, accessed through its website, http://www.ntis.gov. Subscription-based access to the reports is obtained through, among other things, NTIS’s National Technical Reports Library, which provides subscribers with the ability to search the bibliographic records repository and to access the approximately 700,000 digitized reports in portable document format (PDF). In addition, NTIS’s Selected Research Service allows a subscriber to select from more than 378 subcategories and automatically receive reports tailored to that area of interest. According to NTIS, reports that have not been digitized can be provided in a digital format when a customer purchases a copy. Prices for individual reports and subscriptions vary. For example, an electronic copy of a report from the Economics and Statistics Administration, Benefits of Manufacturing Jobs: Executive Summary, can be purchased for $15; a “customized CD” for this report can be purchased for $30. Further, an electronic copy of a report from the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center, NASA Robotics for Space Exploration, can be purchased for $15; a “customized CD” for this report can be purchased for $30. With regard to subscriptions, access to the National Technical Reports Library is sold as an annual subscription to institutions based on the number of individuals accessing the library. For example, as of October 2010, an annual subscription providing access for up to 3,000 individuals costs $2,100, while an annual subscription providing access for 18,001 to 28,000 individuals costs $11,200. In addition, subscriptions can be purchased for a specific number of issues of a particular document type. For example, six issues of the Livestock, Dairy, and Poultry Outlook can be purchased for $91. In addition to the technical reports that it collects and disseminates, NTIS disseminates publications covering a wide array of topics on behalf of other federal agencies. According to NTIS, these agencies request that NTIS distribute the publications in print or electronically. The following are examples of the federal products distributed: Standard Occupational Classification Manual. A manual containing information on all occupations in the national economy classified according to the system used by federal statistical agencies for the purpose of collecting, calculating, analyzing, or disseminating occupational data. Food and Drug Administration Food Code Manual. A code and reference document that provides technical and legal information about the regulations of the retail and food service industry. North American Industry Classification System. A publication that details a system for the collection, analysis, and dissemination of industrial statistics used by government policy analysts, academics, researchers, the business community, and the public. Export Administration Regulations. 
A compilation of regulations issued by Commerce's Bureau of Industry and Security relating to the control of certain exports, re-exports, and activities. National Correct Coding Policy Manual. A manual developed by the Centers for Medicare and Medicaid Services to control improper coding leading to inappropriate payment of Medicare Part B claims. The manual provides guidance for providers on the correct coding of claims being submitted for reimbursement. Through a memorandum of understanding or interagency agreement, NTIS also provides access to information collected from federal agencies, which it refers to as its "Publishing" line of business. In some instances, NTIS repackages the information with additional features. According to NTIS, agencies initiate the request for these services. These offerings include the following: Drug Enforcement Administration database. The Drug Enforcement Administration database identifies persons and organizations authorized to handle certain controlled drug substances and chemicals under the Controlled Substances Act. NTIS is the authorized official distributor of the database. NTIS provides online subscription access to the database on a weekly, monthly, or quarterly basis. NTIS also provides this information via a searchable CD-ROM. Death Master File. This file, maintained by the Social Security Administration (SSA), contains approximately 85 million records of deaths that have been reported since 1936. The file is used by government; credit reporting organizations; and the financial, investigative, medical research, and other industries to verify deaths. Through an agreement with SSA, NTIS is the only authorized official distributor of the Death Master File. In this regard, NTIS provides access to this information, including on DVD, and provides the means to search and download the Death Master File online. World News Connection. NTIS offers access to this online news service, which provides translated and English-language news and information from non-U.S. media sources. The information is obtained from full text and summaries of newspaper articles, conference proceedings, television and radio broadcasts, periodicals, and nonclassified technical reports. The material in World News Connection is provided to NTIS by the Open Source Center, a U.S. government agency that provides analysis of foreign open source intelligence. In addition to its product offerings, NTIS offers a variety of fee-based services to federal agencies that are less directly related to its basic statutory function of collecting and disseminating scientific and technical information. These include services that leverage capabilities NTIS has developed in the course of carrying out its mission. Its five service offerings are distribution and order fulfillment, web-based services and federal cloud computing, brokerage services, e-training and knowledge management services, and digitization and scanning services. To provide its services, NTIS enters into memorandums of understanding or interagency agreements with agencies. Further, NTIS offers some of these services through public-private partnerships with private industry, which it refers to as "joint venture partners." The five types of services are described below. Through a memorandum of understanding or interagency agreement, NTIS distributes large amounts of information products for federal agencies. 
According to NTIS officials, these services differ from those provided by its Clearinghouse in that they are used for distributing large quantities of agencies' products rather than selling individual copies of publications. NTIS identified five primary clients for these services: the Internal Revenue Service, the Department of Agriculture, the Department of Education, the Pension Benefit Guaranty Corporation, and the Administrative Office of the U.S. Courts. For example, NTIS currently has an agreement with the Department of Agriculture to distribute on its behalf health and nutrition educational materials in the form of brochures, posters, and similar nutritional products. It also has an agreement with the Department of Education to perform similar services. In 1988, Congress required NTIS to implement new methods or media for disseminating technical information; a 1992 amendment specified that this should include producing and disseminating products in electronic formats. According to NTIS, this has been a primary basis for its transformation from a static, paper-based distribution operation to a modern, computer-based model, and also a basis for NTIS to provide information-dissemination services to other agencies. For example, NTIS's information systems infrastructure enables it to host federal agencies' applications and websites. The agency currently has an agreement with the Federal Insurance and Mitigation Administration to provide hosting for two of its websites and associated systems and to host its web-based tool for reviewing underwriting and claims operations. NTIS has also expanded its infrastructure to provide cloud computing services and, according to the agency, is currently offering infrastructure-as-a-service and software-as-a-service. For example, along with joint venture partner Carney, Inc., NTIS had an agreement with the National Archives and Records Administration and currently has an agreement with the Social Security Administration to configure and host the "Jive" platform. According to NTIS, the agency has eight primary clients for its web-based services and federal cloud computing offerings, including the Department of Homeland Security, the Internal Revenue Service, and other federal agency initiatives. NTIS provides billing and collection services on a reimbursable basis to other agencies that, like itself, charge for products and services but lack the necessary financial infrastructure to do their own billing and collecting. The agency refers to these as its "Brokerage Services." For example, NTIS had an agreement with the National Agricultural Library to develop, implement, and operate account maintenance, invoicing, and collection procedures for the fees charged by the National Agricultural Library to users of its photocopy and loan services. In addition, the agency had an agreement with the National Library of Medicine to perform invoicing, accounting, and collection services for its Interlibrary Loan services. NTIS officials stated, however, that the agency plans to stop marketing its brokerage services due to decreased demand for this service. NTIS's service offerings have also expanded to e-training and knowledge management. Specifically, in conjunction with joint venture partners, the agency provides collaborative software solutions, learning management systems and support services, training evaluation software, and talent management applications. 
For example, NTIS entered into an agreement with Booz Allen Hamilton to provide, among other things, program management; secure Internet hosting; and operations, maintenance, and support services for the Defense Manpower Data Center's enterprise training program. According to NTIS, as of May 2012, it had 28 primary government clients for this service offering, including the Departments of Justice, the Interior, and the Treasury, and the U.S. Patent and Trademark Office. As another service, NTIS digitizes various document types, such as microfilm or microfiche and paper forms, to assist agencies in complying with section 508 standards. Further, it offers storage and distribution for the documents that it digitizes. For example, NTIS has an agreement with SSA to provide alternative modes of receiving SSA notices and other communications. NTIS provides this service with the assistance of its joint venture partners, Vastec, Inc. and Braille Works, Inc., and offers the information on data compact disc, in large print, and on audio compact disc. As of May 2012, the agency had six primary clients for its digitization and scanning services, including SSA and the Department of Justice. As a fee-for-service entity, NTIS generates its revenues exclusively from its products and services, and all of its revenues, expenses, and capital expenditures are deposited into and paid out of its revolving fund. Overall, NTIS had net earned revenues for 8 of the last 11 fiscal years. For example, for fiscal year 2011, the agency reported that net earned revenues from all its functions (products and services) totaled about $1.5 million. According to NTIS's Financial Report for fiscal year 2011, the revolving fund's ending unobligated balance was approximately $7.4 million. However, over most of the last decade, the agency has incurred net costs for its products. Specifically, NTIS product expenditures exceeded revenues in 10 of the past 11 fiscal years, and the agency lost, on average, about $1.3 million on its products over the last 11 years. In contrast, NTIS's overall financial performance has been supported by revenues from its service offerings; the agency's service revenues increased, on average, by about $1.8 million over the last 11 years. In particular, for fiscal year 2011, service revenues were about $53.5 million, costs incurred were about $52 million, and overall net earned revenue from services was approximately $1.5 million. NTIS officials attribute most of the net earned revenue to the agency's agreements with the Departments of Agriculture and Education and the Social Security Administration for various service offerings. Table 3 identifies the net earned revenue or net cost for NTIS products and services over the last 11 fiscal years, as reported by NTIS. In those years in which NTIS had net costs, the agency was sustained by cumulative net earned revenues from previous years' operations. Figure 2 illustrates the net earned revenues and net costs associated with NTIS's products and services from fiscal year 2001 through fiscal year 2011. The decline in revenue for its products continues to call into question whether NTIS's basic statutory function of acting as a self-financing repository and disseminator of scientific and technical information is still viable. This is further highlighted by the fact that the services that are financially sustaining the agency are less directly related to this function. 
Recognizing its financial position, NTIS has conducted analyses and identified in its 2011-2016 Strategic Plan actions to help address net costs from its products, including its technical reports. The plan emphasizes that the agency's collection, culture, and information technology infrastructure are its main strengths; that continued use of less robust business systems and an aging workforce are its primary weaknesses; that growth opportunities still exist in the various sectors served by NTIS, whether through products or services; and that NTIS is threatened by, and will have to overcome, a shrinking customer base for its products. The plan identifies three strategic initiatives to guide NTIS during this period: 1. Increase revenue by enhancing the number of acquisitions, creating new products, reaching more customers, and adding value to what NTIS collects, and reduce costs by reviewing and improving key business processes. 2. Improve NTIS's utilization by other agencies by increasing the breadth and depth of its own collection and enhancing the suite of information management services that it can provide. 3. Achieve workforce excellence by focusing on identifying and acquiring the critical workforce skills required to accomplish the agency's mission in a rapidly changing world. Beyond the initiatives identified in the strategic plan, the Director of NTIS also provided information on several other initiatives under way to address the budget shortfalls from products. These initiatives include the following product and organizational improvements: Enhancing the accessibility of federal science content by shifting from a pricing model for stand-alone products (e.g., paper/print, microfiche, and compact disc media) to one that is subscription-based. Repositioning NTIS to support open government initiatives in science—meeting with agencies such as NIST, the National Oceanic and Atmospheric Administration, and the Government Printing Office to address how NTIS can reposition its programs to support current science information needs. Building collections of reports based on themes and categories that will be supported through subscriptions. Adjusting the NTIS business model to support customers' increased demand for subscriptions. Reducing staff—for example, the agency has received authorization to offer early retirement to eligible employees and has stopped hiring additional staff; the agency anticipates that employee attrition will further reduce current staffing levels. Notwithstanding these efforts, NTIS will likely continue to face challenges in recouping the costs of its products, given the increasing availability of technical information from other sources. From fiscal year 1990 through 2011, most of the additions to NTIS's repository were older reports published in the year 2000 or before; however, the greater demand was for more recently published reports. In this regard, the agency added 841,502 reports to its repository from 1990 through 2011. Of the reports added, approximately 62 percent, or 524,256, had publication dates of 2000 or earlier, while approximately 38 percent, or 317,246, were published from 2001 to 2011. Specifically, the reports added to the repository during this period were as follows: 79,943 reports published in 1989 and prior years, 444,313 reports published from 1990 through 2000, 129,591 reports published from 2001 through 2004, 126,225 reports published from 2005 through 2008, and 61,430 reports published from 2009 through 2011. 
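As a quick check on these figures, the short computation below recomputes the percentage split from the report counts cited above. It is purely illustrative; the counts are taken directly from the text.

```python
# Recomputing the shares cited above from the report counts in the text.
added_by_period = {
    "1989 and earlier": 79_943,
    "1990-2000": 444_313,
    "2001-2004": 129_591,
    "2005-2008": 126_225,
    "2009-2011": 61_430,
}
total = sum(added_by_period.values())  # 841,502 reports added, 1990 through 2011
older = added_by_period["1989 and earlier"] + added_by_period["1990-2000"]
newer = total - older
print(f"total added: {total:,}")
print(f"published 2000 or earlier: {older:,} ({older / total:.0%})")  # ~62 percent
print(f"published 2001-2011:       {newer:,} ({newer / total:.0%})")  # ~38 percent
```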
Figure 3 shows the distribution of reports added from 1990 through 2011 by publication date group. With regard to demand for the reports, we estimate that, during fiscal years 2000 through 2011, NTIS distributed (sold) one or more copies of about 419,657, or almost 50 percent, of the 841,502 reports added to its repository during fiscal years 1990 through 2011. Of these 419,657 reports, approximately 78 percent were distributed through a subscription only. NTIS officials attributed this to the fact that subscriptions are a cost-effective way for libraries to meet their collection development requirements within a specific or broad area of interest. The officials noted that direct sales are generally to customers interested in specific topics. However, as shown in figure 4, the agency was more likely to distribute a higher percentage of more recently published reports than older ones. For example, we estimate that between 96 and 100 percent of the reports published from 2001 through 2011 had been distributed, while only about 21 percent of reports published in 1989 or earlier were distributed during this time period. That is, the demand for older holdings in the NTIS repository is lower than for newer publications. Based on our sample, we estimate that most (about 74 percent) of the reports added to NTIS's repository during fiscal years 1990 through 2011 were readily available from other public websites, and nearly all of these (95 percent) could be obtained for free. Specifically, we estimate that approximately 621,917, or about 74 percent, of the 841,502 reports added to NTIS's repository from fiscal years 1990 through 2011 were readily available from one of the other four publicly available sources we searched (i.e., the issuing organization's website; the Government Printing Office's Federal Digital System website; the U.S. government's official web portal, USA.gov; or another website located through a Google search). The source that most often had the report we were searching for was another website located through http://www.Google.com. In addition, of the reports added since fiscal year 1990, those with more recent publication dates were more likely than older ones to be available from other public sources. For example, approximately 87 percent of the reports with publication dates from 2009 to 2011 were available elsewhere, while 55 percent of those published in 1989 or earlier were. Figure 5 shows the estimated availability of reports added to NTIS's repository since fiscal year 1990 by date of publication. As shown in figure 6, of the reports that were found to be readily available from one of the other four sources that we searched, about 61 percent had been distributed (sold) by NTIS. Conversely, of the reports that were not found to be readily available from one of the other four sources, most, or about 82 percent, had not been distributed by NTIS. Not only were most of the reports in our sample available from sources other than NTIS, but about 95 percent of the reports available elsewhere could be obtained free of charge from one of the four other sources we searched. The remaining 5 percent were available from the public sources for a fee. Moreover, the year of publication did not appear to have an effect on whether a report was available free of charge. 
For example, the following reports available for a fee from NTIS were available free of charge from the issuing organization's website: Hazardous Waste Characteristics Scoping Study, November 1996, Environmental Protection Agency, 278 pages. (At NTIS, print on demand costs $73, electronic $25.) "Homeland Security: Intelligence Indications and Warning," December 2002, Naval Postgraduate School, 5 pages. (At NTIS, print on demand costs $17, electronic $15.) Export Controls: System for Controlling Exports of High Performance Computing Is Ineffective, 2000, GAO, 60 pages. (At NTIS, print on demand costs $48, electronic $15.) FDA Enforcement Report: July 20, 2011, July 2011, Food and Drug Administration, 28 pages. (At NTIS, print on demand costs $33, electronic $15.) Principal Rare Earth Elements Deposits of the United States: A Summary of Domestic Deposits and a Global Perspective, 2010, Geological Survey, 104 pages. (At NTIS, print on demand costs $60, electronic $25.) Of those reports we found available elsewhere, figure 7 shows the estimated percentage that were available for free, by the year the document was published. The Director of NTIS acknowledged two factors that have contributed to the free, public availability of reports from other sources: First, federal agencies are providing information, including their federal scientific, technical, and engineering information products, on their websites in electronic format and on central federal information websites such as http://www.data.gov and http://www.science.gov, where the information is available for immediate download at no cost. Second, federal agencies are participating in programs with Internet search engines that permit the public to locate their products for free or for less than NTIS charges. In addition, commercial vendors are able to obtain these information products from agency websites or through Internet searches and to make them available for free or at a price lower than that offered by NTIS. Further, NTIS acknowledged in its strategic plan that because the Internet continues to change the way people acquire and use information and permits federal agencies to make their information products available for free, NTIS is challenged to meet its statutory mandate as a self-financing repository and disseminator of technical information. Notwithstanding these acknowledgments, NTIS continues to charge for reports that are freely available from other public sources. NTIS serves as a permanent repository and disseminator of technical information and, by statute, is required to be financially self-sustaining, to the fullest extent feasible, by charging fees for its products and services. While the agency had cumulative net earned revenues as of September 30, 2011, its costs exceeded revenue by an average of about $1.3 million over the last 11 years from the sale of technical information. The agency's net revenue now comes primarily from services that are less directly related to its basic statutory function. The decline in sales of technical information is due in part to the increasing availability of this information from other sources, including websites and Internet search tools, often at no charge. Charging for information that is freely available elsewhere is a disservice to the public and may also be wasteful insofar as some of NTIS's customers are other federal agencies. 
Taken together, these considerations suggest that the fee-based model under which NTIS currently operates for disseminating technical information may no longer be viable or appropriate. In light of the agency's declining revenue associated with its basic statutory function and its charging for information that is often freely available elsewhere, Congress should consider examining the appropriateness and viability of the fee-based model under which NTIS currently operates for disseminating technical information to determine whether the use of this model should be continued. The Acting Secretary of Commerce provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, Commerce expressed appreciation for our study and for our focus on the initiatives that NTIS has undertaken. However, the department said NTIS did not believe our conclusion (that the fee-based model under which it operates for disseminating technical information may no longer be viable or appropriate) fully reflects the additional value that NTIS provides through the work it performs. Commerce stated that, through its federal clearinghouse and repository, the agency provides federally funded reports that are not otherwise readily available, such as most of those issued prior to 1989. Additionally, Commerce stated that NTIS recognizes that it cannot remain financially solvent solely through sales and subscriptions of technical reports when these products are expected to be widely available for free. It added that NTIS is moving in the direction of "open and improved access" to information but recognizes that it needs to maintain a sustainable financial model and continue providing enhanced value to the information generated by other federal agencies. In this regard, Commerce described features that it believes have added value to the technical reports that NTIS maintains, how these features improve access to the documents and related data, and specific information services that the agency provides to science information professionals. Our report highlighted various initiatives that NTIS has undertaken to provide older reports that might not otherwise be readily available and to increase the value of its technical reports, information management services, and technology transfer capabilities. However, as discussed in the report, we found that the demand for older holdings in the agency's repository is lower than for newer publications. For example, we estimate that between 96 and 100 percent of the reports published from 2001 through 2011 had been distributed, while only about 21 percent of reports published in 1989 or earlier were distributed during this time period. Further, as the agency acknowledged, its financial health rests on both its information product and service missions. Also, as we state in our report, the agency's net revenue now comes primarily from services that are less directly related to its basic statutory function, while sales of its technical information products have resulted in net losses. This decline in sales of NTIS's technical reports is due in part to the increasing availability of this information from other sources, including websites and Internet search tools, often at no charge. With these factors in mind, we stand by our conclusion that the fee-based model under which NTIS currently operates for disseminating technical information may no longer be viable or appropriate. 
NTIS also provided technical comments on the report via e-mail, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees. We are also sending copies to the Secretary of Commerce; the Director, NTIS; the Director, NIST; and other interested parties. Copies of this report will also be available at no charge on GAO's website at http://www.gao.gov. Should you or your staffs have any questions about information discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine (1) how the National Technical Information Service (NTIS) is currently organized and operates, including its various functions, current staffing level, reported cost of operations, and revenue sources; (2) the age of and demand trends for reports added to NTIS's repository; and (3) the extent to which these reports are readily available from other public sources. To determine how NTIS is organized and operates, we reviewed the agency's strategic plan and documentation on its organizational and reporting structure, office staffing levels and assigned responsibilities, and types of products and services offered. We also reviewed cost data contained in the agency's annual financial reports. In addition, we reviewed relevant laws and regulations on NTIS's authority and responsibilities, as well as our previous reports that discussed its mission and operations. We supplemented our analyses with interviews of the Director of NTIS and other relevant agency officials; we also interviewed officials of the Department of Commerce and its National Institute of Standards and Technology (NIST), which have specific reporting relationships with NTIS. We did not include reports added in fiscal year 2012 because our study focused only on those fiscal years that had been completed at the time our study was initiated. To examine the reports added to the repository, we drew a stratified random sample with a total sample size of 384 reports. All of the estimates made with this sample are weighted to reflect the stratified design. NTIS provided us with the full bibliographic data for each document in our sample. To determine the age of reports added to NTIS's repository since fiscal year 1990, we used the year of publication for the reports in our sample to estimate the age range (prior to 1990; 1990-2000; 2001-2004; 2005-2008; and 2009-2011) for all documents added from fiscal year 1990 through fiscal year 2011. To determine the demand trends for reports added to NTIS's repository during fiscal years 1990 through 2011, we requested the sales data from fiscal year 2000 through fiscal year 2011 for the 384 reports in our stratified sample. NTIS provided the distribution data for both direct sales and subscriptions for all of these documents. We then used our sample and the sales data to estimate the extent and prevalence of sales among all reports added to the NTIS repository. Specifically, we used these data to estimate the (1) total number of reports distributed by direct sale and subscription, (2) total number of reports distributed one or more times, (3) percentage of reports distributed relative to the year the reports were published, and (4) percentage of reports distributed relative to their availability elsewhere. 
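The report does not publish GAO's strata definitions, per-stratum sample allocations, or estimation code, so the sketch below is a generic illustration of how a weighted stratified estimate and its 95 percent confidence interval can be computed. The stratum boundaries, allocations, and hit counts shown are hypothetical, chosen only so that they sum to the 384-report sample and the 841,502-report population described above.

```python
# Illustrative sketch (not GAO's actual code): estimating a population share
# from a stratified random sample, weighting each stratum by its share of the
# repository, with a normal-approximation 95 percent confidence interval.
# The strata, sample allocations, and hit counts below are hypothetical.
import math

strata = [
    # (name, stratum population, sampled reports, sampled reports with attribute)
    ("published 2001-2011", 317_246, 150, 140),
    ("published 1990-2000", 444_313, 150, 95),
    ("published pre-1990",   79_943,  84, 40),
]

N = sum(pop for _, pop, _, _ in strata)  # 841,502 reports in the population

p_hat = 0.0   # weighted point estimate of the population share
var = 0.0     # variance of the stratified estimator
for _, pop, n, hits in strata:
    w = pop / N        # stratum weight: stratum share of the population
    p = hits / n       # within-stratum sample proportion
    fpc = 1 - n / pop  # finite population correction
    p_hat += w * p
    var += (w ** 2) * fpc * p * (1 - p) / (n - 1)

half_width = 1.96 * math.sqrt(var)  # 1.96 = normal quantile for 95 percent
print(f"estimated share: {p_hat:.1%} "
      f"(95% CI: {p_hat - half_width:.1%} to {p_hat + half_width:.1%}); "
      f"estimated count: {p_hat * N:,.0f} of {N:,} reports")
```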
The observations were statistically weighted in the estimation process to reflect the stratified sample design that we used.

To determine the extent to which reports in the repository are readily available from other public sources, we first developed a methodology for conducting systematic Internet searches to determine availability elsewhere. More specifically, as part of this methodology, we searched the Internet to determine if each of the reports included in our sample of 384 reports could be found elsewhere and at no cost. Using a tiered approach, we searched the following four sources in the order shown: (1) the issuing organization’s website; (2) the U.S. Government Printing Office’s Federal Digital System website—http://www.gpo.gov/fdsys; (3) a web search conducted using the federal government Internet portal USA.gov—http://www.USA.gov; and (4) a web search conducted using the commercial search engine http://www.Google.com. Specifically, with this methodology, we determined whether each report was first available at no cost on the issuing organization’s website and, if so, concluded the Internet search at this point. However, if the report was not available, then the search continued to the second source, and so on, until either the report was found to be available at one of the remaining sources, or all sources were exhausted. We then used our results to estimate the percentage of the total population of NTIS reports added to the repository during fiscal years 1990 through 2011 that was available from other public sources.

All of the results derived from the sample analyses constituted estimates that are subject to sampling errors. These sampling errors measure the extent to which samples of this size and structure are likely to differ from the population they represent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn.

To determine the reliability of the data provided from NTIS’s repository of reports, we performed basic steps to ensure the data provided were valid, and reviewed relevant information describing the database supporting the repository. We tested for duplicate records, missing values, and out-of-range values in the data received from NTIS. We did not assess the reliability of the system used to maintain these data or the processes used in extracting the data for our engagement purposes.

To determine the reliability of the sales data provided by NTIS, we conducted interviews with agency officials to gain an understanding of the process by which accounts receivable records are added and managed within NTIS’s system of accounts receivable, “CIS.PUB.” Further, we asked cognizant agency officials specific questions to understand the controls in place for ensuring the integrity and reliability of the data contained in CIS.PUB. In addition, we met with NTIS officials to discuss data collected from NTIS and obtained their assertions regarding the data the agency provided. Based on the results of these efforts, we found the data sources to be sufficiently reliable, given the way they are reported herein.
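The tiered Internet search described earlier in this appendix amounts to a short-circuiting loop over an ordered list of sources. The following minimal sketch illustrates that control flow only; the check_source function is a hypothetical placeholder for the manual searches performed during the review, not an actual tool that was used:

```python
from typing import Optional

# Search tiers, in the order used during the review.
SOURCES = [
    "issuing organization's website",
    "GPO Federal Digital System (http://www.gpo.gov/fdsys)",
    "federal Internet portal (http://www.USA.gov)",
    "commercial search engine (http://www.Google.com)",
]

def check_source(report_title: str, source: str) -> bool:
    """Hypothetical placeholder for a manual search of one source.

    Returns True if the report is freely available at that source.
    """
    raise NotImplementedError("stand-in for a manual search step")

def first_available_source(report_title: str) -> Optional[str]:
    """Return the first tier at which the report is freely available.

    The search stops at the first hit; only if all four tiers fail is
    the report counted as not readily available elsewhere.
    """
    for source in SOURCES:
        if check_source(report_title, source):
            return source
    return None
```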
We conducted this performance audit from February 2012 to November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Cynthia Scott (Assistant Director), Carl Barden, Virginia Chanley, Elena Epps, Nancy Glover, Alina J. Johnson, Lee McCracken, Constantine Papanastasiou, David Plocher, Bradley Roach, and Tina Torabi made key contributions to this report.
NTIS was established by statute in 1950 to collect scientific and technical research reports, maintain a bibliographic record and repository of these reports, and disseminate them to the public. NTIS charges fees for its products and services and is required by law to be financially self-sustaining to the fullest extent possible. GAO was mandated by Congress to update its 2001 report on aspects of NTIS's operations and the reports in its collection. Specifically, GAO's objectives were to determine (1) how NTIS is currently organized and operates, including its functions, current staffing level, reported cost of operations, and revenue sources; (2) the age of and demand trends for reports added to NTIS's repository; and (3) the extent to which these reports are readily available from other public sources. To do this, GAO reviewed agency documentation, analyzed a sample of reports added to NTIS's collection from fiscal years 1990 through 2011 (reports from the period since GAO's last study and other older reports), and interviewed relevant agency officials.

As a component of the Department of Commerce, the National Technical Information Service (NTIS) is organized into five primary offices that offer the public and federal agencies a variety of products and services. As of late October 2012, NTIS was supported by 181 staff, all but 6 of whom held full-time positions. NTIS reports its progress toward agency goals to the Deputy Secretary of Commerce, and the Director of NTIS reports to the Director of Commerce's National Institute of Standards and Technology. In addition, NTIS receives oversight of its functions and strategic direction from an advisory board with members appointed by the Secretary of Commerce. NTIS's product and service offerings include, among other things, subscription access to reports contained in its repository in both print and electronic formats, distribution of print-based informational materials to federal agencies' constituents, and digitization and scanning services. NTIS revenues are generated exclusively from direct sales or subscriptions for its products and services. NTIS reported that net revenues from all its functions (products and services) totaled about $1.5 million in fiscal year 2011. However, over most of the last 11 years, its costs have exceeded revenues by an average of about $1.3 million for its products. While NTIS has not recovered all of its costs for products through subscriptions and other fees, it has been able to remain financially self-sustaining because of revenues generated from its services such as distribution and order fulfillment, web hosting, and e-training. The NTIS strategic plan states that the electronic dissemination of government technical information by other federal agencies has contributed to reduced demand for NTIS's products. As a result, the agency is taking steps to reduce its net costs, such as improving business processes and increasing the breadth and depth of its collection.

NTIS's repository has been growing with mostly older reports, but the demand for more recent reports is greater. Specifically, NTIS added approximately 841,500 reports to its repository during fiscal years 1990 through 2011, and approximately 62 percent of these had publication dates of 2000 or earlier. However, the agency was more likely to distribute (by direct sale or through a subscription) reports published more recently.
For example, GAO estimated that 100 percent of the reports published from 2009 through 2011 had been distributed at least once, while only about 21 percent of reports published more than 20 years ago had been.

Of the reports added to NTIS's repository during fiscal years 1990 through 2011, GAO estimates that approximately 74 percent were readily available from other public sources. These reports were often available either from the issuing organization's website, the federal Internet portal (http://www.USA.gov), or from another source located through a web search. Reports published from 1990 to 2011 were more likely to be readily available elsewhere than those published in 1989 or earlier. Further, GAO estimated that 95 percent of the reports available from sources other than NTIS were available free of charge.

NTIS's declining revenue associated with its basic statutory function and its charging for information that is often freely available elsewhere suggest that the fee-based model under which NTIS currently operates for disseminating technical information may no longer be viable or appropriate. GAO is suggesting that Congress reassess the appropriateness and viability of the fee-based model under which NTIS currently operates for disseminating technical information to determine whether the use of this model should be continued. In comments on a draft of this report, the Department of Commerce stated that NTIS believes GAO's conclusions do not fully reflect the value that the agency provides. However, GAO maintains that its conclusions and suggestion to Congress are warranted.
By definition, alien smuggling (sometimes called people smuggling or human smuggling) is transnational in that it involves more than one country and also usually involves persons who have consented to be transported to another country. This activity generally produces short-term profits for the smugglers. That is, after the aliens reach their final destinations, they have no continuing relationship with the smugglers. In legal and diplomatic references, alien smuggling is distinct from human trafficking, although both smuggling and trafficking may have similarities or common elements. In human trafficking, the criminality and human rights abuses—such as coercion for prostitution, labor sweat shops, or other exploitative purposes and servitude arrangements—may continue after the migrants reach the United States in order to produce both short-term and long-term profits. Whereas a trafficked person is a victim, an alien who consents to be smuggled is subject to criminal processing and deportation.

Given the underground nature of alien smuggling, exact figures quantifying the size or scope of this transnational crime are not available. Nonetheless, estimates by the United Nations and the federal law enforcement and intelligence communities indicate that people smuggling is a huge and highly profitable business worldwide, involving billions of dollars annually, and the United States is a major destination country. People smuggling is a continuously growing phenomenon, according to the International Criminal Police Organization (Interpol). The types of smugglers can range from opportunistic business owners who seek cheap labor to well-organized criminal groups that engage in alien smuggling, drug trafficking, and other illegal activities. Partly because of increased border monitoring by governments, Interpol has noted that criminal networks increasingly control the transnational flow of migrants. That is, willing illegal migrants increasingly rely on the services of criminal syndicates that specialize in people smuggling, even though traveling conditions may be inhumane and unsafe.

Alien smuggling generally is prosecuted under section 274 of the Immigration and Nationality Act, which prohibits knowingly or recklessly bringing in, transporting, or harboring certain aliens. Depending on the conduct charged, a conviction under section 274 could result in a maximum penalty of 10 years’ imprisonment per alien smuggled. Moreover, significant enhanced penalties are provided for some section 274 violations that involve serious bodily injury or placing life in jeopardy. If certain violations result in the death of any person, the convicted defendant may be punished by imprisonment for any term of years or be subjected to a death sentence. Other federal criminal statutes may also be applicable. Specifically, alien-smuggling-related offenses are among the list of Racketeer Influenced and Corrupt Organizations predicate offenses (18 U.S.C. § 1961(1)) and also are included within the definition of specified unlawful activity for purposes of the money-laundering statute (18 U.S.C. § 1956). Further, criminal and civil forfeiture statutes may apply to alien-smuggling cases.

Although ICE is a primary DHS component for investigating alien smuggling, combating the smuggling of aliens into the United States can involve numerous federal agencies, as well as the cooperation and assistance of foreign governments.
In addition to ICE, other relevant DHS components are the Border Patrol (a “front-line defender”), which is now part of CBP, and the U.S. Coast Guard, which is tasked with enforcing immigration law at sea. Additionally, significant roles in combating alien smuggling are carried out by Department of Justice components, including the Criminal Division, the Federal Bureau of Investigation (FBI), and U.S. Attorney’s Offices, and Department of the Treasury components, such as the Internal Revenue Service (Criminal Investigation) and the Financial Crimes Enforcement Network (FinCEN). Further, Department of State components have significant roles. For instance, the Bureau of Diplomatic Security—the law enforcement arm of the State Department—is statutorily responsible for protecting the integrity of U.S. travel documents. Perhaps the most coveted and sought-after travel documents in the world are U.S. passports and visas. Alien smuggling and travel document fraud often are inextricably linked.

An interagency coordination mechanism to help ensure that available resources are effectively leveraged is the National Security Council’s Migrant Smuggling and Trafficking Interagency Working Group, which is cochaired by State and Justice. The Interagency Working Group has a targeting subgroup, whose role is to identify for investigation and prosecution the most dangerous international alien smuggling networks, especially those that pose a threat to national security. Another coordination mechanism is the Human Smuggling and Trafficking Center, an interagency entity for disseminating intelligence and other information to address the separate but related issues of alien smuggling, trafficking in persons, and clandestine terrorist travel. Although its establishment was announced in December 2000, the center was not operational until July 2004.

The March 2003 creation of DHS, including its largest investigative component (ICE), ushered in an opportunity for developing a strategy to combat alien smuggling by, among other means, using financial investigative techniques. Two months later, in May 2003, ICE used such techniques to follow the money and prosecute the perpetrators of a smuggling operation that had resulted in the deaths of 19 aliens in Victoria, Texas. The Victoria 19 case has been cited by ICE as representing a new model for fighting alien smuggling—a model that ICE (1) subsequently used to launch a multi-agency task force (Operation ICE Storm) in the Phoenix (Arizona) metropolitan area and (2) reportedly was using to develop ICE’s national “Antismuggling/Human-Trafficking Strategy.”

Although its development was announced as early as June 2003, a national strategy for combating alien smuggling had not been finalized and implemented by ICE as of July 5, 2005. During congressional testimony, an ICE official said ICE was developing a strategy that would address alien smuggling (and human trafficking) at the national and international level because, as in the war on terrorism, the most effective means of addressing these issues is by attacking the problem in source and transit countries to prevent entry into the United States. In the absence of a national strategy to combat alien smuggling, including investigating the money trail, ICE has used various means to provide interim guidance to investigators.
Such guidance included, for instance, the formation of working groups with members from various field offices and disciplines, as well as a presentation at a March 2004 conference of special-agents-in-charge and attachés. Moreover, ICE said it continues to provide guidance to the field in the form of training seminars and managerial conferences. Also, ICE indicated that it has posted guidance and policy memorandums to the field on its Web site, which is available and accessible to agents at their desktops for reference. According to ICE, the Web site is regularly reviewed and updated to ensure that the most recent guidance is available to the field. Additionally, ICE officials said that headquarters staff routinely travel to field offices to review ongoing undercover operations and large-scale investigations to help ensure compliance with existing policies and priorities. ICE officials indicated that the draft strategy was being adjusted to broadly cover all aspects of smuggling—encompassing aliens, as well as drugs and other illegal contraband—and to focus initially on the Southwest border, between the United States and Mexico—the most active area in terms of smuggling activity and open investigations. The officials explained that ICE was developing a comprehensive southwest border strategy, given the anticipated displacement of smuggling activity to other areas along the border resulting from Operation ICE Storm and its expansion statewide under the Arizona Border Control Initiative. The officials explained that criminal enterprises tend to smuggle not only people but also drugs, weapons, counterfeit trade goods, and other illegal contraband. The ICE officials emphasized that irrespective of whether smuggling involves aliens or contraband, ICE can use similar investigative techniques for following the money trail. Moreover, the officials said that, following a certain period of implementation, the Southwest border strategy would be evaluated and expanded into a nationwide strategy. The officials noted, for instance, that although there is no one law enforcement strategy totally effective in all areas of the nation, the methodologies applied in Arizona with both Operation ICE Storm and the Arizona Border Control Initiative would be evaluated and tailored for use in other parts of the country. The strategy’s continuing development period is attributable partly to organizational and training needs associated with integrating the separate and distinct investigative functions of the legacy INS and the U.S. Customs Service, following creation of DHS in March 2003. Also, ICE and CBP— two DHS components with complementary antismuggling missions— signed a memorandum of understanding in November 2004 to address their respective roles and responsibilities, including provisions to ensure proper and timely sharing of information and intelligence. CBP has primary responsibility for interdictions between ports of entry while ICE has primary responsibility for investigations, including those resulting from alien smuggling interdictions referred by CBP. Accordingly, sharing of information between the two components is critical to achieving ICE’s investigative objective of determining how each single violation ties into the larger mosaic of systemic vulnerabilities and organized crime. 
The ability to make such determinations should be enhanced when DHS components have compatible or interoperable information technology systems—which is a long-term goal of an ongoing, multiyear project called the Consolidated Enforcement Environment. Currently, however, there is no mechanism in place for tracking the number and the results of referrals or leads made by CBP to ICE for investigation, including even whether ICE declined to act on the referrals. Without such a mechanism, there may be missed opportunities for identifying and developing cases on large or significant alien-smuggling organizations. For instance, if a tracking mechanism were in place, CBP could continue pursuing certain leads if ICE—for lack of available resources or other reasons—does not take action on the referrals.

The principal federal statute used to prosecute alien smugglers is section 274 of the Immigration and Nationality Act, which prohibits knowingly or recklessly bringing in, transporting, or harboring certain aliens. Under this statute, which is codified at 8 U.S.C. § 1324, about 2,400 criminal defendants were convicted in federal district courts in fiscal year 2004. According to federal officials we interviewed, most alien-smuggling prosecutions stem from reactive or interdiction-type cases at the border, wherein in-depth investigations to follow a money trail are not warranted.

However, during our field visits in September 2004 to Phoenix and Houston, we asked U.S. Attorney’s Office officials for their observations regarding whether there has been an increasing emphasis on the financial aspects of alien-smuggling investigations since the creation of DHS and ICE. In Arizona, federal prosecutors emphasized that Operation ICE Storm is a clear indication of ICE’s efforts to become more proactive in alien-smuggling investigations. Also, federal prosecutors in Texas (Houston) said the money trail is being pursued when appropriate, such as proactive cases involving smuggling organizations that are based in the Far East (e.g., Thailand and certain provinces in the People’s Republic of China) and have networks in Latin America and Mexico. The federal officials noted that investigations of these cases may include FBI participation and the use of undercover agents and electronic surveillance and may result in assets being seized and suspects being charged with money laundering and violations of the Racketeer Influenced and Corrupt Organizations Act.

More recently, in December 2004, ICE headquarters officials told us that ongoing alien-smuggling cases in other areas of the nation—Florida, Georgia, New York, and Washington—were also using financial investigative techniques and are expected to result in asset seizures. Because these cases were ongoing, the officials declined to provide specific details, other than information already made available to the public. For fiscal year 2004, ICE reported seizures totaling $7.3 million from its alien-smuggling investigations—plus an additional $5.3 million generated by the state of Arizona under Operation ICE Storm.
To obtain additional perspectives on the results of alien-smuggling investigations in terms of recovered funds or seized assets, we contacted Treasury’s Executive Office for Asset Forfeiture, which provides management oversight of the Treasury Forfeiture Fund—the receipt account for the deposit of nontax forfeitures made pursuant to laws enforced or administered by the Internal Revenue Service-Criminal Investigation and DHS components (including ICE, CBP, the U.S. Secret Service, and the U.S. Coast Guard). The Treasury officials told us they anticipate that ICE will have increased seizures in fiscal year 2005 or later, as ICE further applies its financial and money-laundering expertise to address alien smuggling. Similarly, ICE officials anticipate increased seizures. In this regard, for the first 6 months of fiscal year 2005, ICE reported seizures of $7.8 million from alien-smuggling investigations.

As mentioned previously, alien smuggling globally generates billions of dollars in illicit revenues annually, according to some estimates. How much of the total involves aliens smuggled into the United States is not known, although the United States is often a primary destination country. Also, according to ICE officials, much of the U.S.-related smuggling revenues either may not be paid in this country or, if paid here, may be transported or transmitted abroad quickly. As such, federal efforts to combat alien smuggling by following the money trail frequently may present investigators and prosecutors with opportunities and challenges related to identifying and seizing funds or assets not located in the United States.

To help investigators and prosecutors meet the opportunities and challenges associated with transnational crime, the United States has negotiated and signed more than 50 bilateral mutual legal assistance treaties (MLAT) with law enforcement partners around the world, according to the Department of Justice. Such treaties—which are a mechanism for obtaining evidence in a form admissible in a prosecution—provide for a broad range of cooperation in criminal matters, such as locating or identifying persons, taking testimonies and statements, obtaining bank and business records, and assisting in proceedings related to immobilization and forfeiture of assets.

To get a sense of the extent to which federal law enforcement agencies were using the MLAT process to follow the money trail abroad in alien smuggling cases, we contacted Justice’s Office of International Affairs, which is responsible for coordinating the gathering of international evidence and, in concert with the State Department, engages in the negotiation of new MLATs. According to the Deputy Director, the number of outgoing requests for formal law enforcement assistance in alien-smuggling cases is small in comparison with cases in drug trafficking, money laundering, fraud, and various other offenses. For matters considered to be alien-smuggling cases, the Deputy Director noted that it would be very difficult to quantify the exact number of requests made to foreign countries because, among other reasons, the Office of International Affairs’ database was not originally designed to include a category of “alien smuggling.” Also, we asked ICE headquarters for information regarding use of MLAT requests made in attempts to follow the money trail on alien-smuggling investigations that have extended overseas.
That is, we asked how many MLAT requests were made in fiscal years 2003 and 2004, to which countries, and what the results have been in terms of assets tracked or seized. ICE’s Office of Investigations’ Asset Forfeiture Unit responded that it had no way of determining the number of MLAT requests. ICE officials noted, however, that none of ICE’s reported seizures from alien-smuggling cases in fiscal year 2004 ($7.3 million) and the first 6 months of fiscal year 2005 ($7.8 million) were made abroad.

Generally, regarding asset seizures and forfeitures, ICE officials noted that there can be competing demands for investigative resources. The mission of ICE’s Office of Investigations—which has more than 5,000 agents in 26 field offices nationwide—encompasses a broad array of national security, financial, and smuggling violations, including narcotics smuggling, financial crimes, illegal arms exports, commercial fraud, child pornography or exploitation, immigration fraud, and human trafficking. ICE headquarters officials cautioned that alien-smuggling cases, in comparison with drug cases, are much less likely to result in seizures of money. The officials explained that almost all drug deals are conducted in cash, and it is not unusual for law enforcement to arrest criminals handling hundreds of thousands or even millions of dollars in drug money. In contrast, the officials noted that alien-smuggling fees per person generally involve less money and the alien smuggler is not arrested with large cash amounts. However, notwithstanding the significant differences in amounts of seized money or other assets, ICE headquarters and field office officials stressed the importance and utility of applying investigative expertise for determining the scope and operational patterns of alien-smuggling organizations, identifying the principals, and obtaining evidence to build prosecutable cases.

Both criminal and civil forfeiture authorities have limitations that affect the government’s ability to seize real property in alien smuggling cases—particularly stash houses used by smugglers. Asset forfeiture law has long been used by federal prosecutors and law enforcement as a tool for punishing criminals and preventing the use of property for further illegal activity. In a criminal forfeiture action, upon conviction, the defendant forfeits and the government takes ownership of property that the defendant used to commit or facilitate the offense or property that constituted the proceeds of the illegal activity.

Criminal asset forfeiture is rarely an option in alien-smuggling cases for two reasons. First, because criminal asset forfeiture is dependent on conviction of the defendant, it is not available if the defendant is a fugitive, which alien smugglers often are, according to Justice. Second, because the stash house is often rental property, it is rare that the property owner is convicted, as it is difficult to establish the owner’s knowledge of the smuggling.

In contrast to criminal forfeiture, in a civil forfeiture action, the government is not required to charge the owner of the property with a federal offense. However, to forfeit property used to facilitate the offense but purchased with legitimately earned funds, the government must establish a substantial connection between the use of the property and the offense. Once that connection is established, the government can forfeit the house if the owner cannot show innocent ownership due to the owner’s willful blindness to the criminal activity.
However, taking civil action as an alternative to criminal action for real property seizures is not an option in alien smuggling cases. Civil forfeiture in alien smuggling cases is generally limited to personal property such as vessels, vehicles, and aircraft and does not extend to real property. Thus, the house used to hide the aliens and conduct the alien-smuggling business could not be forfeited in a civil forfeiture action. By comparison, civil forfeiture of real property is available in cases where the house was used to conduct drug transactions (including the storing of drugs and money), child pornography offenses, or money laundering. In the view of Justice and ICE, this statutory distinction between alien smuggling and other criminal offenses is inappropriate.

An amendment to the civil forfeiture authority, according to Justice, would enhance federal efforts to dismantle smuggling organizations because would-be defendants often are fugitives, which makes criminal forfeiture unavailable. Also, a civil forfeiture authority for real property used to facilitate alien smuggling would enable the government to establish willful blindness arguments against landlords who hope to profit from such ventures without becoming directly involved. However, our May 2005 report noted that Justice does not have a legislative proposal on this subject pending before Congress because the department’s legislative policy resources have been focused on other priorities.

Expanding civil forfeiture authority in alien smuggling cases to include real property used to facilitate the offense may raise concerns, including the potential for abuse of this type of forfeiture and the adequacy of protection for the rights of innocent property owners. In 2000, several reforms were made to civil asset forfeiture law to provide procedural protections for innocent property owners. These reforms were part of a compromise that was developed over several years by Congress, the executive branch, and interest groups. Some observers felt that the legislation did not provide enough reforms and protections, while others felt that it went too far and would curtail a legitimate law enforcement tool.

Creation of DHS in March 2003 has provided new opportunities to more effectively combat alien smuggling, particularly in reference to using financial investigative techniques to target and seize the monetary assets of smuggling organizations. However, after more than 2 years, the federal response to alien smuggling is still evolving, including development and implementation of a strategy to follow the money trail. Also evolving is the working relationship of ICE and CBP, two DHS components that have the primary responsibility for investigating and interdicting alien smugglers. Having clearly defined roles and responsibilities for these components is important, given their complementary antismuggling missions. In this regard, ICE’s and CBP’s November 2004 memorandum of understanding did not address a mechanism for tracking the number and the results of leads referred by CBP to ICE for investigation. If a tracking mechanism were in place, CBP could continue pursuing certain leads if ICE—for lack of available resources or other reasons—does not take action on the referrals. As such, a tracking mechanism would help to further ensure that large or significant alien-smuggling organizations are identified and investigated.
Federal law enforcement has concerns that efforts to dismantle alien-smuggling organizations are constrained by the current absence of civil forfeiture authority for real property used to facilitate the smuggling of aliens. In contrast, for drug trafficking and various other criminal offense categories, civil forfeiture authority is available for seizing real property used to facilitate these crimes. According to Justice and ICE, the absence of civil forfeiture authority for real property used to facilitate the smuggling of aliens is inappropriate because law enforcement is unable in many cases to seize stash houses where smugglers hide aliens while awaiting payment and travel arrangements to final destinations throughout the nation.

To enhance the federal response to alien smuggling, our May 2005 report made two recommendations. Specifically, we recommended that the Secretary of Homeland Security establish a cost-effective mechanism for tracking the number and results of referrals by CBP to ICE, and that the Attorney General, in collaboration with the Secretary of Homeland Security, consider developing and submitting to Congress a legislative proposal, with appropriate justification, for amending the civil forfeiture authority for real property used to facilitate the smuggling of aliens. DHS and Justice expressed agreement with their respective recommendations. DHS said CBP and ICE, in consultation with Border and Transportation Security, would work together to identify and implement a solution to address our recommendation. Justice said it plans to move forward with a proposal as GAO recommended.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have.

For information about this testimony, please contact Richard Stana, Director, Homeland Security and Justice Issues, at (202) 512-8777, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other individuals making key contributions to this testimony include Danny Burton, Grace Coleman, Frances Cook, Odilon Cuero, and Kathleen Ebert.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Globally, alien smuggling generates billions of dollars in illicit revenues annually and poses a threat to the nation's security. Creation of the Department of Homeland Security (DHS) in March 2003 has provided an opportunity to use financial investigative techniques to combat alien smugglers by targeting and seizing their monetary assets. For instance, the composition of DHS's largest investigative component--U.S. Immigration and Customs Enforcement (ICE)--includes the legacy Customs Service, which has extensive experience with money laundering and other financial crimes. Another DHS component, U.S. Customs and Border Protection (CBP), has primary responsibility for interdictions between ports of entry. In summer 2003, ICE announced that it was developing a national strategy for combating alien smuggling. This testimony is based on GAO's May 2005 report on the implementation status of the strategy and investigative results in terms of convictions and seized assets.

As of July 5, 2005, ICE had not finalized its strategy for combating alien smuggling. ICE was adjusting the draft strategy to focus on the southwest border and encompass all aspects of smuggling, aliens as well as drugs and other contraband. In adjusting the strategy, ICE officials stressed the importance of incorporating lessons learned from ongoing follow-the-money approaches such as Operation ICE Storm, a multi-agency task force launched in October 2003 to crack down on migrant smuggling and related violence in Arizona. Also, the strategy's effectiveness depends partly on having clearly defined roles and responsibilities for ICE and CBP, two DHS components that have complementary antismuggling missions. CBP is primarily responsible for interdictions between ports of entry and ICE for investigations that extend to the U.S. interior. In this regard, ICE and CBP signed a memorandum of understanding in November 2004 to address their respective roles and responsibilities, including provisions for sharing information and intelligence. Currently, however, there is no mechanism in place for tracking the number and the results of referrals made by CBP to ICE for investigation. CBP and ICE officials acknowledged that establishing a tracking mechanism could have benefits for both DHS components. Such a mechanism would help ICE ensure that appropriate action is taken on the referrals. Also, CBP could continue to pursue certain leads if ICE--for lack of available resources or other reasons--cannot take action on the referrals.

In fiscal year 2004, about 2,400 criminal defendants were convicted in federal district courts under the primary alien-smuggling statute, and ICE reported seizures totaling $7.3 million from its alien-smuggling investigations. For the first 6 months of fiscal year 2005, ICE reported $7.8 million in seizures from alien-smuggling investigations. A concern raised by ICE and the Department of Justice is the lack of adequate statutory civil forfeiture authority for seizing real property, such as "stash" houses where smugglers hide aliens while awaiting payment and travel arrangements to final destinations throughout the nation. However, Justice does not have a legislative proposal on this subject pending before Congress because the department's legislative policy resources have been focused on other priorities.
The Air Force and the Navy budget and spend billions annually to procure and repair aviation spare parts. For example, for fiscal year 1997, the Navy budgeted $1.4 billion for this purpose. For fiscal year 1996, the Air Force budgeted $3.9 billion to procure and repair aviation spare parts. The Air Force’s F-100 engines used on F-15 and F-16 aircraft and the Navy’s F-404 engines used on F/A-18 aircraft account for a sizable portion of the procurement and repair budgets and expenditures for aviation spare parts. Both services use automated systems to compute requirements and to prepare their annual budgets for aviation spare parts. The systems base the computations on past usage, acquisition lead times, flying hour programs, maintenance replacement factors, and additional special needs. Requirements are then offset by the assets on hand and on order to arrive at the amounts needed. Although Air Force and Navy policies and procedures related to reserving on-hand assets for depot maintenance requirements differ, both agencies’ policies and procedures result in overstated requirements. Our review of overall budget inventory data related to these assets and our sampling tests of F-100 and F-404 engine parts showed that the Air Force and the Navy overstated budgeted buys and repairs by about $132 million. This overstatement occurred because of questionable Air Force and Navy policies concerning the determination of requirements and the accountability for assets held in reserve to satisfy depot maintenance needs. Since 1984, Air Force policy has been to reserve on-hand consumable parts for depot maintenance needs and not to use these assets to offset computed requirements when deciding to buy or projecting annual budgeted buys. This Air Force policy is unlike the Navy’s, which does require that assets held for depot level maintenance needs be applied to computed requirements. The Congress has made several attempts to change the Air Force’s policy. In response to our 1989 report, the House Committee on Armed Services directed the Air Force to consider depot supply level assets in its requirements and budget computations. In 1992, we reported that the Air Force continued to exclude depot supply level assets from its requirements and budget computations. As a result, the Congress reduced the Air Force’s operation and maintenance budget for fiscal year 1994. Despite these efforts, the Air Force continues its policy of not considering depot supply level assets in requirements and budget computations. Our analysis of overall inventory data for fiscal year 1995 showed that the Air Force overstated fiscal year 1996 budgeted requirements by $72 million because assets reserved for depot maintenance were not applied to budgeted buy requirements. Our sampling test of 22 F-100 engine parts for which there were actual and budgeted buys also showed that the Air Force continues to exclude depot supply level assets from its periodic requirement and annual budget computations. Of 22 sample items, 10 had depot supply level assets valued at about $1.8 million that the Air Force did not apply to offset recurring depot level maintenance requirements in the periodic requirements and annual budget computations. Of the 10 items, 3 had current buys costing about $2.7 million, which could have been reduced by about $366,000 if depot supply level assets had been applied to offset requirements. 
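The offset computation at the center of this finding can be expressed in a few lines. The following minimal sketch uses hypothetical quantities (none drawn from the sampled items) to show how excluding assets reserved for depot maintenance inflates a budgeted buy; the duct segment example that follows shows the same effect with actual program figures.

```python
# Sketch of a net buy computation; all quantities are hypothetical.
# Requirements are computed from past usage, lead times, flying hour
# programs, and replacement factors, then offset by available assets.

gross_requirement = 5000      # computed wholesale requirement, in units
on_hand = 1200                # serviceable assets on hand
on_order = 800                # assets already on order
depot_supply_level = 2500     # assets reserved for depot maintenance

# Air Force practice described in this report: depot supply level
# assets are excluded from the offset, so the buy is larger.
buy_excluding_depot = max(0, gross_requirement - on_hand - on_order)

# Practice consistent with our reading of DOD Materiel Management
# Regulation 4140.1-R: all retail and wholesale assets, including
# those reserved for depot maintenance, offset the requirement.
buy_including_depot = max(
    0, gross_requirement - on_hand - on_order - depot_supply_level
)

print(buy_excluding_depot)  # 3000 units would be budgeted
print(buy_including_depot)  # 500 units would be budgeted
```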
For example, in September 1994, the San Antonio Air Logistics Center computed an initial buy quantity of 31,420 F-100 engine duct segments (NSN 2840-01-270-7659PT) costing about $2.8 million. In finalizing the buy computation, the Center made changes, lowering the buy to 2,868 items costing about $307,000. However, the computation did not consider 3,680 depot supply level assets that were available to offset requirements. If these assets had been applied to offset requirements, this procurement would not have been necessary. Similarly, the Center overstated budget requirements by not applying these depot supply level assets.

According to Department of Defense (DOD) Materiel Management Regulation 4140.1-R, dated January 1993, the inventory managers, for the purpose of limiting buys and repairs, shall apply all retail and wholesale assets against wholesale requirements. Nevertheless, DOD’s and the Air Force’s position is that depot supply level assets are set aside for depot maintenance and, therefore, are not considered to offset wholesale requirements. We do not agree with this position because depot supply level assets are a part of the wholesale inventory. They have not been issued from wholesale storage and transferred to the depot maintenance activities. Further, because wholesale requirements are based on past recurring demands, it is reasonable to expect that assets procured to meet these demands should be considered when making future procurement decisions.

Unlike the Air Force’s policies and procedures, the Navy’s require that assets reserved for depot maintenance needs be applied to computed requirements. However, we found that some Navy requirements are duplicated, resulting in overstated requirements. On the basis of our review of overall fiscal year 1995 budget data for aviation parts and our sampling test of 12 F-404 engine parts, we found that the Navy overstated fiscal year 1997 stock fund budgets by at least $60 million. This occurred because the Navy included reserve level depot maintenance requirements in periodic requirements and annual budget computations twice. These reserve levels are included once as recurring demands based on past depot maintenance usage and again in a planned program requirements category that is not based on recurring demands.

For example, in May 1995, the Aviation Supply Office budgeted a fiscal year 1997 buy for 4,734 F-404 nozzle segments (NSN 2840-01-166-4886TN) costing about $7.8 million. We found that the budgeted buy requirement was overstated by 1,008 units, valued at about $1.7 million, because this requirement was included twice. It was included as a separate, identifiable nondemand-based requirement and again as part of the recurring demand-based requirements. Aviation Supply Office officials told us that the apparent duplication of requirements in the fiscal year 1997 aviation parts budget was offset by the application of assets reserved for depot maintenance to the recurring demand requirements. We disagree that the duplication of requirements is entirely offset by the application of these assets because the requirements are still incorrectly included as both recurring and nonrecurring demand requirements, but the assets are only applied once.

We reviewed a sample of 34 F-100 and F-404 engine parts for which the Air Force and the Navy projected high-dollar buys or repairs in fiscal year 1995.
We identified inaccuracies in the periodic requirement or budget computations for 22 items (64 percent of the sample items) that resulted in under- or overstated requirements valued at $35 million. These inaccuracies were due to the use, in requirement computations, of unsupported or incorrect (1) maintenance replacement rates, (2) demand rates, (3) planned program requirements, (4) due-out quantities, (5) lead times, (6) repair costs, and (7) asset quantities on hand and on order.

We reviewed 22 F-100 engine consumable parts and found inaccuracies in the Air Force’s computations for 12 items. The inaccuracies caused the fiscal year 1995 budget requirements to be understated by about $2 million on some items and overstated by about $10 million on others. The inaccuracies occurred because inventory managers used incorrect requirement and asset information or did not make necessary changes when updating budget requirement computations. The inaccurate information included incorrect (1) lead times, (2) due-out quantities, and (3) asset quantities on hand and on order.

For example, in September 1994 the San Antonio Air Logistics Center computed an initial buy quantity of 756 F-100 engine ring assemblies (NSN 2840-01-327-2917PT). In finalizing the buy computation, the Center made changes to reflect updated information that decreased lead time and due-out requirements and increased on-hand and on-order assets. As a result, the computation changed from a 756 buy to a zero buy. Changes made on buy computations also affect budget requirement projections. However, in this case the Center did not make these changes in the final budget requirements computation. As a result, budget requirements were overstated by $4.3 million.

In another example, the San Antonio Air Logistics Center (in September 1994) computed an initial buy quantity of 21,524 F-100 engine stage compressor blades (NSN 2840-00-371-2217PT). In finalizing the buy computation, the Center made changes to reflect updated information that decreased lead time and due-out requirements and increased on-hand and on-order assets. As a result, the computation changed from a 21,524 buy to a zero buy. However, the changes were not reflected in the final budget requirements computation. As a result, budget requirements were overstated by $1.1 million.

Our review identified a need to strengthen existing procedures and practices for management level review and validation of budget requirement computations. Air Force Materiel Command Regulation 57-6, dated January 29, 1993, assigns primary responsibility for the accuracy and integrity of consumable item requirements to Air Logistics Center management. However, the regulation allows management personnel at the centers to delegate authority to lower level analysts to carry out certain quality review and control functions. We found that periodic requirements and annual budget computations for the 22 sample items generally were signed off at the supervisor level. However, this level of review is not ensuring that necessary requirement changes are reflected in the budget requirement computations.

We reviewed 12 F-404 engine parts and found inaccuracies in the Navy’s computations for 10 items. The inaccuracies caused buys and repairs to be understated by about $8 million on some items and overstated by about $15 million on others. These inaccuracies included unsupported or incorrect (1) maintenance replacement rates, (2) demand rates, (3) planned program requirements, (4) repair costs, and (5) lead times.
For example, in March 1995, the Aviation Supply Office computed a repair requirement for 328 F-404 engine compressor rotor assemblies (NSN 2840-01-288-1767) costing $26.6 million. The computation overstated repair requirements by 76 units, valued at about $6.1 million, because an incorrect maintenance demand rate and an erroneous parts application were used. We could find no data supporting the maintenance demand rate used. The Office provided data that showed a lower demand rate should have been used. Also, the data indicated that the rotor assembly was applicable only to one type of fan and not to a second fan, which also was included in the computation.

In another case, in May 1995, the Aviation Supply Office budgeted fiscal year 1997 funds for the repair of 554 F-404 engine high-pressure rotors (NSN 2840-01-201-1357) costing $19.1 million. The budgeted repair cost was understated by $7.2 million because an outdated unit repair cost was used. The Office used a unit repair cost of $34,479, but the latest negotiated unit repair cost was $47,577.

Our review identified a need to strengthen existing procedures and practices for management level review and validation of requirement and budget computations. For example, we noted that repair computations were not receiving higher management level review and approval. These computations contained a large portion of the inaccuracies identified.

We recommend that the Secretary of Defense direct the Secretary of the Air Force to revise buy and budget requirement computation policies and procedures to require that on-hand assets reserved for depot maintenance needs be considered in periodic requirement and annual budget computations and strengthen management oversight procedures and internal controls to ensure that key elements (such as on-hand and due-out quantities and lead times) of requirement and budget computations are accurate. We also recommend that the Secretary of Defense direct the Secretary of the Navy to revise policies and procedures for buy and budget requirement computations to eliminate duplication of depot maintenance requirements and strengthen management oversight procedures and internal controls to ensure that key elements of requirement and budget computations are accurate.

DOD agreed that action should be taken to improve the accuracy of requirement determination processes and stated that the Air Force and the Navy are taking such actions (see app. I for DOD’s complete comments). The Air Force is issuing a new instruction that will establish levels of management review depending on the dollar value of the requirement actions. This instruction is expected to provide a stronger management overview that will ensure that key elements of the requirements computation are more accurately maintained. The Navy is implementing an automated system to improve data element validation. The system will provide an on-line checkoff list of key data elements for the item manager to validate when making decisions on requirements execution and budget development.

DOD did not agree that current Air Force and Navy procedures related to reserving on-hand assets for depot maintenance resulted in overstated requirements. With regard to the Air Force, DOD stated that if assets were applied to maintenance requirements, as we believe they should be, those assets would not be available to meet other requirements.
DOD also stated the issue is becoming moot because wholesale management of nearly all Air Force consumable items is being transferred to the Defense Logistics Agency. We continue to disagree with the DOD position because wholesale requirements include depot maintenance needs that are based on past recurring demands. We believe that using reserved assets to offset wholesale requirements when making procurement decisions would constitute reasonable inventory management and would save money. As for the transfer of consumable item management to the Defense Logistics Agency, this transfer is not scheduled to be completed until late 1997. Once the transfer is made, the Defense Logistics Agency must ensure that the Air Force pays for assets when they are received at the depots. Otherwise, the Air Force may continue to reserve assets for depot maintenance, thereby precluding the Defense Logistics Agency from considering them when making procurement decisions.

With regard to the Navy, DOD stated that both planned program and recurring demand requirements are needed to provide sufficient supply support, but do not result in overstated requirements. However, DOD acknowledged that, in some situations, depot demands are considered twice. We believe that DOD is wrong in stating that this duplication does not result in overstated requirements. Some of the demands to satisfy depot maintenance needs are included once as recurring demands based on past usage and again as nonrecurring demands to meet planned program requirements. The Navy needs to eliminate this duplication to improve the accuracy of procurement and budget requirement computations and to save money.

We reviewed Air Force and Navy policies and procedures relating to periodic requirement and annual budget computations for aviation spare parts. We discussed the rationale for current policies and procedures with officials of the Air Force’s San Antonio Air Logistics Center and the Navy’s Aviation Supply Office. At the San Antonio Air Logistics Center, we reviewed 22 consumable F-100 aircraft engine parts for which the Center projected high-dollar buys for fiscal year 1995. At the Aviation Supply Office, we reviewed 12 consumable and reparable F-404 aircraft engine parts for which the Office projected high-dollar buys or repairs for fiscal year 1995. At both locations, we evaluated periodic requirement and annual budget computations. We analyzed related supporting documentation on which these buy or repair projections were based and discussed the computations with inventory managers and their supervisors. We obtained and reviewed fiscal years 1995 and 1996 buy and repair budgets for the Air Force’s aviation spare parts. We obtained and reviewed fiscal years 1995 and 1997 buy and repair budgets for the Navy’s aviation spare parts. We also obtained and analyzed Air Force and Navy reserve depot maintenance asset totals for fiscal year 1995. We performed our review between March and November 1995 in accordance with generally accepted government auditing standards.

The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report. A written statement also must be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Navy and the Air Force; and the Director, Office of Management and Budget. Please contact me at (202) 512-5140 if you have any questions. The major contributors to this report are listed in appendix II.

The following are GAO’s comments on the Department of Defense’s (DOD) letter dated February 13, 1996.

1. We have decreased the amount of assets reserved for depot maintenance needs from $226 million to $132 million. This reflects a reduction in the Navy’s assets from at least $154 million to at least $60 million. We made this reduction because more current information provided by the Aviation Supply Office indicates that the issuance of some reparable reserve assets does not duplicate requirements. These issues do not register as recurring demands in the wholesale supply system.

2. We deleted this recommendation from the final report. Subsequent to the completion of our fieldwork, the Aviation Supply Office furnished us an instruction outlining procedures for management review and approval of buy and repair computations. In reviewing the repair computations, we found that these procedures were not being followed in that the repair computation documents did not show evidence of management level review and approval. Implementation of our recommendation to strengthen management oversight procedures and internal controls should help eliminate this problem.

Major contributors to this report: Calvin Phillips, Enrique E. Olivares, Bonifacio Roldan-Galarza, Richard Madson, and Donald McCuistion.
GAO reviewed the Air Force's and Navy's policies and procedures for procuring aviation spare parts, focusing on whether their requirements and budgets reflect the amounts they actually need. GAO found that: (1) the Air Force and Navy budgeted $132 million more than needed for aviation spare parts because they used questionable policies to determine their requirements and assign accountability for depot maintenance assets; (2) the Air Force did not include $72 million of its on-hand assets in preparing its fiscal year (FY) 1996 budget request; (3) the Navy twice counted $60 million in depot maintenance requirements when preparing its FY 1997 budget request; (4) Air Force and Navy computation errors were a result of unsupported and incorrect maintenance replacement rates, demand rates, planned program requirements, repair costs, lead times, due-out quantities, and asset quantities on hand and on order; and (5) errors found in the sample items reviewed totaled $35 million and resulted in some requirements being overstated by as much as $25 million and some being understated by $10 million.
Ephedra, the most widely used ingredient in dietary supplements for weight loss, is a powerful stimulant that can affect the nervous and cardiovascular systems. Adverse events among consumers of dietary supplements containing ephedra have been described in scientific literature and in detailed adverse event reports. Because of concerns about the risks of ephedra, medical organizations, states, and athletic associations have sought to reduce the use of dietary supplements containing ephedra.

Under DSHEA, FDA regulates dietary supplements, including vitamins, minerals, herbs and other botanicals, amino acids, certain other dietary substances, and derivatives of these items. DSHEA requires that dietary supplement labels include a complete list of ingredients and the amount of each ingredient in the product. Dietary supplements may not contain synthetic active ingredients that are sold in over-the-counter drugs and prescription medications, and they cannot be promoted as a treatment, prevention, or cure for a specific disease or condition. Under DSHEA, manufacturers are responsible for ensuring the safety of the dietary supplements they sell. Dietary supplements do not need approval from FDA before they are marketed; thus, FDA generally addresses safety concerns only after products reach the market. DSHEA does not require manufacturers to register with FDA, identify the products they manufacture, or provide reports of adverse events to FDA. The mechanisms that FDA uses to oversee dietary supplements differ from those it uses for the other products it regulates (see app. I for more details).

Because manufacturers of dietary supplements are not required to provide reports of adverse events to FDA, the agency relies on voluntary postmarket reporting of adverse events to better understand the safety of dietary supplements. Some individual adverse event reports are especially valuable to FDA because they include enough information to help FDA determine whether the adverse event was likely caused by the supplement. These reports include information about the receipt of medical care, health care professionals' attribution of adverse events to the consumption of dietary supplements, the consumer's appropriate use of the products, the consumer's use of other products, underlying health conditions and other alternative explanations for the adverse event, and the consistency of symptoms with the documented effects of the dietary supplement.

FDA, through the Department of Justice, can take enforcement action in court to remove adulterated dietary supplements from the market. A dietary supplement is considered adulterated under a number of circumstances, including when it presents a "significant or unreasonable risk of illness or injury" under the conditions of use recommended or suggested in its labeling (or, if the labeling contains no such suggestions or recommendations, under ordinary conditions of use), or when it bears or contains any "poisonous or deleterious substance" that may render it injurious to health under the conditions of use recommended or suggested in its labeling. Instead of going to court, FDA may choose to take administrative action to prohibit the sale of dietary supplements it considers to be adulterated. FDA can promulgate a regulation declaring a particular dietary supplement to be adulterated, although it has never taken this action with any dietary supplement. FDA can also issue an advisory letter explaining why it considers the dietary supplement to be adulterated.
The advisory letter provides guidance to the industry regarding FDA's opinion and notifies the public that FDA may take legal action against firms or individuals that do not follow the letter's advice. FDA has done this for two dietary supplement ingredients, comfrey and aristolochic acid. In addition, although this authority has never been exercised, the Secretary of Health and Human Services (HHS) may declare that a dietary supplement is adulterated because it poses an "imminent hazard" to public health or safety. If the Secretary does so, he or she must then initiate an administrative hearing to affirm or withdraw the declaration.

Ephedra has been associated with numerous adverse health effects. As we previously reported, case reports and scientific literature have suggested that ephedrine alkaloids can increase blood pressure in those with normal blood pressure, predispose certain individuals to rapid heart rate, and cause stroke, among other things. We also reported descriptions of adverse events associated with ephedrine alkaloids that affected the central nervous system, such as seizures, mania, and paranoid psychoses. FDA has received reports of adverse events associated with dietary supplements containing ephedra, including heart attack, stroke, seizure, psychosis, and death, that are consistent with the scientific literature.

In February 2003, the RAND Corporation released a review of the scientific evidence on the safety and efficacy of dietary supplements containing ephedra and concluded that a sufficient number of cases of these same types of events had occurred in young adults to warrant further scientific study of the causal relationship between ephedra and these serious adverse events. RAND also found that use of ephedra, or of ephedrine plus caffeine, is associated with a number of other adverse effects, including an increased risk of nausea, vomiting, heart palpitations, and psychiatric symptoms such as anxiety and change in mood.

Because of these health concerns, many organizations and jurisdictions have taken actions aimed at reducing the use of dietary supplements containing ephedra. The American Medical Association and the American Heart Association have urged FDA to ban the sale of dietary supplements containing ephedra. In January 2002, Health Canada issued a Health Advisory urging Canadians not to use certain products containing ephedra, especially those that also contain caffeine and other stimulants. In 2003, Illinois banned the sale of products containing ephedra, and other states have similar bans under consideration. In addition, some states have banned the sale of such products to minors or required label warnings. Several sports organizations, including the NCAA, the National Football League, the U.S. Olympic Committee, and the International Olympic Committee, have banned the use of ephedra by their athletes. In 2003, General Nutrition Centers, the nation's largest specialty retailer of nutritional supplements, discontinued the sale of products containing ephedra, as have three other major retail outlets. Some manufacturers have stopped producing dietary supplements containing ephedra, while others continue to offer such supplements alongside similar products that are ephedra-free.
Using the adverse event reports it has received and evidence from the scientific literature, FDA has concluded that dietary supplements containing ephedra pose a "significant public health hazard." FDA has received more reports of adverse events for dietary supplements containing ephedra than for any other dietary supplement ingredient, and others, including poison control centers and one manufacturer, Metabolife International, have also received thousands of reports of adverse events associated with these supplements.

From February 22, 1993, through July 14, 2003, FDA received 2,277 reports of adverse events associated with dietary supplements containing ephedra, which was 15 times more reports than it received for the next most commonly reported herbal dietary supplement, St. John's wort. Other organizations also have received a large number of adverse event reports for dietary supplements containing ephedra. In 2002 alone, the American Association of Poison Control Centers received 1,428 reports of adverse events associated with dietary supplements containing ephedra, either alone or in combination with other botanical dietary supplement ingredients, nearly two-thirds as many as FDA received over a 10-year period. The centers noted that there were more reports of adverse events for ephedra-containing dietary supplements than for others. Further, as we reported in March 2003, Metabolife International had 14,684 health-related call records that contained reports of adverse events associated with its product, Metabolife 356, from May 1997 through July 2002. Neither the American Association of Poison Control Centers nor Metabolife International is required to report these adverse events to FDA.

From the adverse event reports it has received and the scientific literature it has reviewed, FDA concluded in March 2000 that dietary supplements containing ephedra pose a significant public health hazard that primarily involves consumers who are young to middle-aged and can result in adverse cardiovascular and nervous system effects. It further concluded that many of the adverse events were serious, resulting in morbidity and mortality that would not be expected in a young population and that could further compromise the health of more vulnerable older adults or those with underlying conditions.

A study commissioned by FDA estimated that the agency receives reports for less than 1 percent of adverse events associated with dietary supplements. Although causality cannot be determined from individual adverse event reports, FDA uses these reports to identify possible risks to consumers from dietary supplements. As we have previously reported, there are well-known weaknesses in the current system of voluntary reporting of adverse events, such as differing interpretations of what constitutes an adverse event, underreporting, difficulties estimating population exposure, and poor report quality. Despite these limitations, FDA maintains that even isolated reports can be definitive in associating products with an adverse effect if the report contains sufficient evidence, such as supporting medical documents, a temporal relationship between the product and effect, and evidence of dechallenge and rechallenge.
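The comparisons above rest on simple ratios, which the following illustrative Python snippet recomputes from the figures cited in this testimony. The St. John's wort total is back-derived from the stated 15-to-1 ratio and is therefore an approximation, not a figure from FDA.

```python
# Illustrative arithmetic check of the report counts cited above; the
# St. John's wort figure is inferred from the stated 15:1 ratio.
fda_ephedra_reports = 2277        # received Feb. 22, 1993 - July 14, 2003
poison_center_reports_2002 = 1428
st_johns_wort_reports = round(fda_ephedra_reports / 15)  # roughly 152

ratio = fda_ephedra_reports / st_johns_wort_reports
share = poison_center_reports_2002 / fda_ephedra_reports
print(f"ephedra vs. next herbal ingredient: {ratio:.1f} to 1")
print(f"one year of poison-center reports vs. FDA's 10-year total: {share:.0%}")
# prints roughly "15.0 to 1" and "63%", i.e., nearly two-thirds
```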
The types of adverse events that we identified in the Metabolife International call records are consistent with the types of adverse events reported to FDA and with the documented physiological effects of ephedra. As we recently reported, most of the Metabolife International call records contained limited information about the event and the consumer. Nonetheless, the call records contribute to existing knowledge about adverse events that have been associated with ephedra use. In our review, we identified 14,684 call records that contained reports of at least one adverse event among consumers of Metabolife 356. Within these call records, we found 92 reports of serious adverse events (heart attacks, strokes, seizures, and deaths), a count that was similar to that of other reviews of the call records. In addition, the call records contain reports of serious adverse events in consumers who were young and among those who used the product within the recommended guidelines. These findings are consistent with reports FDA has received regarding dietary supplements containing ephedra.

In our review of health-related call records for users of Metabolife 356, we found that the information in the call records was limited. Call records were sometimes difficult to read and interpret, and consumer information was not consistently recorded. In some cases, the evidence for a report of an adverse event was limited to a single word on a call record. In other cases, information was entered into a form developed by Metabolife International with multiple boxes for consumer- and event-related information. Most call records did not document complete information about the consumer's age, sex, weight, and height. Because the company did not systematically follow up on calls reporting adverse events, and the adverse events were not reported to FDA, it is not possible to gather more complete information or medical records.

As we reported in March 2003, we identified 14,684 call records that contained at least one report of an adverse event among consumers of Metabolife 356. The types of reported adverse events were consistent with the cardiovascular and central nervous system effects that have been associated with ephedra products in the literature, adverse event reports received by FDA, other case reports, and RAND's review. Within the call records, we identified 92 reports of heart attack, stroke, seizure, and death (see table 1). Our count of reports of these serious adverse events was similar to that of other reviews of the Metabolife International call records, including counts by Metabolife International and its consultants. We also found 1,079 reports of other types of adverse events that FDA identified as serious or potentially serious. These included chest pain, significant elevations in blood pressure, systemic rash, and urinary infection. In addition to these 1,079 reports, we found records that contained reports of a broad range of other types of adverse events, including changes in heart rate such as palpitations and increased heart rate; blood in stool; blood in urine; bruising; hair loss; and menstrual irregularity.

Within the subset of call records that contained information on age, the distribution of ages suggests that a relatively young population was experiencing the reported serious adverse events. Among the call records that contained a report of a serious event, 44 percent included information on age.
For these call records, more than one-third concerned consumers who reported an age under 30; the average reported age was 38, with a range of 17 to 65. As noted above, FDA has also received reports of serious adverse events occurring in a population of young adults. Because we do not know the age profile of all Metabolife 356 consumers, we cannot determine whether the age distribution among those reporting serious adverse events in the Metabolife International call records reflects that profile.

Within the subset of Metabolife International call records that contained information on how the product was used, most of the reported serious adverse events occurred among consumers who reported using the product within the guidelines on the Metabolife 356 label; that is, they reported that they did not take more of the product, or take it for a longer period, than recommended. Information about product use, however, was incomplete: only 40 percent of the call records that reported a serious event contained information about the amount of Metabolife 356 used, and only 55 percent contained information about the duration of use. Among the call records that reported a serious adverse event and also contained information about product use, 97 percent of consumers reported using an amount of product within the recommended guidelines. Similarly, 71 percent of those consumers reported using the product for a length of time within the recommended guidelines. This pattern is consistent with findings from FDA's review of adverse events associated with ephedra products.

As part of its oversight of dietary supplements, FDA has taken some actions specifically focused on dietary supplements containing ephedra. FDA has issued warnings to firms about improper product labeling, issued warnings to consumers, and issued a proposed rule in 1997 that, among other things, would require a health warning on the label of dietary supplements containing ephedra and prohibit a dietary supplement from containing both ephedra and a stimulant. However, parts of this rule remain under consideration 6 years after it was first proposed.

As we previously reported, FDA has focused its enforcement actions regarding dietary supplements on improper labeling. For example, in February 2003, FDA issued warning letters to 26 firms that sell dietary supplements containing ephedra. All of these letters advised marketers that label claims for enhancement of physical performance were unsubstantiated and that the products were therefore misbranded.

FDA and HHS have also directly warned consumers about the safety of dietary supplements containing ephedra. In February 1995, FDA issued a press release warning consumers about a specific dietary supplement product that contained both ephedra and caffeine, because it had determined that the product represented a threat to public health. Further, in February 2003, the Secretary of HHS issued a statement cautioning people against using dietary supplements containing ephedra and indicated that FDA continues to have serious concerns about the risks of these dietary supplements.

FDA has also taken actions in its oversight of dietary supplements in general. Specifically, FDA has conducted facility inspections and proposed good manufacturing practice (GMP) regulations that focus on product quality in general, not the safety of an individual ingredient. FDA first issued a proposed rule to regulate dietary supplements containing ephedrine alkaloids in 1997.
The proposed rule would (1) define the amount of ephedrine alkaloids in a serving of a dietary supplement at and above which the product would be deemed adulterated (8 milligrams); (2) establish labeling requirements regarding maximum frequency of use and daily serving limits; (3) require that labels on these supplements contain a statement warning that the product should not be used for more than 7 days; (4) prohibit the use of ephedrine alkaloids with ingredients that have a known stimulant effect (e.g., caffeine); (5) prohibit labeling claims that promote long-term intake of the supplements to achieve the purported purpose; (6) require a warning statement in conjunction with claims that encourage short-term excessive intake to enhance the purported effect; and (7) require that specific warning statements appear on product labels.

Our 1999 report on the proposed rule was critical of the science FDA used to support the serving size and duration-of-use limits in the proposed rule. However, we did not conclude that dietary supplements containing ephedra were safe, and we commented that the adverse events reported to FDA were serious enough to warrant FDA's further investigation of ephedra safety. Primarily, we were concerned that FDA used only 13 adverse event reports to establish serving limits and had weak support for proposed limits on duration of use. Partly as a result of our review, FDA withdrew the sections of the proposed rule on serving size and duration-of-use limits.

In the interim, FDA has taken action to regulate certain drugs that contain ephedrine, the active ingredient in ephedra. In September 2001, FDA issued a final rule stating that certain over-the-counter drugs containing ephedrine and related alkaloids in combination with an analgesic or stimulant could not be marketed as over-the-counter drugs. There currently is no similar rule prohibiting the marketing of dietary supplements containing ephedra in combination with analgesics or stimulants, such as caffeine. As a result, dietary supplements may contain combinations of ingredients that are prohibited in drugs; in fact, many dietary supplements with ephedra, such as Metabolife 356, also include caffeine. The proposed rule contains a provision that would prohibit dietary supplements from containing both ephedra and other stimulants.

In March 2003, almost 6 years after the initial proposal, FDA reopened the comment period for the remaining provisions of the proposed rule for 30 days. FDA sought comments on three areas: (1) new evidence on health risks associated with ephedra; (2) whether the currently available evidence and medical literature demonstrate that dietary supplements containing ephedra pose a "significant or unreasonable risk of illness or injury" under the conditions of use recommended or suggested in their labeling, or under ordinary conditions of use if there are no suggestions in the labeling; and (3) a new warning label for ephedra products that warns about reports of serious adverse events after the use of ephedra, including heart attack, seizure, stroke, and death; cautions that the risk can increase with the dose, with strenuous exercise, and with other stimulants such as caffeine; specifies certain groups (such as women who are pregnant or breast-feeding and persons under 18) who should not use these products; and lists other diseases, such as heart disease and high blood pressure, that should rule out the use of ephedrine alkaloids.
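To make the proposed rule's product-level provisions concrete, the sketch below encodes three of them as simple checks. This is a minimal sketch under stated assumptions: the record layout, the stimulant list, and the function itself are hypothetical constructs for illustration, not FDA's methodology; only the 8-milligram threshold, the 7-day warning, and the caffeine example come from the rule as summarized above.

```python
# Hypothetical check of a product description against three of the 1997
# proposed rule's provisions, as summarized above. Illustrative only.

KNOWN_STIMULANTS = {"caffeine"}  # the rule names caffeine as an example

def proposed_rule_findings(product):
    """Return a list of provisions the hypothetical product would violate."""
    findings = []
    if product["ephedrine_alkaloids_mg_per_serving"] >= 8:
        findings.append("8 mg or more of ephedrine alkaloids per serving: "
                        "product would be deemed adulterated")
    if KNOWN_STIMULANTS & set(product["other_ingredients"]):
        findings.append("combines ephedrine alkaloids with a known stimulant")
    if not product["label_warns_7_day_limit"]:
        findings.append("label lacks the 7-day duration-of-use warning")
    return findings

sample = {"ephedrine_alkaloids_mg_per_serving": 12,
          "other_ingredients": ["caffeine", "green tea extract"],
          "label_warns_7_day_limit": False}
for finding in proposed_rule_findings(sample):
    print(finding)
```

A product like the hypothetical sample above, with 12 milligrams of ephedrine alkaloids per serving combined with caffeine and no duration warning, would fail all three checks, mirroring the ephedra-plus-stimulant combinations that the rule's remaining provisions target.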
On July 14, 2003, FDA reported to us that the agency is in the process of reviewing the comments and has not reached a decision regarding further action. While FDA has not attempted to ban the marketing of dietary supplements containing ephedra, the agency has sought, in requesting these comments, additional information that would help it determine whether such action would be warranted.

Because the regulatory framework for dietary supplements is primarily a postmarketing program and FDA does not review the safety of dietary supplements before they are marketed, adverse event reports are important sources of information about the health risks of dietary supplements containing ephedra. It is often difficult to demonstrate conclusively that a single reported adverse event was caused by ephedra, but some individual reports, particularly when they are complemented by follow-up investigation of the case, can be especially informative. Although the information in the Metabolife International call records we examined was limited, the types of adverse events we observed were consistent with the known risks of ephedra, including serious events such as five reports of death. Based on the pattern of adverse event reports FDA has received and the consistency of those reports with the known effects of ephedra from the scientific literature, the agency concluded 3 years ago that dietary supplements containing ephedra pose a "significant public health hazard." FDA is currently reviewing information that will help the agency determine what further actions are warranted.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For more information regarding this testimony, please call Marcia Crosse at (202) 512-7119. Key contributors include Martin T. Gahart, Carolyn Feis Korman, Chad Davenport, Roseanne Price, and Julian Klazkin.

The following notes accompany appendix I, which compares the mechanisms FDA uses to oversee dietary supplements and the other products it regulates, including mandatory manufacturer reporting of adverse events. Under the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, Pub. L. No. 107-188, 116 Stat. 594, manufacturers and distributors are required to register with FDA no later than December 13, 2003. Monograph drugs are typically over-the-counter drugs that must adhere to specific safety standards set for each ingredient and do not undergo clinical testing. New Drug Applications must be submitted to FDA for all prescription drugs and some over-the-counter drugs prior to marketing; the application must include data that demonstrate the safety and efficacy of the product.
Dietary supplements containing ephedra have been associated with serious health-related adverse events, including heart attacks, strokes, seizures, and deaths. The Food and Drug Administration (FDA) regulates dietary supplements under the Dietary Supplement Health and Education Act of 1994 (DSHEA). Reports of adverse events have been received by FDA and others, including Metabolife International, the manufacturer of a dietary supplement containing ephedra, Metabolife 356. Because of concerns surrounding the safety of dietary supplements containing ephedra, GAO was asked to discuss and update some of the findings from its prior work on ephedra, including its examination of Metabolife International's records of health-related calls from consumers of Metabolife 356. Specifically, GAO examined (1) FDA's analysis of the adverse event reports it received for dietary supplements containing ephedra, (2) how the adverse events reported in the health-related call records collected by Metabolife International illustrate the health risks of dietary supplements containing ephedra, and (3) FDA's actions in the oversight of dietary supplements containing ephedra. FDA has used the adverse event reports it has received to conclude that dietary supplements containing ephedra pose a significant public health hazard. Since February 1993, FDA has received 2,277 reports of adverse events associated with dietary supplements containing ephedra, 15 times more reports than it has received for the next most commonly reported herbal dietary supplement. The types of adverse events that GAO identified in the health-related call records from Metabolife International were consistent with the types of adverse events reported to FDA and with the documented physiological effects of ephedra. Although call records contained limited information for most of the reports, GAO identified 14,684 call records that had reports of at least one adverse event among consumers of Metabolife 356. GAO's count of 92 serious events--heart attacks, strokes, seizures, and deaths--was similar to that of other reviews of the call records, including counts by Metabolife International and its consultants. Many of the serious events were reported among relatively young consumers--more than one-third concerned consumers who reported an age under 30. In addition, for call records containing information on the amount of product consumed or length of product use, GAO found that most of the reported serious adverse events occurred among consumers who followed the usage guidelines on the Metabolife 356 label. As part of its oversight of dietary supplements, FDA has taken some actions specifically focused on dietary supplements containing ephedra. FDA has issued warnings that focus on improper labeling, issued warnings to consumers, and issued a proposed rule in 1997 that, among other things, would require a health warning on the label of dietary supplements containing ephedra and prohibit a dietary supplement from containing both ephedra and a stimulant. FDA subsequently banned the sale of certain classes of over-the-counter drugs containing ephedrine and related alkaloids--the active ingredient in ephedra--in combination with an analgesic or stimulant. As the 1997 proposed rule has not been finalized, there is no rule prohibiting the marketing of dietary supplements with similar ingredients, and many dietary supplements with ephedra, such as Metabolife 356, also include caffeine or other stimulants. 
FDA recently reopened the comment period for the proposed rule to receive comments on new evidence, and it reported to GAO that it is reviewing the comments received and has not reached a decision regarding further action.